Day 0 Event #236 EU Rules on Disinformation Who Are Friends or Foes

Session at a glance

Summary

This Internet Governance Forum session focused on identifying allies and challenges in combating disinformation while protecting freedom of expression. The discussion brought together representatives from European institutions, fact-checking organizations, and civil society to examine the complex landscape of information integrity.


Paula Gori from EDMO (European Digital Media Observatory) opened by highlighting the dual challenge facing democracies: the spread of disinformation through various channels, including AI-generated content, and the growing rhetoric against policy frameworks designed to address disinformation. She emphasized that effective responses must be grounded in fundamental rights while focusing on algorithmic transparency and multi-stakeholder approaches rather than content deletion.


Benjamin Shultz from the American Sunlight Project described a deteriorating situation in the United States, where democracy is backsliding and platforms are moving closer to the administration. However, he offered hope through the recent bipartisan success in banning non-consensual deepfake pornography, suggesting that collaboration on specific issues with broad support could help maintain transatlantic cooperation.


Nordic representatives Mikko Salo from Finland and Morten Langfeldt Dahlback from Norway provided regional perspectives on the challenges. Salo emphasized the urgent need for AI literacy and teacher training, particularly in trust-based Nordic societies. Dahlback raised three critical concerns: deteriorating access to platform data for research, the delicate balance between independence and government cooperation for fact-checkers, and the shift from observable public disinformation to private AI chatbot interactions that fact-checkers cannot monitor.


Alberto Rabbachin from the European Commission outlined the EU’s comprehensive framework, including the Digital Services Act and the Code of Practice on Disinformation, which now covers 42 signatories with 128 specific measures. He stressed that the EU supports independent fact-checking organizations rather than determining what constitutes disinformation itself.


The discussion concluded with recognition that the battle against disinformation is evolving from reactive fact-checking toward proactive media literacy and user empowerment, as AI makes the information landscape increasingly complex and personalized.


Key points

## Major Discussion Points:


– **The complexity of disinformation as a global phenomenon**: Speakers emphasized that disinformation is not a simple problem with easy solutions, involving state and non-state actors, AI-generated content, and targeting various issues like elections, health, climate change, and migration. The phenomenon creates doubt and division in society while eroding information integrity essential for democratic processes.


– **Regulatory approaches and the tension between content moderation and freedom of expression**: The discussion covered various policy frameworks including the EU’s Digital Services Act, UNESCO guidelines, and the Global Digital Compact. There’s ongoing debate about balancing disinformation countermeasures with protecting fundamental rights and free speech, with speakers noting that emotional rhetoric often overshadows factual assessment of these policies.


– **Transatlantic divergence and changing political landscape**: Speakers highlighted growing differences between US and European approaches to platform regulation and content moderation, particularly following recent political changes in the US. This includes concerns about democratic backsliding, reduced cooperation between platforms and fact-checkers, and threats to research access.


– **The shift from reactive fact-checking to proactive media literacy**: Multiple speakers discussed the evolution from traditional fact-checking and content debunking toward empowering users with digital and AI literacy skills. This shift is driven partly by the rise of AI chatbots that generate personalized responses invisible to external fact-checkers.


– **Challenges in understanding the scope and impact of disinformation**: Speakers noted difficulties in measuring the actual extent of disinformation due to limited platform transparency, reduced research access, and the complexity of distinguishing disinformation from the broader information ecosystem. This knowledge gap hampers effective policy responses.


## Overall Purpose:


The discussion aimed to examine the current landscape of internet governance and disinformation, identifying key stakeholders (“friends and foes”) in the fight against false information while exploring policy approaches, challenges, and future directions for maintaining information integrity in democratic societies.


## Overall Tone:


The discussion maintained a professional but increasingly concerned tone throughout. It began with a comprehensive, somewhat optimistic overview of existing frameworks and cooperation mechanisms, but gradually became more sobering as speakers addressed current challenges including political polarization, regulatory divergence, and technological complications from AI. While speakers acknowledged significant obstacles, they maintained a constructive approach focused on finding solutions and maintaining international cooperation despite growing difficulties.


Speakers

**Speakers from the provided list:**


– **Moderator (Giacomo)** – Session moderator organizing the discussion on Internet Governance and disinformation


– **Paula Gori** – Secretary General of EDMO (European Digital Media Observatory), the body tasked by the European Union with fighting disinformation


– **Benjamin Shultz** – Works for American Sunlight Project, a non-profit based in Washington D.C. that analyzes and fights back against information campaigns that undermine democracy; currently based in Berlin


– **Mikko Salo** – Representative of Faktabari, a Finnish NGO focused on fact-checking and digital information literacy services; part of the Nordic hub within the EDMO network


– **Morten Langfeldt Dahlback** – From Faktisk, the Norwegian fact-checking organization jointly owned by major Norwegian media companies, including the public and commercial broadcasters; coordinator of NORDIS (the Nordic hub of EDMO)


– **Alberto Rabbachin** – Representative from the European Commission


– **Audience** – Multiple audience members who asked questions during the Q&A session


**Additional speakers:**


– **Eric Lambert** – Mentioned as being present to make the report of the session, described as “an essential figure” working “behind the scene”


– **Lou Kotny** – Retired American librarian who asked a question about EU bias regarding the Ukraine war


– **Thora** – PhD researcher from Iceland examining how large platforms and search engines undermine democracy; research fellow at the Humboldt Institute


– **Mohamed Aded Ali** – From Somalia, part of the RECIPE programme, asked about recognizing AI propaganda and digital integrity violations


Full session report

# Internet Governance Forum Session: Combating Disinformation – Identifying Allies and Challenges


## Executive Summary


This Internet Governance Forum session brought together European policymakers, fact-checking organisations, and civil society representatives to examine the evolving landscape of disinformation and information integrity. Moderated by **Giacomo**, the discussion featured perspectives from EDMO, Nordic fact-checking organisations, the American Sunlight Project, and the European Commission.


The session highlighted the complexity of addressing disinformation while protecting fundamental rights, with speakers discussing challenges ranging from AI-generated content to platform transparency and the need for enhanced media literacy. Key themes included the evolution from reactive fact-checking to proactive education approaches, concerns about research access to platform data, and the importance of maintaining independence while fostering multi-stakeholder cooperation.


## Opening Framework and Context


**Paula Gori**, Secretary General of EDMO (European Digital Media Observatory), opened by characterising disinformation as a phenomenon that “creates doubt and division in society” while eroding the information integrity essential for democratic decision-making. She noted that disinformation manifests across multiple domains – elections, health, climate change, and migration – involving both state and non-state actors, including increasingly sophisticated AI-generated content.


Gori outlined EDMO’s structure as an EU-wide platform coordinating 14 national and multinational hubs that cover all EU member states, allowing the network to combine local expertise with pan-European analysis and comparison. She also described EDMO’s role in providing tools such as trainings and repositories of fact-checking articles, and in supporting joint investigations, media literacy initiatives, and policy analysis.


She positioned EDMO’s approach within broader global frameworks, including UNESCO guidelines and the Global Digital Compact, emphasising fundamental rights, algorithmic transparency, multi-stakeholder approaches, and risk mitigation rather than content deletion. Gori also highlighted concerning rhetoric against policy frameworks designed to address disinformation, noting that “emotional rhetoric often overshadows factual assessment of these policies.”


## Nordic Perspectives: Trust, Education, and Evolving Challenges


During the Nordic part of the discussion, the moderator **Giacomo** asked whether participants were “more afraid of your neighbours or your supposed friends,” prompting responses about regional security dynamics.


**Mikko Salo** from Faktabari in Finland responded by referencing Finland’s 50-year history of preparedness with neighbors, then emphasised the urgent need for AI literacy, particularly in trust-based Nordic societies. He introduced the concept of “AI native persons,” questioning how people who grow up with AI will develop critical thinking skills. His central argument was that “people need to develop AI literacy and learn to think critically before using AI tools.”


Salo also raised questions about societal investment in information integrity, referencing security spending and suggesting that cognitive security deserves significant attention as part of whole-of-society security approaches.


**Morten Langfeldt Dahlback** from Faktisk in Norway identified three critical concerns challenging current approaches to combating disinformation:


First, he highlighted deteriorating access to platform data for research, noting that “major platforms are limiting researcher access to data, with research APIs being more restricted than expected.” He expressed concern that “we don’t know enough about the scope of the problem, and we don’t know enough about its impact,” while “the conditions for gaining more knowledge about this problem have become worse.”


Second, Dahlback addressed the balance between independence and government cooperation for fact-checkers, observing that “once our objectives are aligned with the objectives of governments and of other regulatory and official bodies, it’s easy for others to throw our independence into doubt, because the alignment is too close.”


Third, he identified the shift from observable public disinformation to private AI chatbot interactions that fact-checkers cannot monitor. He explained that “when you use chatbots like ChatGPT or Claude, the information that you receive from the chatbot is not in the public sphere at all,” making traditional fact-checking approaches obsolete. This led him to suggest “a transition from more debunking and fact-checking work like what we’ve been engaged in so far to more literacy work.”


## Transatlantic Perspectives and Political Challenges


**Benjamin Shultz** from the American Sunlight Project described the deteriorating situation in the United States, characterising it as “democratic backsliding” with platforms moving closer to the administration. He described the current environment as one where “bad actors are becoming more active in spreading information campaigns that undermine democracy and tear at social fabric.”


However, Shultz offered a pragmatic path forward through recent bipartisan success in banning non-consensual deepfake pornography. He argued that “small steps like these that have been taken in the states that do have broad support” could maintain transatlantic cooperation despite broader political tensions.


## European Union Policy Framework and Implementation


**Alberto Rabbachin** from the European Commission provided an overview of the EU’s regulatory approach, emphasising that European frameworks focus on algorithmic transparency and platform accountability rather than content censorship. Citing recent Eurobarometer surveys conducted ahead of the European elections, he noted that 38% of Europeans consider disinformation one of the biggest threats to democracy and that 82% see it as a problem for democracy.


Rabbachin outlined the Digital Services Act as “pioneering regulation that addresses disinformation while protecting freedom of expression by focusing on algorithm functioning rather than content.” He stressed that the EU supports independent fact-checking organisations rather than determining what constitutes disinformation itself, noting that “the EU supports an independent, multidisciplinary community of more than 120 organisations whose fact-checking work is completely independent from the European Commission and governments.”


He detailed the evolution of the Code of Practice on Disinformation, which has grown from 16 signatories with 21 commitments to 42 signatories with 43 commitments and 128 measures. He announced that this code would be fully integrated into the DSA framework as of July 1st, making it auditable and creating binding obligations for platform signatories.


Regarding research access to platform data, Rabbachin acknowledged the challenges while pointing to an upcoming delegated act designed to improve researcher access. He also announced that the EDMO network will soon grow from 14 to 15 hubs, with a new hub covering Ukraine and Moldova.


## Audience Engagement


The audience questions revealed additional concerns within the broader community working on information integrity issues.


**Thora**, a PhD researcher from Iceland, highlighted ongoing problems with academic access to platform data, noting that “large platforms are dragging their feet on providing academic access, claiming the EU needs to make definitions first.”


**Mohamed Aded Ali** from Somalia raised questions about recognising AI propaganda and digital integrity violations, highlighting the global nature of these challenges.


**Lou Kotny**, a retired American librarian, raised concerns about potential EU bias regarding the Ukraine war, introducing questions about how fact-checking organisations maintain objectivity in politically charged environments.


## Key Themes and Challenges


Several important themes emerged from the discussion:


**Shift Toward Media Literacy**: Multiple speakers emphasised the growing importance of media literacy and critical thinking education, with some suggesting this represents a necessary evolution from traditional fact-checking approaches.


**Platform Transparency Concerns**: Both researchers and fact-checkers expressed frustration with decreasing access to platform data needed for understanding and addressing disinformation.


**Independence vs. Cooperation**: The tension between maintaining organisational independence while cooperating with government initiatives emerged as a significant concern for civil society organisations.


**AI Challenges**: All speakers acknowledged that AI is fundamentally changing the disinformation landscape, making detection more difficult and requiring new approaches, particularly regarding private AI interactions that are not publicly observable.


**Local Context**: Speakers emphasised that disinformation responses must account for local cultural, political, and linguistic contexts.


## Conclusion


The session demonstrated the complexity of addressing disinformation while protecting fundamental rights and democratic values. While speakers agreed on the importance of multi-stakeholder cooperation and media literacy, significant challenges remain around platform transparency, maintaining organisational independence, and adapting to new technologies.


The moderator concluded by noting that information integrity is becoming increasingly important and announced an upcoming session by the BBC and Deutsche Welle on how public service media can help address some of the problems discussed. **Eric Lambert** was mentioned as the session’s rapporteur.


The discussion revealed a field grappling with fundamental changes in how information is created and consumed, particularly the shift from public, observable disinformation to private AI interactions that traditional oversight mechanisms cannot monitor.


Session transcript

Moderator: Good morning. Good morning, everybody. Thank you for being so kind to be here so early in the morning, after a long trip to here, and to be for this session that will be about, as you have seen from the title, trying to understand who are the friends and who are the foes in this very complicated and unclear situation for the Internet Governance, and especially for the fight to disinformation. It’s a session that will have some participants with me here, from Nordic countries, one from Finland, another one from Norway, but also we will have other participants from remote. will be from Brussels, we will have somebody from the European Commission and we will have somebody from the, based in Berlin at the moment, but is one of the most active person in the fact-checking and countering disinformation in the U.S., and we will have Paola Gori that will open, that is from the EDMO, this is the Secretary General of EDMO, that is the body that the European Union has tasked for fighting disinformation. So, I think that we don’t have too much time, I would prefer that we start immediately. If Paola is ready, I will give the floor to her. Hello, good morning. Are you ready? Yes, she’s with us. Welcome, Paola. You look frozen.


Paula Gori: I guess you’re hearing me and are sharing also a quick presentation. Can you hear me? Yes, we can hear you. Can you hear me well? Yes, well, but we don’t see the presentation yet. She is coming. I see it on my screen, so just let me know when you see it. Yes, now we can see the first slide, it’s okay. Okay, great. Thank you very much, Giacomo. And good morning, everyone. I’m very happy to be in this, to start the day, actually, this day zero with this session. As Giacomo was saying, the overall IGF focus is on internet governance and within this topic, of course, disinformation is creating quite a few, quite a lot of, if you want, emotional reactions around. Also, in past editions of the IGF, and I think everybody here, what I just wanted to bring up here today, again, is this situation. which we are aware on one side, we have the spread of disinformation now. Be it from internal or from external actors, they are called different policies, big migration, climate change, elections, of course, health. They’re often very linked. For example, the same goes, for example, with disinformation and so on. And it can be spread internally and externally and by internal and external actors. It can be state-backed, it can be also not state-backed. There can be the use of proxies. There can be the use of artificial intelligence, both to generate content, but also to spread it. These are all things that I think we all know, as well as the fact that disinformation is there in a broader mission of creating doubt, creating division in our society, put us in a situation in which at a certain moment, we don’t actually are in a position to really be sure about stuff because we got so many information with so different facts or non-facts, actually. And this puts us in a very difficult situation overall. And this erodes, of course, the information integrity. Information integrity is key in a democratic process because, let’s put it in a very simple and easy way, if we want to take any decision, we have to have a basis on which we can make this decision. So if the basis is actually not based on facts, then we are in a situation in which we may make a decision which is not in our interest in the end. On the other side, what we are seeing more and more is a huge rhetoric against any policy framework that tries to tackle disinformation. One of the main arguments at the basis of this is the fact that it may violate freedom of expression, which if you look at it from a very neutral point of view, it’s a very fair concern because it is very important that whichever policy that deals with disinformation respects fundamental rights. fundamental rights and also freedom of expression, but the rhetoric that we’re seeing there is actually more if you want an emotional one rather than a rhetoric which actually looks at the real framework and then actually does a real assessment of whether freedom of expression is violated or not, because very often actually it is not. And the two reinforce each other. And this is something we are seeing globally. So I’m just setting the scene in a very global way. What we are seeing as approaches, and of course EDMO, and I will say a few words about EDMO. Those who are familiar with the IGF are also familiar with EDMO so far, because it’s not the first session we’re having here, is that whichever response to get back to information integrity starts of course with digital literacy, media literacy, with strengthening quality journalism and so on. 
And if you look at the global frameworks that we have around, like the Global Digital Compact, the guidelines by UNESCO on the governance of digital platforms, the recent communication by the High Representative and the European Commission on the communication on international digital strategy for the EU, and also the Digital Service Act, which is a regulation, there are a few elements which are common there, which are the fact that any response, as we were saying before, has to be grounded on fundamental values and the respect of human rights. We cannot transcend from it. It has to happen. The focus is rather on algorithmic and transparency. There should be a multi-stakeholder approach. This is, I think the IGF is actually one of the responses to that, right? So it’s really a multi-stakeholder level. It is based on risk mitigation, which means that it looks at the risk, at the way that the platforms, for example, or some online actors work, could have on certain elements, for example, public health, minors, civic discourse, and so on. So just to remind us that the focus is not on we delete content, we look at content, but rather on… we look at if the way the platforms work can actually be abused for malign purposes. So you just wanted to set the scene in highlighting these differences. And the instruments that I was mentioning earlier, I think show that we are all going into that direction so that the global principles overall are those. And then of course, the regional specificities, they rightly so also have differences. And this is normal. I don’t think we will ever, ever get to something which is global in this sense, but this is fine. As long as the principles are shared and the principles are all agreed, then I think that it is important to keep regional specificities also because, especially when it comes to this information, it is a global phenomenon, but the local characteristics are playing quite a strong role. Now, I will not go into this slide, but I just wanted to show these two slides. This is one is climate change is information. The next one will be on the economics of this information. Just to show how complicated it is to navigate the disinformation sphere. It’s not just one problem that is easy to understand and with an easy solution. This makes it very complicated, but probably also very interesting for everybody involved to try to address it. And I will not go through it as I was saying, but just, I wanted to just. So with this, in the interest of time, I will just, sorry? Can you repeat the last phrase, you break up? Yeah, sorry. So I just wanted to say that I was showing these slides, not to go through them because we don’t have time, but just to show how complex the disinformation phenomenon is. And by consequence, how complex it is also to find a solution. So I think it’s not by chance that it is years and years that we’re all together sitting, also sometimes disagreeing in trying to find a solution because the problem itself is complex and we cannot always simplify. complex situations, like in the case of disinformation, and you cannot simplify it precisely because human rights are at stake. So before giving the floor to our next panelist, I just wanted to recap for those who are not familiar with Edmo, what is Edmo doing and why was I showing all this complexity? 
Because the complexity brings us to a situation in which we have to understand properly the phenomenon in order to come with solutions, and the solutions cannot be just one solution, it’s a mix of different solutions. And what Edmo is doing, Edmo is funded by the European Commission and is one of the pillars of the response to disinformation, it’s precisely that. We are a sort of a platform that brings together the different stakeholders, it’s sort of like what the IGF is doing more generally on internet governance, we are bringing them all together. When possible, we are trying to provide tools like trainings or like repositories of fact-checking articles and so on. And by putting the community together, we are also in a position to find common trends, to do investigations, to do joint media literacy initiatives, to do policy analysis. So how are we doing it? Just to say that we have an Edmo platform, if you want, which goes EU-wide, and then we work with 14 hubs, which are national or multinational, they cover all EU member states. And these are key, because you remember what I was saying at the beginning, we cannot avoid looking at the local specificities when it comes to disinformation. Very easily said, the culture, the policy, the politics, the history, the language, the media diet of a country are actually having an impact on whether disinformation is impactful or not, if it is entering a country or not, and so on. So we really need the local element to be there, otherwise we would miss part of the picture. These hubs working all together under our coordination also allow us, as you can imagine, to do pan-European analysis, pan-European comparison, and so on. So I hope I was… clear enough to somehow set this scene. I started with the global element and then I focused a little more on the EU, and our next speakers will continue in this sense, and I think I can give it


Moderator: over to Benjamin Schultz. Thank you very much, Paula. Yes, from Europe you make a very comprehensive panorama. Now we go to the US. Benjamin, American Sunlight, can you introduce yourself? Yes. Am I coming through clear? No, no, you can go now. Oh, okay. Yeah. Is the audio okay? Yes,


Benjamin Shultz: please. Wonderful. Well, thank you so much, Giacomo, Paula. I saw Miko and Martin there on the screen. It’s great to be back here with you all at the IGF. This is a really wonderful gathering and I think a great place for dialogue, for understanding, for discussing the issues of the day and really remembering just how global and borderless and connected the internet makes us all. And of course, that leaves ample opportunity for bad actors to misuse the internet and all of its wonderful technologies to spread disinformation. My name is Ben. I work for the American Sunlight Project, a non-profit based in Washington, D.C., although I’m based in Berlin at the moment. And we analyze and fight back against information campaigns that undermine democracy and pollute our information environment. It’s no secret that in the US a lot has changed in the last six months. Things have shifted. We’ve noticed. Things have shifted greatly. And we’ve seen, just putting it frankly, democracy begin to backslide in the United States. We’ve seen bad actors become more active than ever in spreading information campaigns and using information operations to tear at the social fabric of the US. And we’ve also seen the platforms move closer and closer to the administration. really in a total sea change from the last four years and even the four years before that. We’ve seen people be denied entry to the US based on having critical text messages of the administration, something that really as an American, I thought I would never see happen to my country. And so in this day and age in which content moderation, the removal of harmful or illegal content online is being equated falsely to censorship, to a violation of the right to free speech, free expression, in order to really make progress on making our internet safer and continuing the work that we all do, we have to really start to reframe how we approach this. We have to start to think about new ways, new creative ways to maintain the alliance, the Transatlantic Alliance in these rough times. And so in the preparatory call that we all hopped on for this panel, I was told not to be so negative. So I’m gonna cut myself off there on the bad and we’re gonna shift to the good. And I’m gonna tell you all kind of how I’m approaching this reframing. As someone working in this space, someone whose organization has been called evil by a certain person that runs X and so forth, there’s some work we can do, I think, to maintain the progress that we’ve made in making the internet a safer, better place. Recently in the US, non-consensual explicit deepfakes, colloquially known as deepfake porn, have actually been made illegal. And this is a really groundbreaking achievement, advancement in our country. And it’s something that we’ve done a lot of advocacy work for a really long time. And finally, just in the last months, we had enough votes in Congress to make this happen. And this achieved wide bipartisan. support. And the way that we framed this is we actually showed Congress just how affected by this problem they were too. A lot of times our elected officials, you know, putting it frankly, maybe aren’t keeping up as in the weeds as we are with all of the things happening online. You know, they’re busy people, fair enough. 
And one action that we took was we wrote a report in which we laid out just in very plain terms how Congress was being affected by this problem, how people online of all ages, particularly young women, were being affected by this problem of being depicted in deepfakes. And we were able to push a bill over the finish line and it was signed recently. And now platforms have to take down deepfake videos after receiving a request from a victim within 48 hours. You know, there’s been plenty of criticism of this bill. It’s not perfect, but it was a really, it’s been a big step forward. And I think we’re going to get into a little bit more of this later on, on this panel, on, you know, the varying degrees of regulation in different European countries. Of course, Europe is a big continent. The EU is big 20, 26, 7, you know, plus a few more in the EEA member states. And there’s a lot of conflicting values and arguments around regulating content online. But my hope amidst all of the not-so-nice things happening in the U.S. right now and the, you know, unfortunate degradation of the transatlantic relationship, my hope is that with small steps like these that have been taken in the states that do have broad support, such as banning explicit deepfakes that are made non-consensually, my hope is that collaborating on these issues that Europe and the U.S. and countries all around the world can continue the dialogue and continue to make some progress on keeping the internet safe and making it safer. And so with that, I will stop myself and pass. it back to you, Giacomo, and the panel can continue and I’m sure we’ll have some good discussion coming up. Thank you very much, Benjamin. Just one question, you moved to Berlin before November or after November? I moved in January. The timing just sort of worked out, but you know.


Moderator: Very timely. Yeah. I can understand. Okay, thank you. I think that, I hope that we will have time for questions. I remember that there is a mic over there. As soon as we finish with the presentation, we will discuss with the audience, because I think there are questions that are coming. So, who’s next? Okay. Mikko, please introduce yourself. Thank you. You are one of the members


Mikko Salo: of the network that Edmo just presented us. So, my name is Mikko Salo. I’m representing a Finnish NGO around fact-checking and digital information literacy service, Faktabari. We are part of a Nordis, that is part of the Edmo, a kind of Nordic hub, and we’re working with Morten on that one. I probably kind of opened it up a little bit, my angle, civil society point of view, whereas I understood Morten is more like the journalistic side that we are working on. But yeah, indeed, very, very challenging times. We started 11 years ago, it was still like accuracy. I think now it’s more about the integrity of the information, and when you are coming from a country with a Finland that is now praised for its preparedness culture, so I try to phrase it like, where do we need to prepare now? And I think it’s very much to the information integrity and the kind of AI literacy that is very, very urgently needed. And there, our small NGO has been working with actually government officials, pushing them to get the kind of, retrain the teachers, and then providing guidance to teachers that are very lost, of course, with the AI at the moment. And why am I so worried as an organization, starting from fact-checking, is that what is happening to our information, what is happening to our sources? Do people really anymore know where the information stems from, and what kind of consequences it has, especially in trust-based societies like the Nordic societies? And so, these are big challenges. But what gives me some hope is that I can say that we are happy to be part of the EU context, that there is at least some sort of rulebook for the internet that is badly broken. There is like a raising awareness that we need to kind of know something. I think, as we’re speaking, they are currently in Hague actually framing what is security at the moment, and I would talk about the cognitive security at the moment. And then we are talking about the famous five percent of investment. investment to security, but now what I’m referring to is the 1.5, the whole of society’s security and the information integrity. And I think that’s the frame that we should be talking. In general, the media education investments are all over the world pretty non-existing at the moment. So there is a lot to improve, at least at the moment. Finland is apparently performing the best as we are doing it. If I would invest something now, and what we are trying to do in Finland is exactly going back to the basics, is still the children, the next generation. I think that’s where we have to find some sort of protection and ensure that before they first need to… I mean, this sounds kind of crazy, but they need to be able to think before they use AI. And I was just framing and I was actually asking the chat, how does it look like an AI native person? Because if we are not able to think ourselves, we are not able to use the AI as it’s meant at the moment. So I would perhaps leave you with these thoughts about the importance of the education and the possibilities that we have in empowering the teachers in different societies to at least address the youngsters for the information integrity. Thank you. Thank you very much, Mikko. Are you more afraid of your neighbours or your supposed friends? We are not afraid of our neighbours. We are prepared and there is a 50 years of history of that one. But we are, I mean, everybody has a lot to do with this information side and it’s very mental, so to say. And I think nobody’s too prepared for that one. 
And this is a new battlefield and we just need to take it calm and try to progress. And that’s why the IGF is doing very important work to keep a kind of internet somehow in place.


Moderator: Thank you very much. So before to give the floor to Morten, that is the next speaker, I want to remember that we have with us also Eric Lambert that will make the report of this and it’s an essential figure. He’s not with us, but he’s behind the scene. Morten, your organisation is partially also owned by the National Public Service Broadcaster.


Morten Langfeldt Dahlback: Among others, yes. So my name is Morten. I’m from Faktisk, the Norwegian fact-checking organisation. We’re jointly owned by all of the major media companies in Norway, including the public broadcaster and also the commercial public broadcaster, yes. So I’m going… I’m going to talk about three issues that I think are important in this context. So I’m both part of Faktisk, the fact checker, but I’m also the coordinator of NORDIS, so the hub of Edmo that Mikko and Faktabari is also part of. And the first point I want to raise is that we talk about disinformation and misinformation here. I think one of the core challenges that we face in responding to this problem is that we don’t know enough about the scope of the problem, and we don’t know enough about its impact, either at least in a lot of domains. And I think the conditions for gaining more knowledge about this problem have become worse over the past few months. So the reason why it’s becoming worse is because of regulatory divergence between Europe and the US. So up until about a year ago, several legislations came into being which were supposed to increase transparency from major tech platforms, forcing them to provide more information to independent fact checkers, but also to researchers. And I think this is one. Except one. Except one, of course. The legislation was supposed to apply to all of them, but X didn’t refuse to be part of the legislation. That is correct. But we see that there were already this last year, there were some, or this year, there were some signs that things were deteriorating when Meta closed down the fact checking program in the US. And we were expecting them to do so in Europe as well. That hasn’t happened, fortunately. But we think these programs that allow us to gain more knowledge about the disinformation phenomenon are probably under threat, which is going to make our life more difficult. But there is a different problem here as well. It’s very hard to, because of the wealth of information that is online in the first place, it’s very difficult to estimate the scope of disinformation there. So you can see when Paula, for example, shows you a model of the disinformation phenomenon, it’s very complex. It has a lot of variables. And it’s very difficult to disentangle just the overall composition of platforms, the algorithms there from disinformation, misinformation specifically. So I think it’s become more difficult to obtain knowledge about this phenomenon. And that hampers the size, the scope of our response. So I think we have a fundamental problem there. It’s probably solvable, but it’s something that worries me. The second thing I want to address is the relationship between policymakers and political bodies and independent actors, like Faktisk, for example, and like Faktabari, now that disinformation and misinformation is, to a greater extent, on the political agenda. So I think, overall, it’s a good thing that both governments and the European Union and others are attempting to limit the impact and the spread of disinformation. But it also places independent actors in a difficult position, because we need to be and maintain our independence from governments and from regulatory bodies in order to do our job and to maintain the trust of our audience. And once our objectives are aligned with the objectives of governments and of other regulatory and official bodies, I think it’s easy for others to throw our independence into doubt, because the alignment is too close.
And this is, I think, a very important problem. I think something that both we as fact checkers and as hubs of Edmo, but also the political bodies need to work out over the next couple of years to figure out what would be the right kind of cooperative coexistence between journalistic organizations that have been at the forefront of the battle against disinformation for years and governmental bodies as well. I think it’s a difficult challenge, but it’s one that we are in the process of addressing. The final point I want to address has to do with something that Mikko just mentioned, which is he asked that GPT to give him some information that would be relevant, pertinent to this session. And I think this to me raises the challenge that we may, when we talk about mis- and disinformation, may be fighting yesterday’s battles. Because up until now, the way we have related to mis- and disinformation, both as consumers, accidental consumers maybe, but also as organizations that try to address it as a problem, is that we know that the disinformation and misinformation that’s out there is usually observable from the outside. And that means that we can see posts on Facebook. We can see videos on TikTok. They might be algorithmically delivered to individual people on their private feed, but the content is out there in the open. However, when you use chatbots like ChatGPT or Claude, which is you can use whichever you want, the information that you receive from the chatbot is not in the public sphere at all. It’s a response generated on the basis of a prompt that you give to the language model, which means that we, as fact-checkers, for example, are unable, we can’t see what responses you’re getting. And the more information consumption is driven into chatbots, the less we will be able to observe the misinformation out there, and the less able we will be to respond to it as well. So I don’t have a solution to this. I think what’s going to happen if this development accelerates is that literacy and information literacy will be much more important than it is today, because it will be up to the individual consumer and the individual user of chatbots and LLMs to actually assess the information that they’re being provided. So I think we might see a transition from more debunking and fact-checking work like what we’ve been engaged in so far to more literacy work, and really empowering people to think critically about the outputs of chatbots, for example. So I’m going to close there. I think we will see some big changes in the battle against misinformation in the coming years, but it really depends on both the regulatory divergence between the US and Europe, but also the AI development and usage of AI in the general public. Thank you.


Moderator: Thank you very much. I think that this last thing that you said are food for thought, so we need to reflect on that. But who has to reflect more is probably the European Commission that is with us in the form of Alberto Rabacin. This shift from the fact-checking to media literacy and empowerment of the users. You agree with that?


Alberto Rabbachin: Thank you, Giacomo, for this question. Indeed, this is, I hope you can hear me well. Yes, we can hear you well. This is certainly a shift that is happening and we are acknowledging that. And I would like to make, show you a few slides that I have prepared to accompany my presentation. Just give me a second that I make this happening. Yep. You should be able to see it. Yes, it’s coming. Okay. Still black, but we hope that we’ll see it in a second. Yes, now we can. Okay. So yes, indeed. So what do we, from the European Commission point of view, what we have in place, you know, is a framework which is quite a richer framework trying to, you know, preserve the integrity of the information sphere. It’s not necessarily, it’s a problem of content but it’s also a problem of functioning of the information ecosystem, of the digital information ecosystem. And first of all, I think we have to make sure that also the citizen that, the European citizen are also themselves considering disinformation and misinformation and information integrity as an issue, you know, as a problem, as a challenge. And in fact, the latest Eurobarometer survey from 2023 and 2024, made ahead of the European election, had shown that 38% of the European consider, you know, disinformation, misinformation, one of the biggest threat to democracy. There is really also recently 82% of the Europeans consider that this information is a problem for democracy and they are aware, most of them are aware of this problem. So we are doing something that is perceived as useful by the citizen and also where we have to look at when we try to address the disinformation phenomenon. Certainly also from the citizen point of view, social media, online social network are the sources of the problem, the biggest source of the problem. And this also reflects the technological development that we have witnessed in the last 10 years, where the digital online information ecosystem became the main source of information. Of course, you mentioned also, some of you mentioned also, the role of AI and certainly the use of AI. Of course, AI opens a lot of opportunities in all sectors, but can also be used for malicious activity. And also thanks to Edmo, we are currently monitoring the amount of disinformation that is linked to content that has been generated by AI. And we see that this type of content is taking up. And we have witnessed this in particular in the latest national election in Europe. But what is the EU doing? First of all, we are working with partners among EU countries, with other countries outside the European border and with international organizations, and we are very happy to be here talking about this important subject. Of course, there is also a very important mission, which is rising awareness and communicating about this phenomenon. I think Edmo is doing a great job with his network to also inform the citizen on the different forms that this phenomenon can take. Of course, we are also promoting access to independent media, to fact-check content. We support media literacy activity. And then we also foster this, in particular, around the Code of Conduct on Disinformation. We foster this cooperation between social media platform and civil society organization. Last but not least, of course, there is a pioneering regulation, which is the Digital Services Act. The Digital Services Act is the first global legal standard for tackling disinformation, while protecting freedom of expression and information. 
This regulation does not look at content, but looks how the content is distributed based on, looks at the functioning of the algorithm, looks at avoiding that malicious actor abuse this algorithm to spread disinformation, to manipulate public discourse, to create different systemic risk. It gives to the Commission strong investigatory powers, which is also helping increasing transparency on the functioning of social media platform. Then we have, I mentioned, the Code of Conduct on Disinformation. The most recent development is that the Code of Practices on Disinformation has now been brought within the co-regulatory framework of the DSA. So it becomes a meaningful benchmark for a very large online platform to fulfil the DSA requirement from, of course, the disinformation point of view. It contains a large set of commitment and measure. And then there is the third pillar, which is societal resilience. I will put EDMO under this basket. As I said, EDMO is a great tool that we support to increase awareness about the phenomenon of disinformation through the detection and the analysis of it. We have supported also the creation of the highest ethical and professional standard in the fact-checking for fact-checking in Europe. And we finance a lot of media literacy activities. This is a little bit of story of the code. We started back in 2018 with 16 signatory and 21 commitments. Now we are in 2025, 42 signatory with a very granular code that includes 43 commitments and 128 measures. As of the 1st of July, the code, as I mentioned before, we fully enter into the DSA framework and will be auditable. So this is also the big transformation that we are doing with this moving the code under the DSA. So the signatory of the code will need to be audited on the implementation of the code. This will be an obligation under the DSA. I’m not spending a lot of words on the code because maybe people are familiar, but the code wants to take several areas that are relevant for the disinformation phenomenon. The monetization of disinformation, transparency of political advertising. We also have new regulation coming into place, reducing manipulative behavior, empower user, empower fact-checkers, and provide access to data for research purposes. And then I’m concluding. You know, you have seen and it’s really a pleasure to see that in this panel there are a lot of EDMO representatives. It was a huge effort from our side to create this network of 14 hubs, soon to be 15. We will have a new hub coming up which aligns to the new strategy for international cooperation of the European Union. We will have a new hub that will cover also Ukraine and Moldova, which are a critical regional spot if we want to fight disinformation. And let me also remind that maybe it’s not clear to everyone how big is this network. EDMO includes more than 120 organizations across the EU, including Norway and soon also Ukraine and Moldova. And last but not least, you mentioned it at the beginning, Giacomo, media literacy. Media literacy is an aspect that appears in different parts of our strategy. It is a part of our policy and regulatory framework, both in the DSA and in the European Media Freedom Act. We have a media literacy expert group. We also have the new European Board for Media Services that has a subgroup on media literacy. EDMO is doing great activities, in particular at the local level, with initiatives that are tailored to the needs of the different member states, and in particular to Creative Europe. 
And through pilot projects, we support a lot of cross-border media literacy activities. I will stop here and give you back the floor. Thank you very much, Alberto. We are quite late, but I don’t want to spoil the audience from the possibility to raise questions. I see that already there is somebody there. Could you introduce yourself, please? Yes, my name is Lou Kotny.


Audience: I’m a retired American librarian over here for my younger Norwegian-American children. On LinkedIn, I have a white paper about the Ukraine war titled Biden-Blinken’s War Beginning Holocaust Objective Facts Footnoted. And two big lies are being pushed by the European Union, by Europeans. First of all, Kyiv 2014 was an outside agitated coup for four objective reasons, which I put in my paper. Secondly, the attack in 2022 was provoked by Zelensky himself, pumped up by the Europeans in Munich, threatening Ukraine, getting nuclear weapons. And finally, which really concerns me, Europe is voting against the annual United Nations anti-Nazi resolution, which sort of is self-defining, self-incriminating that we are quizzling collaborators. Now, my question is, if the EU is so bias, pro-war biased, shouldn’t the United Nations keep it at far arm’s length as far as judging what’s misinformation and disinformation? Thank you for letting me ask my question. Thank you. Other questions from the room? Okay, in the meantime, Alberto, do you want to answer to this first question while, oh, yes, please, go ahead. There’s a second question. Hi. My name is Thora. I’m a PhD researcher from Iceland examining how very large platforms and search engines are undermining democracy. I am asking about academic access because this is a big problem, and I’ve been a research fellow at the Humboldt Institute where they have Friends of the DSA, which is a group of academics who are trying to gain this access, but the large platforms are dragging their feet and claiming that EU has to make a few definitions in order for this to start, and I’m wondering what is the status of academic access, and what should we start with? Thank you. Thank you very much. Okay. Do you want to answer to this, and then Alberto will give the other question?


Moderator: Yes, I can just echo what was just said from the audience.


Morten Langfeldt Dahlback: We recently tried to run a project where we were supposed to work with researchers to extract information from one of the major platforms, and we noticed very quickly that the research APIs where you can actually extract information was much more limited than we had expected. So, I think this is a major problem that a lot of people experience, and it definitely


Moderator: has not been fixed yet. Okay. So, Alberto, do you have some element of answer to the first question? And probably you have to complement what has been said about access to the data from the platforms. That is essential for understanding what happens.


Alberto Rabbachin: Yes, Giacomo. On the first question, and this is an important element that I want to stress, when we talk about detection of disinformation, analysis of disinformation, We don’t want to be the one calling the shots. We are supporting an independent, multidisciplinary community, which is represented by EDMO here, 120 organisations, which are selected by independent experts. And the work that they do in fact-checking and analysing this information is completely independent not only from the European Commission, but also from other EU governments. And this is really something that we really are taking care of and we want to be preserved. On the second question, I think there is the Digital Services Act that obliges platforms to provide data for research activity. There is an upcoming delegated act that should also move the bar up or down, let’s say, in terms of providing more access to researchers in Europe for doing their work.


Moderator: I think this is fundamental to have a better understanding of the phenomenon and therefore to design proper policy responses. Thank you very much. I think that we’ve run out of time. There is one more question. Yes, please. Thank you very much.


Audience: My name is Mohamed Aded Ali. I’m from Somalia. I’m part of the RECIPE programme. Recognising artificial AI propaganda in terms of digital integrity violations involves identifying when AI technologies are misused to deceive, manipulate or misinform individuals or groups. These violations can threaten trust, prosperity, ethical standards and digital communication. My question is, how can EU rulers recognise this in terms of internet integrity? Thank you.


Moderator: In terms of the internet? Internet and digital integrity, based on EU rules. I think that we can give a generic answer, that is, that the information integrity becomes now more and more, as Mikko said before, the relevant point, because especially we will have to face, thanks to the artificial intelligence, a flood of disinformation. automatically made and so becomes more and more important to identify which are the reliable sources and if the information has been manipulated and this according to what Martin was saying before will become more and more difficult. So a mix of rules as European Union is trying to do and work on the on the media integrity made by the media and the journalists is absolutely essential to try to to face this unpredictable future. Thank you very much. Sorry that we didn’t gave you too much answers but we share with you a lot of questions but this is the times we are living and we hope that in the next days we can we can find some other answers from other partners. I just remember you that in few minutes we’ll start in workshop room number two a seminar by BBC and Deutsche Welle that is about how the public service could remediate to part of the problems that we have faced this morning. Thank you very much everybody for participating and I wish you a nice IGF and thank you for coming again. Thank you. Thank you all. Thanks. Thank you.



Paula Gori

Speech speed

172 words per minute

Speech length

1567 words

Speech time

545 seconds

Disinformation creates doubt and division in society, eroding information integrity essential for democratic decision-making

Explanation

Gori argues that disinformation puts society in a situation where people cannot be sure about information due to conflicting facts and non-facts. This erosion of information integrity is problematic for democracy because decision-making requires a factual basis, and without it, people may make decisions not in their interest.


Evidence

Examples given include disinformation on migration, climate change, elections, and health topics that are often interconnected


Major discussion point

Information integrity as foundation for democracy


Topics

Human rights | Sociocultural


The disinformation phenomenon is extremely complex with multiple variables, making it difficult to find simple solutions while protecting human rights

Explanation

Gori emphasizes that disinformation cannot be simplified because it involves many complex factors and human rights are at stake. She argues that the complexity requires a mix of different solutions rather than a single approach.


Evidence

References to slides showing climate change disinformation and economics of disinformation to demonstrate complexity


Major discussion point

Complexity of disinformation requires nuanced solutions


Topics

Human rights | Sociocultural


Agreed with

– Morten Langfeldt Dahlback
– Alberto Rabbachin
– Mikko Salo
– Moderator

Agreed on

AI is creating new challenges for disinformation detection and response


Global frameworks emphasize fundamental rights, algorithmic transparency, multi-stakeholder approaches, and risk mitigation rather than content deletion

Explanation

Gori outlines that international frameworks like the Global Digital Compact and UNESCO guidelines focus on protecting human rights, ensuring algorithmic transparency, involving multiple stakeholders, and mitigating risks. The approach targets how platforms work rather than directly removing content.


Evidence

References to Global Digital Compact, UNESCO guidelines on digital platform governance, EU’s international digital strategy, and Digital Service Act


Major discussion point

Framework approaches to disinformation


Topics

Legal and regulatory | Human rights


EDMO serves as a platform bringing together different stakeholders, similar to IGF’s approach to internet governance

Explanation

Gori describes EDMO as a multi-stakeholder platform that brings together various actors to address disinformation, providing tools like training and fact-checking repositories. It operates through 14 hubs covering all EU member states to address local specificities.


Evidence

EDMO works with 14 national or multinational hubs covering all EU member states, funded by the European Commission


Major discussion point

Multi-stakeholder cooperation in combating disinformation


Topics

Legal and regulatory | Sociocultural


Agreed with

– Alberto Rabbachin
– Moderator

Agreed on

Multi-stakeholder approach is essential for addressing disinformation


Local specificities in culture, politics, and language are crucial for understanding how disinformation impacts different countries

Explanation

Gori argues that culture, policy, politics, history, language, and media consumption patterns of a country significantly impact whether disinformation is effective or enters a country. This necessitates local elements in any response strategy.


Evidence

EDMO’s structure with local hubs to address regional specificities while enabling pan-European analysis


Major discussion point

Importance of local context in disinformation response


Topics

Sociocultural | Legal and regulatory


Agreed with

– Mikko Salo

Agreed on

Local context and specificities are crucial for effective disinformation response



Morten Langfeldt Dahlback

Speech speed

182 words per minute

Speech length

1146 words

Speech time

376 seconds

There is insufficient knowledge about the scope and impact of disinformation, and conditions for gaining this knowledge are deteriorating

Explanation

Dahlback argues that understanding the scope and impact of disinformation is limited, and the situation is worsening due to regulatory divergence between Europe and the US. He notes that legislation meant to increase platform transparency is being undermined.


Evidence

Meta closed down fact-checking programs in the US, X refused to comply with transparency legislation, and research APIs are more limited than expected


Major discussion point

Knowledge gaps about disinformation scope and impact


Topics

Legal and regulatory | Sociocultural


Independent actors like fact-checkers face challenges maintaining independence from governments while their objectives align with official bodies

Explanation

Dahlback highlights the difficulty fact-checkers face in maintaining independence and audience trust when their objectives align closely with government goals. This alignment can lead others to question their independence.


Evidence

The challenge of cooperative coexistence between journalistic organizations and governmental bodies in addressing disinformation


Major discussion point

Independence of fact-checking organizations


Topics

Human rights | Sociocultural


Disagreed with

– Alberto Rabbachin

Disagreed on

Approach to combating disinformation: regulatory vs. independence concerns


The shift toward AI chatbots creates invisible information consumption that fact-checkers cannot observe or respond to effectively

Explanation

Dahlback warns that as information consumption moves to private chatbot interactions, fact-checkers lose the ability to observe and respond to misinformation. Unlike social media posts that are publicly observable, chatbot responses are private and generated individually.


Evidence

Comparison between observable content on Facebook and TikTok versus private responses from ChatGPT and Claude


Major discussion point

AI chatbots creating invisible misinformation


Topics

Sociocultural | Legal and regulatory


Agreed with

– Paula Gori
– Alberto Rabbachin
– Mikko Salo
– Moderator

Agreed on

AI is creating new challenges for disinformation detection and response


There may be a necessary shift from fact-checking and debunking work toward more literacy work and empowering people to think critically

Explanation

Dahlback suggests that as AI-generated content becomes more prevalent and less observable, the focus should shift from reactive fact-checking to proactive media literacy. This would empower individuals to critically assess information they receive from chatbots and other AI tools.


Evidence

The increasing use of chatbots and LLMs making traditional fact-checking approaches less effective


Major discussion point

Evolution from fact-checking to media literacy


Topics

Sociocultural | Human rights


Agreed with

– Mikko Salo
– Alberto Rabbachin
– Moderator

Agreed on

Media literacy and education are fundamental to combating disinformation


Major platforms are limiting researcher access to data, with research APIs being more restricted than expected

Explanation

Dahlback reports that recent attempts to work with researchers on extracting platform information revealed that research APIs provide much more limited access than anticipated. This restricts the ability to study and understand disinformation phenomena.


Evidence

Direct experience from a recent project attempting to extract information from a major platform


Major discussion point

Platform data access for research


Topics

Legal and regulatory | Development


Disagreed with

– Alberto Rabbachin
– Audience

Disagreed on

Platform data access and transparency



Benjamin Shultz

Speech speed

161 words per minute

Speech length

908 words

Speech time

336 seconds

Bad actors are becoming more active in spreading information campaigns that undermine democracy and tear at social fabric

Explanation

Shultz describes how information operations are being used more aggressively to damage democratic institutions and social cohesion in the US. He notes that platforms are moving closer to the administration and that there are concerning restrictions on free expression.


Evidence

People being denied entry to the US based on critical text messages about the administration, and platforms aligning more closely with government


Major discussion point

Increasing threats to democracy from information campaigns


Topics

Human rights | Sociocultural


Small legislative victories like banning non-consensual deepfakes can maintain transatlantic cooperation despite broader challenges

Explanation

Shultz argues that despite deteriorating US-Europe relations, focusing on specific issues with broad bipartisan support can preserve cooperation. He cites the success in making non-consensual explicit deepfakes illegal as an example of achievable progress.


Evidence

Recent US legislation requiring platforms to remove deepfake videos within 48 hours of victim requests, achieved through bipartisan support


Major discussion point

Maintaining international cooperation through targeted legislation


Topics

Legal and regulatory | Human rights



Mikko Salo

Speech speed

130 words per minute

Speech length

685 words

Speech time

314 seconds

Investment in media education is crucial, particularly for children who need to learn critical thinking before using AI tools

Explanation

Salo emphasizes that media education investments are insufficient globally and that Finland is focusing on preparing the next generation. He argues that children need to develop thinking skills before they can effectively use AI tools.


Evidence

Finland’s work with government officials to retrain teachers and provide guidance for AI literacy, described as ‘whole of society security’


Major discussion point

Education as foundation for information integrity


Topics

Sociocultural | Development


Agreed with

– Morten Langfeldt Dahlback
– Alberto Rabbachin
– Moderator

Agreed on

Media literacy and education are fundamental to combating disinformation


People need to develop AI literacy and learn to think critically before using AI tools

Explanation

Salo argues that individuals must be able to think independently before they can properly utilize AI. He questions what an ‘AI native person’ looks like and emphasizes the importance of maintaining human critical thinking capabilities.


Evidence

Reference to asking ChatGPT for information and the need for people to assess AI outputs critically


Major discussion point

AI literacy and critical thinking skills


Topics

Sociocultural | Development


Agreed with

– Paula Gori
– Morten Langfeldt Dahlback
– Alberto Rabbachin
– Moderator

Agreed on

AI is creating new challenges for disinformation detection and response



Alberto Rabbachin

Speech speed

127 words per minute

Speech length

1426 words

Speech time

671 seconds

The Digital Services Act is pioneering regulation that addresses disinformation while protecting freedom of expression by focusing on algorithm functioning rather than content

Explanation

Rabbachin describes the DSA as the first global legal standard for tackling disinformation while preserving free speech rights. It examines how content is distributed through algorithms rather than the content itself, aiming to prevent malicious actors from abusing algorithms.


Evidence

The DSA provides the Commission with strong investigatory powers and increases transparency on social media platform functioning


Major discussion point

Regulatory approach focusing on algorithmic transparency


Topics

Legal and regulatory | Human rights


AI-generated disinformation is increasing and was particularly witnessed in recent European elections

Explanation

Rabbachin notes that EDMO monitoring shows AI-generated disinformation content is rising, with particular evidence during recent national elections in Europe. This represents a growing challenge that requires attention.


Evidence

EDMO monitoring data showing increased AI-generated disinformation during recent European national elections


Major discussion point

AI’s role in generating disinformation


Topics

Sociocultural | Legal and regulatory


Agreed with

– Paula Gori
– Morten Langfeldt Dahlback
– Mikko Salo
– Moderator

Agreed on

AI is creating new challenges for disinformation detection and response


The Code of Practice on Disinformation has grown from 16 signatories with 21 commitments to 42 signatories with 128 measures

Explanation

Rabbachin highlights the expansion of the voluntary code since 2018, showing increased industry engagement. The code has been integrated into the DSA framework, making it auditable and creating obligations for signatories.


Evidence

Specific numbers showing growth from 16 to 42 signatories and from 21 to 128 measures, with integration into DSA making it auditable


Major discussion point

Evolution of industry self-regulation


Topics

Legal and regulatory | Sociocultural


The EU supports an independent, multidisciplinary community of 120+ organizations whose fact-checking work is completely independent from the European Commission and governments

Explanation

Rabbachin emphasizes that the EU doesn’t directly determine what constitutes disinformation but supports an independent network of organizations. These organizations are selected by independent experts and maintain complete independence in their fact-checking and analysis work.


Evidence

EDMO network includes more than 120 organizations across the EU, Norway, and soon Ukraine and Moldova, selected by independent experts


Major discussion point

Independence of EU-supported fact-checking network


Topics

Human rights | Sociocultural


Agreed with

– Paula Gori
– Moderator

Agreed on

Multi-stakeholder approach is essential for addressing disinformation


Disagreed with

– Morten Langfeldt Dahlback

Disagreed on

Approach to combating disinformation: regulatory vs. independence concerns


The Digital Services Act requires platforms to provide data for research activities, with upcoming regulations to improve researcher access

Explanation

Rabbachin explains that the DSA obligates platforms to provide data for research purposes, and there is an upcoming delegated act that should further improve researcher access to platform data for their work.


Evidence

Reference to DSA obligations and upcoming delegated act to enhance researcher access


Major discussion point

Platform data access for research under DSA


Topics

Legal and regulatory | Development


Disagreed with

– Morten Langfeldt Dahlback
– Audience

Disagreed on

Platform data access and transparency


Media literacy appears across multiple policy frameworks and is supported through various EU initiatives and expert groups

Explanation

Rabbachin outlines how media literacy is integrated into various EU policies including the DSA and European Media Freedom Act. The EU supports media literacy through expert groups, pilot projects, and local initiatives tailored to member state needs.


Evidence

Media literacy provisions in DSA and European Media Freedom Act, media literacy expert group, European Board for Media Services subgroup, and Creative Europe pilot projects


Major discussion point

Comprehensive EU approach to media literacy


Topics

Sociocultural | Legal and regulatory


Agreed with

– Mikko Salo
– Morten Langfeldt Dahlback
– Moderator

Agreed on

Media literacy and education are fundamental to combating disinformation



Audience

Speech speed

119 words per minute

Speech length

375 words

Speech time

188 seconds

Academic access to platform data remains problematic, with platforms claiming definitional issues prevent compliance

Explanation

An audience member from Iceland studying platform impacts on democracy reports that large platforms are avoiding providing academic access by claiming the EU needs to make clearer definitions. This affects research into how platforms undermine democratic processes.


Evidence

Experience from Humboldt Institute’s Friends of the DSA group of academics trying to gain access


Major discussion point

Platform compliance with data access requirements


Topics

Legal and regulatory | Development


Disagreed with

– Morten Langfeldt Dahlback
– Alberto Rabbachin

Disagreed on

Platform data access and transparency



Moderator

Speech speed

133 words per minute

Speech length

816 words

Speech time

367 seconds

The session aims to understand who are friends and foes in the complicated situation of Internet Governance and the fight against disinformation

Explanation

The moderator frames the discussion as needing to identify allies and adversaries in the complex landscape of internet governance, particularly regarding disinformation challenges. This sets up the session as exploring the different stakeholders and their roles in addressing these issues.


Evidence

Session title and opening remarks about the complicated and unclear situation for Internet Governance


Major discussion point

Identifying stakeholders in internet governance and disinformation


Topics

Legal and regulatory | Sociocultural


Agreed with

– Paula Gori
– Alberto Rabbachin

Agreed on

Multi-stakeholder approach is essential for addressing disinformation


There is a shift from fact-checking to media literacy and user empowerment that needs reflection, particularly by policymakers

Explanation

The moderator highlights and questions this transition from reactive fact-checking approaches to proactive media literacy and user empowerment strategies. He specifically asks the European Commission representative whether they agree with this shift, indicating it’s a significant policy consideration.


Evidence

Direct question to Alberto Rabbachin about agreeing with the shift from fact-checking to media literacy


Major discussion point

Evolution from fact-checking to media literacy approaches


Topics

Sociocultural | Legal and regulatory


Agreed with

– Mikko Salo
– Morten Langfeldt Dahlback
– Alberto Rabbachin

Agreed on

Media literacy and education are fundamental to combating disinformation


Information integrity and reliable source identification become increasingly important due to AI-generated disinformation floods

Explanation

The moderator synthesizes the discussion by emphasizing that information integrity is becoming more crucial as artificial intelligence enables automatic generation of disinformation at scale. He argues that identifying reliable sources and detecting manipulation will become increasingly difficult, requiring a combination of regulatory approaches and media integrity work.


Evidence

Reference to the flood of automatically generated disinformation through AI and the increasing difficulty of identification


Major discussion point

Information integrity in the age of AI-generated content


Topics

Sociocultural | Legal and regulatory


Agreed with

– Paula Gori
– Morten Langfeldt Dahlback
– Alberto Rabbachin
– Mikko Salo

Agreed on

AI is creating new challenges for disinformation detection and response


A mix of rules and media integrity work by journalists is essential to face an unpredictable future

Explanation

The moderator concludes that addressing disinformation challenges requires combining regulatory frameworks (like those the EU is developing) with professional media integrity work conducted by journalists and media organizations. He presents this as necessary preparation for an uncertain technological and information landscape.


Evidence

Reference to European Union’s regulatory efforts and the work of media and journalists


Major discussion point

Combined regulatory and professional approach to disinformation


Topics

Legal and regulatory | Sociocultural


Agreements

Agreement points

Multi-stakeholder approach is essential for addressing disinformation

Speakers

– Paula Gori
– Alberto Rabbachin
– Moderator

Arguments

EDMO serves as a platform bringing together different stakeholders, similar to IGF’s approach to internet governance


The EU supports an independent, multidisciplinary community of 120+ organizations whose fact-checking work is completely independent from the European Commission and governments


The session aims to understand who are friends and foes in the complicated situation of Internet Governance and the fight against disinformation


Summary

All speakers agree that combating disinformation requires collaboration between multiple stakeholders including civil society, government, platforms, and international organizations, while maintaining independence of fact-checking organizations


Topics

Legal and regulatory | Sociocultural


Local context and specificities are crucial for effective disinformation response

Speakers

– Paula Gori
– Mikko Salo

Arguments

Local specificities in culture, politics, and language are crucial for understanding how disinformation impacts different countries


Investment in media education is crucial, particularly for children who need to learn critical thinking before using AI tools


Summary

Both speakers emphasize that disinformation responses must account for local cultural, political, and linguistic contexts, with tailored approaches for different countries and communities


Topics

Sociocultural | Development


Media literacy and education are fundamental to combating disinformation

Speakers

– Mikko Salo
– Morten Langfeldt Dahlback
– Alberto Rabbachin
– Moderator

Arguments

Investment in media education is crucial, particularly for children who need to learn critical thinking before using AI tools


There may be a necessary shift from fact-checking and debunking work toward more literacy work and empowering people to think critically


Media literacy appears across multiple policy frameworks and is supported through various EU initiatives and expert groups


There is a shift from fact-checking to media literacy and user empowerment that needs reflection, particularly by policymakers


Summary

All speakers agree that media literacy and critical thinking education are becoming increasingly important, potentially more so than reactive fact-checking approaches


Topics

Sociocultural | Development


AI is creating new challenges for disinformation detection and response

Speakers

– Paula Gori
– Morten Langfeldt Dahlback
– Alberto Rabbachin
– Mikko Salo
– Moderator

Arguments

The disinformation phenomenon is extremely complex with multiple variables, making it difficult to find simple solutions while protecting human rights


The shift toward AI chatbots creates invisible information consumption that fact-checkers cannot observe or respond to effectively


AI-generated disinformation is increasing and was particularly witnessed in recent European elections


People need to develop AI literacy and learn to think critically before using AI tools


Information integrity and reliable source identification become increasingly important due to AI-generated disinformation floods


Summary

All speakers acknowledge that AI is fundamentally changing the disinformation landscape, making detection more difficult and requiring new approaches to combat AI-generated false content


Topics

Sociocultural | Legal and regulatory


Similar viewpoints

Both speakers advocate for regulatory approaches that focus on algorithmic transparency and platform functioning rather than direct content moderation, emphasizing protection of fundamental rights and freedom of expression

Speakers

– Paula Gori
– Alberto Rabbachin

Arguments

Global frameworks emphasize fundamental rights, algorithmic transparency, multi-stakeholder approaches, and risk mitigation rather than content deletion


The Digital Services Act is pioneering regulation that addresses disinformation while protecting freedom of expression by focusing on algorithm functioning rather than content


Topics

Legal and regulatory | Human rights


Both express frustration with limited platform data access for research purposes, highlighting that platforms are not providing adequate transparency despite regulatory requirements

Speakers

– Morten Langfeldt Dahlback
– Audience

Arguments

Major platforms are limiting researcher access to data, with research APIs being more restricted than expected


Academic access to platform data remains problematic, with platforms claiming definitional issues prevent compliance


Topics

Legal and regulatory | Development


Both speakers express concerns about threats to democratic institutions and the challenges of maintaining independence while working to combat disinformation

Speakers

– Benjamin Shultz
– Morten Langfeldt Dahlback

Arguments

Bad actors are becoming more active in spreading information campaigns that undermine democracy and tear at social fabric


Independent actors like fact-checkers face challenges maintaining independence from governments while their objectives align with official bodies


Topics

Human rights | Sociocultural


Unexpected consensus

Shift from reactive fact-checking to proactive media literacy

Speakers

– Morten Langfeldt Dahlback
– Mikko Salo
– Alberto Rabbachin
– Moderator

Arguments

There may be a necessary shift from fact-checking and debunking work toward more literacy work and empowering people to think critically


Investment in media education is crucial, particularly for children who need to learn critical thinking before using AI tools


Media literacy appears across multiple policy frameworks and is supported through various EU initiatives and expert groups


There is a shift from fact-checking to media literacy and user empowerment that needs reflection, particularly by policymakers


Explanation

It’s unexpected that fact-checkers themselves (Dahlback) are advocating for a shift away from their traditional reactive approach toward proactive education, with broad agreement from policymakers and civil society representatives


Topics

Sociocultural | Development


Complexity requires nuanced rather than simple solutions

Speakers

– Paula Gori
– Morten Langfeldt Dahlback
– Alberto Rabbachin

Arguments

The disinformation phenomenon is extremely complex with multiple variables, making it difficult to find simple solutions while protecting human rights


There is insufficient knowledge about the scope and impact of disinformation, and conditions for gaining this knowledge are deteriorating


The Digital Services Act is pioneering regulation that addresses disinformation while protecting freedom of expression by focusing on algorithm functioning rather than content


Explanation

Unexpected consensus among different stakeholder types (NGO, fact-checker, policymaker) that simple solutions are inadequate and that the complexity of disinformation requires sophisticated, multi-faceted approaches


Topics

Legal and regulatory | Human rights


Overall assessment

Summary

Strong consensus exists on the need for multi-stakeholder cooperation, importance of media literacy education, challenges posed by AI-generated disinformation, and the necessity of protecting fundamental rights while addressing disinformation. There is also agreement on the importance of local context and the complexity of the phenomenon requiring nuanced solutions.


Consensus level

High level of consensus among speakers despite representing different sectors (EU policy, fact-checking, civil society, US perspective). The consensus suggests a mature understanding of disinformation challenges and broad agreement on fundamental principles, though implementation details may vary. This strong alignment across different stakeholder groups indicates potential for effective collaborative approaches to combating disinformation while preserving democratic values.


Differences

Different viewpoints

Approach to combating disinformation: regulatory vs. independence concerns

Speakers

– Morten Langfeldt Dahlback
– Alberto Rabbachin

Arguments

Independent actors like fact-checkers face challenges maintaining independence from governments while their objectives align with official bodies


The EU supports an independent, multidisciplinary community of 120+ organizations whose fact-checking work is completely independent from the European Commission and governments


Summary

Dahlback expresses concern about fact-checkers maintaining independence when their objectives align with government goals, while Rabbachin emphasizes that EU-supported organizations maintain complete independence from government influence


Topics

Human rights | Sociocultural


Platform data access and transparency

Speakers

– Morten Langfeldt Dahlback
– Alberto Rabbachin
– Audience

Arguments

Major platforms are limiting researcher access to data, with research APIs being more restricted than expected


The Digital Services Act requires platforms to provide data for research activities, with upcoming regulations to improve researcher access


Academic access to platform data remains problematic, with platforms claiming definitional issues prevent compliance


Summary

There’s disagreement about the effectiveness of current data access provisions – Rabbachin presents the DSA as providing adequate framework, while Dahlback and audience members report practical difficulties in accessing platform data for research


Topics

Legal and regulatory | Development


Unexpected differences

Effectiveness of current transparency and research access mechanisms

Speakers

– Alberto Rabbachin
– Morten Langfeldt Dahlback
– Audience

Arguments

The Digital Services Act requires platforms to provide data for research activities, with upcoming regulations to improve researcher access


Major platforms are limiting researcher access to data, with research APIs being more restricted than expected


Academic access to platform data remains problematic, with platforms claiming definitional issues prevent compliance


Explanation

This disagreement is unexpected because it reveals a gap between regulatory intentions and practical implementation. While the EU representative presents the DSA as providing adequate framework for research access, practitioners report significant difficulties in actually obtaining data, suggesting implementation challenges not acknowledged in policy discussions


Topics

Legal and regulatory | Development


Overall assessment

Summary

The main areas of disagreement center on the balance between regulatory approaches and independence concerns, the effectiveness of current data access mechanisms, and the optimal balance between different anti-disinformation strategies (fact-checking vs. media literacy vs. regulatory frameworks)


Disagreement level

Moderate disagreement with significant implications – while speakers share common goals of protecting information integrity and fundamental rights, they differ substantially on implementation approaches and the effectiveness of current measures. This suggests potential coordination challenges between policy makers, practitioners, and researchers in addressing disinformation effectively


Partial agreements

Partial agreements

Similar viewpoints

Both speakers advocate for regulatory approaches that focus on algorithmic transparency and platform functioning rather than direct content moderation, emphasizing protection of fundamental rights and freedom of expression

Speakers

– Paula Gori
– Alberto Rabbachin

Arguments

Global frameworks emphasize fundamental rights, algorithmic transparency, multi-stakeholder approaches, and risk mitigation rather than content deletion


The Digital Services Act is pioneering regulation that addresses disinformation while protecting freedom of expression by focusing on algorithm functioning rather than content


Topics

Legal and regulatory | Human rights


Both express frustration with limited platform data access for research purposes, highlighting that platforms are not providing adequate transparency despite regulatory requirements

Speakers

– Morten Langfeldt Dahlback
– Audience

Arguments

Major platforms are limiting researcher access to data, with research APIs being more restricted than expected


Academic access to platform data remains problematic, with platforms claiming definitional issues prevent compliance


Topics

Legal and regulatory | Development


Both speakers express concerns about threats to democratic institutions and the challenges of maintaining independence while working to combat disinformation

Speakers

– Benjamin Shultz
– Morten Langfeldt Dahlback

Arguments

Bad actors are becoming more active in spreading information campaigns that undermine democracy and tear at social fabric


Independent actors like fact-checkers face challenges maintaining independence from governments while their objectives align with official bodies


Topics

Human rights | Sociocultural


Takeaways

Key takeaways

Disinformation is a complex, multi-faceted phenomenon that erodes information integrity essential for democratic decision-making, requiring sophisticated responses rather than simple solutions


There is a fundamental shift occurring from traditional fact-checking approaches toward media literacy and user empowerment, particularly as AI chatbots make disinformation less observable to fact-checkers


Regulatory divergence between the US and Europe is hampering knowledge gathering about disinformation, with the US experiencing democratic backsliding while Europe maintains stronger regulatory frameworks


The EU’s approach focuses on algorithmic transparency and platform accountability rather than content censorship, exemplified by the Digital Services Act which addresses how content is distributed rather than the content itself


Education and media literacy, particularly for children, are becoming increasingly critical as AI-generated disinformation proliferates and people need to develop critical thinking skills before using AI tools


Independent fact-checking organizations face the challenge of maintaining credibility and independence while their objectives increasingly align with government anti-disinformation efforts


Multi-stakeholder cooperation through networks like EDMO (120+ organizations across EU) is essential, but must respect local specificities in culture, politics, and language


Platform data access for researchers remains severely limited despite regulatory requirements, hindering understanding of disinformation scope and impact


Resolutions and action items

The Code of Practice on Disinformation will be fully integrated into the DSA framework as of July 1st, making it auditable and creating binding obligations for platform signatories


A new EDMO hub covering Ukraine and Moldova will be established to address critical regional disinformation challenges


Upcoming EU delegated acts will improve researcher access to platform data for disinformation studies


Continued investment in media literacy programs across EU member states, with initiatives tailored to local needs


Maintenance of transatlantic cooperation through focus on areas of broad bipartisan support, such as banning non-consensual deepfakes


Unresolved issues

How to effectively monitor and respond to disinformation distributed through private AI chatbot interactions that are not publicly observable


How independent fact-checking organizations can maintain credibility while working closely with government anti-disinformation initiatives


How to obtain sufficient knowledge about the scope and impact of disinformation when platform transparency is decreasing


How to balance the need for platform regulation with protecting freedom of expression, particularly given varying cultural and political contexts


How to address the growing regulatory divergence between the US and Europe while maintaining effective global cooperation against disinformation


How to scale media literacy education effectively when current global investments in media education are minimal


How to ensure meaningful academic and researcher access to platform data despite platform resistance and technical limitations


Suggested compromises

Focus on areas of broad political consensus (like banning non-consensual deepfakes) to maintain transatlantic cooperation despite broader disagreements


Emphasize algorithmic transparency and platform accountability rather than content moderation to address free speech concerns while tackling disinformation


Combine regulatory approaches with voluntary industry cooperation through codes of practice that can evolve into binding obligations


Balance global principles with regional specificities, allowing for local adaptation while maintaining shared fundamental values


Shift emphasis from reactive fact-checking to proactive media literacy education to address the changing nature of information consumption


Maintain independence of fact-checking organizations through multi-stakeholder governance structures rather than direct government control


Thought provoking comments

We have to start to think about new ways, new creative ways to maintain the alliance, the Transatlantic Alliance in these rough times… Recently in the US, non-consensual explicit deepfakes, colloquially known as deepfake porn, have actually been made illegal… my hope is that with small steps like these that have been taken in the states that do have broad support, such as banning explicit deepfakes that are made non-consensually, my hope is that collaborating on these issues that Europe and the U.S. and countries all around the world can continue the dialogue

Speaker

Benjamin Shultz


Reason

This comment was insightful because it reframed the discussion from focusing on problems to identifying practical solutions for maintaining international cooperation despite political tensions. Shultz acknowledged the deteriorating transatlantic relationship while proposing a pragmatic approach of finding common ground on specific, less politically charged issues.


Impact

This shifted the conversation from a purely analytical discussion of disinformation challenges to a more solution-oriented dialogue about maintaining cooperation. It introduced the concept of incremental progress through bipartisan issues, which influenced subsequent speakers to consider practical approaches rather than just theoretical frameworks.


However, when you use chatbots like ChatGPT or Claude… the information that you receive from the chatbot is not in the public sphere at all. It’s a response generated on the basis of a prompt that you give to the language model, which means that we, as fact-checkers, for example, are unable, we can’t see what responses you’re getting… I think we might see a transition from more debunking and fact-checking work like what we’ve been engaged in so far to more literacy work

Speaker

Morten Langfeldt Dahlback


Reason

This was perhaps the most thought-provoking comment of the session because it fundamentally challenged the existing paradigm of fighting disinformation. Dahlback identified a critical blind spot in current approaches – that AI-generated responses in private conversations are invisible to fact-checkers, making traditional debunking methods obsolete.


Impact

This comment created a pivotal moment in the discussion, shifting focus from current regulatory frameworks to future challenges. It prompted the moderator to specifically ask the European Commission representative about this shift from fact-checking to media literacy, making it a central theme for the remainder of the session. It essentially redefined the problem space from observable public content to private, personalized AI interactions.


I think one of the core challenges that we face in responding to this problem is that we don’t know enough about the scope of the problem, and we don’t know enough about its impact… the conditions for gaining more knowledge about this problem have become worse over the past few months… because of regulatory divergence between Europe and the US

Speaker

Morten Langfeldt Dahlback


Reason

This comment was insightful because it identified a fundamental epistemological problem – that effective policy responses require understanding the scope and impact of disinformation, but the tools for gaining this knowledge are being eroded. It connected regulatory divergence to practical research limitations.


Impact

This comment established a critical foundation for understanding why the disinformation fight is becoming more difficult. It influenced subsequent discussion about data access for researchers and highlighted the interconnected nature of regulatory frameworks and research capabilities.


Once our objectives are aligned with the objectives of governments and of other regulatory and official bodies, I think it’s easy for others to throw our independence into doubt, because the alignment is too close

Speaker

Morten Langfeldt Dahlback


Reason

This comment revealed a sophisticated understanding of the paradox facing independent fact-checkers: the more successful they are in aligning with government anti-disinformation efforts, the more their independence and credibility can be questioned. It highlighted the delicate balance between cooperation and independence.


Impact

This comment introduced a nuanced discussion about the relationship between civil society organizations and government bodies in the fight against disinformation. It added complexity to what might otherwise be seen as straightforward cooperation, showing how political dynamics can undermine the very organizations trying to combat disinformation.


I think that’s where we have to find some sort of protection and ensure that before they first need to… they need to be able to think before they use AI. And I was just framing and I was actually asking the chat, how does it look like an AI native person? Because if we are not able to think ourselves, we are not able to use the AI as it’s meant at the moment

Speaker

Mikko Salo


Reason

This comment was thought-provoking because it identified a fundamental cognitive challenge of the AI era – that people need critical thinking skills before they can effectively use AI tools. The concept of ‘AI native persons’ and the need to ‘think before using AI’ highlighted a crucial educational gap.


Impact

This comment reinforced the emerging theme about the importance of education and media literacy over traditional fact-checking approaches. It provided concrete support for the shift in strategy that other speakers were advocating, emphasizing the foundational role of critical thinking skills.


Overall assessment

These key comments fundamentally reshaped the discussion from a traditional focus on current disinformation challenges and regulatory responses to a forward-looking examination of how the landscape is changing. Morten Langfeldt Dahlback’s insights about AI-generated content being invisible to fact-checkers and the erosion of research capabilities created pivotal moments that shifted the conversation toward future challenges and the need for new approaches. Benjamin Shultz’s reframing toward practical cooperation despite political tensions moved the discussion from problem identification to solution-seeking. Together, these comments transformed what could have been a routine policy discussion into a more sophisticated analysis of the evolving nature of information integrity challenges, the limitations of current approaches, and the need for adaptive strategies that emphasize education and literacy over traditional content moderation.


Follow-up questions

How can we better understand the scope and impact of disinformation across different domains?

Speaker

Morten Langfeldt Dahlback


Explanation

He identified this as a core challenge, noting that we don’t know enough about the scope of the problem and its impact, and that conditions for gaining knowledge have worsened due to regulatory divergence and platform restrictions


How can independent fact-checking organizations maintain their independence while working with governments and regulatory bodies on disinformation?

Speaker

Morten Langfeldt Dahlback


Explanation

He highlighted the difficult position independent actors face when their objectives align with governments, as it can throw their independence into doubt and affect audience trust


How can fact-checkers and researchers address misinformation generated by private chatbot interactions that are not publicly observable?

Speaker

Morten Langfeldt Dahlback


Explanation

He noted that chatbot responses are not in the public sphere, making it impossible for fact-checkers to observe and respond to misinformation delivered through these channels


What does an AI-native person look like and how should we prepare them for information integrity?

Speaker

Mikko Salo


Explanation

He emphasized the urgent need for AI literacy and questioned how people who grow up with AI will think critically about information, stressing that people need to be able to think before they use AI


What is the current status of academic access to platform data under the Digital Services Act?

Speaker

Thora (audience member)


Explanation

She highlighted that large platforms are dragging their feet on providing academic access, claiming the EU needs to make definitions first, which is hindering research on how platforms undermine democracy


How can EU rules help recognize AI propaganda and digital integrity violations?

Speaker

Mohamed Aded Ali (audience member)


Explanation

He asked about identifying when AI technologies are misused to deceive or manipulate, and how EU frameworks can address these threats to digital communication integrity


How much investment should be allocated to cognitive security and information integrity as part of societal security?

Speaker

Mikko Salo


Explanation

He referenced the 5% investment in security and suggested 1.5% should go to whole-of-society security including information integrity, but questioned what the appropriate investment level should be


How can we transition from debunking and fact-checking work to more effective literacy work?

Speaker

Morten Langfeldt Dahlback


Explanation

He suggested this transition may be necessary as more information consumption moves to private chatbot interactions, requiring individuals to assess information themselves rather than relying on public fact-checking


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Open Forum #58 Collaborating for Trustworthy AI an OECD Toolkit and Spotlight on AI in Government

Open Forum #58 Collaborating for Trustworthy AI an OECD Toolkit and Spotlight on AI in Government

Session at a glance

Summary

This OECD open forum discussion focused on implementing AI principles and using AI in government services, featuring two main segments with international experts and policymakers. The first segment examined the OECD AI Principles Implementation Toolkit, a practical initiative designed to help countries, particularly in the Global South, develop responsible AI policies tailored to their local contexts. Costa Rica’s Marlon Avalos explained how his country initiated this toolkit project after recognizing that while OECD principles provide strong ethical guidance, many developing countries lack the tools to translate these principles into actionable policies. The toolkit will feature a self-assessment component and repository of best practices to guide countries through AI governance challenges.


OECD’s Lucia Rossi detailed the toolkit’s structure, emphasizing its co-creation approach through regional workshops with countries in Asia, Africa, and Latin America. Mozilla’s Jibu Elias shared India’s community-driven approach to responsible AI, highlighting successful grassroots initiatives like student-developed accessibility tools and tribal community workshops that demonstrate how AI adoption must be locally rooted and people-centered. Niger’s Anne Rachel Ng discussed African countries’ opportunities and challenges, noting that while AI can address development barriers in healthcare, agriculture, and education, the continent faces significant infrastructure constraints, with only 22% of Africans having broadband access and many AI systems performing poorly on African populations due to training bias.


The second segment explored practical government AI implementation, with Norway’s Katarina de Brisis sharing successful use cases including AI-powered X-ray analysis that reduced patient waiting times by 79 days and tax fraud detection that increased detection rates from 12% to 85%. Korea’s Jungwook Kim emphasized three key pillars for effective AI adoption: innovation in data and infrastructure, inclusion to address digital divides, and strategic investment in capabilities. Both speakers stressed the importance of building employee competence, establishing legal frameworks, and ensuring data security when implementing AI in government services. The discussion concluded that successful AI implementation requires inclusive, context-sensitive approaches that prioritize trustworthiness, local capacity building, and international cooperation to prevent widening digital divides.


Keypoints

## Major Discussion Points:


– **OECD AI Principles Implementation Toolkit Development**: A collaborative initiative led by Costa Rica to create practical tools that help countries, especially in the Global South, translate the high-level OECD AI principles into actionable policies. The toolkit will feature self-assessment tools and region-specific guidance based on best practices from comparable countries.


– **Inclusive AI Development in Emerging Economies**: Speakers from India, Costa Rica, and Niger emphasized the importance of community-rooted, locally-contextualized AI solutions. Examples included student-developed accessibility tools, tribal community workshops, and addressing infrastructure challenges like connectivity and the digital divide.


– **AI Implementation in Government Services**: Discussion of practical AI applications in public sector services, with Norway sharing successful cases like AI-assisted medical diagnosis, tax fraud detection, and police transcription services. The focus was on improving efficiency while maintaining trustworthiness and citizen safety.


– **Challenges and Risks in AI Governance**: Identification of key barriers including inadequate infrastructure, skills gaps, data scarcity, and the need for inclusive governance frameworks. Speakers highlighted risks around bias, exclusion, and the importance of building public trust through transparent, accountable AI systems.


– **International Cooperation and Capacity Building**: Emphasis on the need for collaborative approaches to AI development, with particular attention to supporting developing countries through knowledge sharing, technical assistance, and ensuring no country is left behind in the AI transformation.


## Overall Purpose:


The discussion aimed to showcase practical approaches for implementing responsible AI governance globally, with a particular focus on supporting developing countries. The session sought to bridge the gap between high-level AI principles and concrete policy actions, while demonstrating real-world applications of AI in government services.


## Overall Tone:


The discussion maintained a collaborative and constructive tone throughout, characterized by knowledge sharing and mutual learning. Speakers were optimistic about AI’s potential while remaining realistic about challenges. The tone was particularly inclusive, with strong emphasis on ensuring global participation in AI development. Technical difficulties with some remote speakers added a touch of informality but reinforced the speakers’ points about digital infrastructure challenges. The session concluded on an encouraging note, emphasizing collective action and continued cooperation.


Speakers

– **Moderator (Yoichi Iida)**: Chair of the OECD Committee on Digital Policy


– **Marlon Avalos**: Online Director of Research Development and Innovation at the Ministry of Science and Technology from Costa Rica


– **Lucia Rossi**: Economist at Artificial Intelligence and Digital Emerging Technology Division from OECD


– **Jibu Elias**: Responsible Computing Lead for India from Mozilla


– **Anne Rachel Ng**: Director General at National Agency for Information Society, ANSI from Niger


– **Katarina de Brisis**: Deputy Director General at the Ministry of Digitalization and Public Governance from Norway, and long-standing representative at OECD Digital Policy Committee


– **Jungwook Kim**: Executive Director at Center for International Development from KDI


– **Seong Ju Park**: Policy Analyst at Innovative Digital and Open Government Division from OECD


Additional speakers:


None identified beyond the provided speaker names list.


Full session report

# OECD Open Forum: Implementing AI Principles and Government AI Services – Discussion Report


## Executive Summary


This OECD open forum at the Internet Governance Forum 2025 brought together international experts to discuss two critical aspects of AI governance: implementing AI principles through practical toolkits and deploying AI in government services. The session featured representatives from Costa Rica, Niger, India, Norway, and Korea, alongside OECD officials, creating dialogue between developed and developing nations on shared AI governance challenges.


The discussion was structured in two segments: first examining the OECD AI Principles Implementation Toolkit led by Costa Rica, and second exploring practical government AI applications. Key themes included the need for international cooperation, community-centered approaches to AI development, and addressing infrastructure challenges while scaling AI implementations effectively.


## Session Overview and Structure


The forum was moderated by Yoichi Iida, Chair of the OECD Committee on Digital Policy, who noted Japan’s role in proposing the OECD AI principles in 2016. The session transitioned to Seong Ju Park, Policy Analyst at OECD’s Innovative Digital and Open Government Division, who moderated the second segment on government AI services.


## Segment 1: OECD AI Principles Implementation Toolkit


### Initiative Background


Marlon Avalos, Online Director of Research Development and Innovation at Costa Rica’s Ministry of Science and Technology, explained the toolkit’s origins in Costa Rica’s experience developing their national AI strategy. Despite being politically stable and technically skilled, Costa Rica recognized significant challenges in translating OECD AI principles into actionable policies. As Avalos noted, “even a country like Costa Rica, politically stable, technically skilled and internationally connected, face these challenges, then surely other countries like us will too face that challenge.”


The initiative gained momentum when the Global Partnership on AI (GPAI) joined with the OECD AI community in July 2024, creating opportunities for broader collaboration on practical implementation tools.


### Toolkit Structure and Co-Creation Approach


Lucia Rossi, Economist at OECD’s Artificial Intelligence and Digital Emerging Technology Division, outlined the toolkit’s development through regional co-creation workshops across Asia, Africa, and Latin America. The toolkit will include:


– A self-assessment tool for countries to evaluate their AI governance capabilities


– Region-specific guidance tailored to different developmental contexts


– A repository of best practices from comparable countries


– Resources available through the OECD AI Policy Observatory on oecd.ai


The co-creation workshops serve dual purposes: informing toolkit development and creating knowledge-sharing networks among participating countries.


### Country Experiences and Perspectives


**India – Community-Driven Development**


Jibu Elias, Responsible Computing Lead for India at Mozilla, presented examples of grassroots AI initiatives including student-developed accessibility tools like WebBeast (a web accessibility checker) and PhysioPlay (a physiotherapy game), plus tribal community workshops. He emphasized that “responsible AI must be inclusive, accessible, and rooted in local values, focusing on communities most affected but least represented in AI development.”


Elias posed a fundamental question: “Don’t just ask who builds AI, ask whose future is it building? Because in countries like ours, trust is not a given, it’s earned. And when communities are trusted as co-creators, not just end users, they don’t just adopt technology, they transform it.”


**Niger – African Context and Challenges**


Anne Rachel Ng, Director General at Niger’s National Agency for Information Society (ANSI), highlighted both opportunities and significant barriers for AI adoption in Africa. She identified potential applications in healthcare, agriculture, and education, while noting critical infrastructure constraints: only 22% of Africans have broadband access, and 16 African countries are landlocked.


Ng addressed data bias, noting that only 2% of African-generated data is used locally and that facial recognition systems trained largely on non-African data perform poorly on African populations. She recalled how pulse oximeters during COVID-19 gave less accurate readings for people with darker skin because the devices had not been designed and tested on representative populations, and how a group of young researchers in Niger responded by building a modified oximeter whose light penetrates darker skin.
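The bias Ng describes typically surfaces only when evaluation results are broken down by group. The sketch below, with made-up numbers and hypothetical group labels, illustrates that kind of disaggregated check: per-group accuracy exposes a gap that a single aggregate score would hide.

```python
# Illustrative sketch with invented data: disaggregated accuracy check.
from collections import defaultdict

# (group, prediction_correct) pairs for a hypothetical face-recognition test set
results = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

totals, correct = defaultdict(int), defaultdict(int)
for group, ok in results:
    totals[group] += 1
    correct[group] += ok  # True counts as 1, False as 0

for group in totals:
    accuracy = correct[group] / totals[group]
    print(f"{group}: accuracy {accuracy:.0%} over {totals[group]} samples")

# The aggregate accuracy (50%) hides the gap between group_a (75%) and group_b (25%).
```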


Despite these challenges, Ng advocated patient, culturally grounded approaches, invoking an African saying, “Europeans have watches, we have time,” to argue that taking the time to develop context-appropriate solutions matters more than rushing implementation without proper understanding.


## Segment 2: AI in Government Services


### OECD Research Findings


Seong Ju Park presented OECD research showing that while AI offers significant potential for improving public services, implementation faces numerous barriers. AI use cases are unevenly distributed across government functions, with many initiatives remaining at the piloting stage rather than scaling to wider systems.


Government use of AI carries higher risks than private-sector applications; the OECD analysis groups them into five categories: ethical risks, operational risks, exclusion risks, public resistance, and missed opportunities (including a widening capability gap between the public and private sectors). Some government functions face particular barriers, such as stricter data-access rules and requirements for thorough audit trails in public integrity functions.


### Country Implementation Examples


**Norway – Systematic Deployment**


Katarina de Brisis, Deputy Director General at Norway’s Ministry of Digitalisation and Public Governance, shared concrete examples of successful AI implementation:


– AI analysis of fracture X-rays at the Vestre Viken hospital trust, allowing about 2,000 patients to go home immediately instead of waiting for their results


– Tax administration detection of unreported rental income from secondary homes rising from 12% to 85%, generating 110 million kroner in additional revenue


– Police use of AI to transcribe interrogations, saving investigators significant administrative time


Currently, 70% of Norwegian state agencies use AI, with targets of 80% by 2025 and 100% by 2030. Norway is investing in Norwegian language foundational models and computing infrastructure while implementing the EU AI Act.


**Korea – Strategic Framework**


Jungwook Kim, Executive Director at Korea’s KDI Center for International Development, outlined a three-pillar framework: innovation (data, infrastructure, and citizen-centric redesign of public services), inclusion (accessibility and capacity building to address digital divides), and investment (strategic allocation of the substantial resources AI adoption requires).


Kim noted that AI involves “moving targets” requiring “agile measures to take care of the AI safety issues,” highlighting the need for adaptive governance frameworks.


## Key Themes and Consensus Points


### International Cooperation


All speakers emphasized the critical importance of international cooperation for successful AI development. The OECD toolkit represents collaborative efforts to bridge gaps between principles and practice, with support across different developmental contexts.


### Community-Centered Approaches


Multiple speakers stressed involving local communities, especially marginalized groups, in AI development to ensure solutions address real local needs rather than imposing external solutions.


### Infrastructure as Foundation


Representatives from developing countries highlighted connectivity and infrastructure limitations as fundamental barriers requiring attention before sophisticated AI governance frameworks can be effectively implemented.


## Challenges and Implementation Barriers


### Scaling from Pilots to Systems


A significant challenge identified across countries is moving AI initiatives from pilot projects to systematic implementation across government services.


### Capacity Building


The pace of AI development often exceeds the speed at which human capacity can be developed, creating mismatches between technological advancement and workforce readiness.


### Bias and Inclusivity


Current AI systems often fail to serve non-Western populations effectively due to bias and lack of representative training data, requiring both technical solutions and inclusive development processes.


## Next Steps and Commitments


The OECD committed to:


– Launching a comprehensive report on governing with AI


– Creating a dedicated hub for AI in the public sector on oecd.ai


– Organizing regional co-creation workshops, starting with ASEAN countries in Thailand


– Conducting global data collection on AI policies and use cases for the OECD AI Policy Observatory


Regional workshops will continue with African, Central American, and South American countries to inform toolkit development and build knowledge-sharing networks.


## Conclusion


This forum demonstrated both the potential and challenges of implementing responsible AI governance globally. While speakers showed strong consensus on fundamental principles—international cooperation, inclusive approaches, and context-sensitive solutions—they also acknowledged significant differences in implementation approaches based on developmental contexts and available resources.


The discussion revealed that successful AI implementation requires more than technical capabilities; it demands inclusive governance frameworks, robust infrastructure, community engagement, and sustained capacity building. The OECD AI Principles Implementation Toolkit represents an important step toward bridging the gap between high-level principles and practical implementation, supported by ongoing collaboration and knowledge sharing among countries facing similar challenges.


The path forward emphasizes balancing international cooperation with local ownership, ensuring that AI development serves community needs while building the foundational capabilities necessary for sustainable and equitable AI adoption.


Session transcript

Moderator: Good afternoon everyone, and welcome to this open forum organized by the OECD. Thank you for joining us here in Lillestrøm and also online. This session brings together two connected discussions. Before jumping to the content, my name is Yoichi Iida, the chair of the OECD Committee on Digital Policy, and I’m very happy to be here together with all of you to moderate this session. As the first part, we begin with a panel on the OECD AI Principles Implementation Toolkit, a practical initiative designed to support countries in strengthening their AI ecosystems and in adapting governance frameworks to local contexts. The toolkit will offer region-specific guidance to help bridge AI divides and advance responsible, inclusive AI development. We will then transition to a second segment focused on how governments are using AI in practice to improve public service delivery and policy making. Since 2019, the OECD AI Principles have guided national strategies and international cooperation on AI. The OECD AI Principles also serve as the common foundation guiding the work of the Global Partnership on AI (GPAI), which joined with the OECD AI community in July 2024 in a new integrated partnership. Despite the transformative potential of AI, access to the benefits of this technology remains uneven. Many countries face challenges related to infrastructure, human capacity, and policy frameworks, along with greater exposure to risks such as task replacement. Today’s discussion will spotlight policy efforts and initiatives that help close those gaps and promote inclusive AI ecosystems around the world. Please join me in welcoming our four distinguished speakers. First, online, Mr Marlon Avalos, Director of Research, Development and Innovation at the Ministry of Science and Technology of Costa Rica. Second, on my left side, Ms Lucia Rossi, Economist at the Artificial Intelligence and Digital Emerging Technology Division of the OECD. Third, again online, Mr Jibu Elias, Responsible Computing Lead for India at Mozilla. And last but not least, Ms Anne Rachel Ng, Director General at the National Agency for Information Society, ANSI, of Niger. Welcome. We will first hear from the panelists about their experience in designing policies for fostering AI development and diffusion. After the first round of questions, we will go around for a short final reflection from each speaker. We will then move to our second segment, which will talk about AI in the public sector. Here we will hear from three distinguished speakers: on my right side, Ms Katarina de Brisis, Deputy Director General at the Ministry of Digitalisation and Public Governance of Norway and a long-standing representative at the OECD Digital Policy Committee; Dr Jungwook Kim, Executive Director at the Center for International Development, KDI; and Ms Seong Ju Park, Policy Analyst at the Innovative Digital and Open Government Division of the OECD. After the second segment on AI in the public sector, we will open the floor for a question and answer session to hear from you and engage in a conversation. We will monitor the online chat and take questions from the room as well. As we will be taking questions after the second segment, if you are joining online, feel free to put your comments and questions in the chat box.
If you are here with us in the room, please note your questions down, and we will come back to them after the second segment of this open forum. So we start with the first segment, and I would like to begin with the discussion on collaboration on trustworthy AI, and hear about designing AI policies and plans, and how the OECD toolkit can support countries while they elaborate these policies. I will start with Mr. Avalos online. Mr. Marlon Avalos from Costa Rica initiated the work on the OECD principles implementation toolkit. So, Mr. Avalos, what prompted this initiative, and what has been Costa Rica’s experience so far in developing a national AI strategy from this perspective?


Marlon Avalos: Thank you very much, Iida-san, for giving me the floor. Good morning and good afternoon, dear colleagues connected virtually and there in Norway. It’s an honor to be at this Internet Governance Forum 2025 to tell a little bit about our experience designing our


Moderator: It seems we have some technical issues online, so please wait a little bit before we get him back; otherwise we will proceed to the second speaker. Okay, so thank you for your patience. Before we get him back online, I would like to proceed to the second speaker. So, moving to Lucia, I would like to ask you: could you tell us more about the OECD AI principles implementation toolkit, its objectives and structure, and how it aims to support governments with different levels of AI maturity in policymaking? What is the overall vision for this project going forward? Lucia, the floor is yours.


Lucia Rossi: Thank you, Yoichi, and good afternoon to the audience here and online. It’s a pleasure being here at the IGF. So, as Marlon was starting to say, this project was initiated by Costa Rica, and it started off from the consideration that AI opportunities are manifold across sectors and across the globe. There are, of course, several potential transformative effects of AI across sectors, and we will hear later on about AI in the public sector, as well as, as we know, in agriculture, in health care, in education. These opportunities are, however, difficult to seize for different countries, as there are several bottlenecks that oftentimes prevent countries from having the capacity or the financial resources or the organizational resources to devise effective AI policies. So with these considerations in mind, we started with our delegates in the Global Partnership on AI, and with the support of several countries including Japan, Costa Rica, the UK, France and Korea, to develop what is a practical toolkit to implement the OECD principles. And just allow me to stay a bit on the principles that, as we heard, are the foundational document for the OECD in AI governance and that were adopted in 2019. These principles have since then been the object of further work from the OECD to provide analysis but also guidance on how to implement them. They are constituted by five policy principles that are recommendations to governments around areas such as research and development, infrastructure, the policy environment, the skills and jobs that are required to effectively implement AI across sectors, and international cooperation. But there are also values-based principles that cover those values that all stakeholders should strive to embed in AI systems and, of course, to respect: democratic values, fairness, transparency, explainability, accountability, among others. So what this toolkit aims to do is to provide really practical resources for implementing the principles and facilitating adoption across countries, with a specific focus on emerging and developing economies but tailored to the diversity of needs, preferences and available policy options across countries. Ultimately these resources will support advancing a more inclusive and effective AI governance. In practice, this toolkit will be an online tool composed of two main elements, the first one being a self-assessment that countries will be able to navigate autonomously and that would guide them through, on one hand, the areas that they would need to strengthen in AI governance and, on the other hand, priorities that they may want to establish. And then once this self-assessment is completed, the toolkit will provide suggestions based on best practices in regions that are comparable or have similar challenges, so that they can take inspiration from these other countries. The second component will build on the repository of national AI policies that we have on the OECD.AI Policy Observatory and that we aim to strengthen by collecting further information on national and regional initiatives. And in terms of the design of this toolkit, one key feature is really the co-creation component.
So to develop the toolkit, we are currently planning and organizing regional workshops, and we already have one such workshop planned, to have real engagement with countries, with the designers of AI policies, to understand better, on one hand, what are the key challenges they face when devising AI policies and when thinking about AI governance in their respective countries, and on the other hand to understand what resources they need, but also, as I mentioned, what practices they have put in place to overcome these challenges. So we will have one first such workshop in Thailand, again supported by Japan, with ASEAN countries, and we will then organize several others, for instance with African countries and with Central American and South American countries. And we plan to make this tool as helpful as possible. I think I will stop here in the interest of time, and I’m just checking online if Marlon is there, but I don’t see him.


Marlon Avalos: So, please. Thank you, Iida-san. This is an immersive experience. I just lost my connection, and this is a challenge that developing countries like us face every day, every time. And, well, I was saying that our decision to promote this OECD AI principles implementation toolkit wasn’t a coincidence. It was intentional, based on our national experience, as you can see. And we saw a reality: while the OECD principles provide strong ethical guidance, many countries, especially in the Global South, still lack the tools and institutions to turn those principles into actions. Our initiative was motivated by three aspects: necessity, urgency, and opportunity. Why necessity? Because the AI revolution is reaching all countries, but the capacities needed to adopt it responsibly are still unequally shared. Urgency, because we saw how quickly the benefits of AI were concentrated in advanced economies, leaving others behind, mainly in infrastructure and AI compute capacity. And opportunity, because we have a chance to move from principles to concrete capabilities, mainly in developing countries. As context, we launched our national AI strategy last October. Currently, it’s being implemented with the support of over 50 entities across government, academia, civil society, and the private sector. And we learned a lot of things in this process. First, that a successful strategy must be grounded in reality. That’s why we try to focus on what truly matters: ensuring the ethical, secure, and responsible use, development, and adoption of artificial intelligence, always with people at the center and aligned with our national priorities and values. We prioritized key sectors where AI can add tangible value, like health, education, agriculture, and public services, reflecting our development goals and our comparative advantages like environmental leadership, political stability, and international engagement. We also decided to build a solid foundation first, based on our strategic objectives: first, design flexible and adaptive regulatory frameworks; second, strengthen our R&D and innovation ecosystem; third, develop talent and skills for a changing world; and fourth, leverage AI in the public sector as a tool for inclusion and efficiency. Our guiding principles emerged through diverse benchmarking, from the OECD and UNESCO recommendations to the Hiroshima AI Process Code of Conduct, and our national values rooted in peace and human dignity. As I said, we take the best parts of a lot of instruments. For example, we were very inspired by the European Union AI Act, the U.S. AI Risk Management Framework, AI policies from our regional peers in Latin America, and several papers and reports. We didn’t stop there. We conducted a national risk assessment based on real threats and prior experience. As you can see, we got inspired by a lot of instruments and references, but one of our most important conclusions was that international collaboration is essential, mainly for developing countries like us. That’s why we embedded international leadership as a core line of action in our strategy; our active participation in the OECD as a member, in GPAI, in regional initiatives, European programs and other programs gave us the path to do it. Designing a strategy like this wasn’t easy, because we had a lot of goals, we had a lot of priorities, but we lacked maybe the knowledge that other countries, the developed countries, have.
Even a country like Costa Rica, politically stable, technically skilled and internationally connected, face these challenges, then surely other countries like us will too face that challenge. Just a few days ago, as chair of the OECD Ministerial Council Meeting, Costa Rica proposed the development of this OECD AI principles implementation toolkit, a tool now endorsed by several countries, members and non-members. Getting to this point required months of preparation and negotiation with developed and developing countries, thanks to the support and talent of the OECD Secretariat, represented today by Lucia Rossi on the panel, to design a tool that will contain simple and actionable features to help governments in the struggle of building their own AI policies: a self-assessment and implementation guide that my colleague Lucia Rossi will explain more in her intervention, or already explained during my connection issue. This is not only a Costa Rica initiative; this is a collective project that is entering a phase of regional co-creation with the support of countries like Japan, Korea, Italy, France, the European Union, the Slovak Republic and other countries that are supporting us not only politically but financially. Countries of different regions, the Central American region, the Latin American region, Africa, and Asia, will help shape the toolkit’s next iterations, ensuring it adapts as technologies evolve and societies change. Lastly, the success of the toolkit will depend, we hope, on customization, learning, and evidence. We need features that reflect local needs, processes that evolve over time, and metrics that show that AI is actually delivering value for people. Costa Rica offers its lessons based on our experience in the design of AI policies and the next tools and instruments that we are designing, for example the sandbox, the regulations, and other instruments. And, for sure, our full commitment to help turn the energy that we have and the support that countries gave us into actions, so that no country, regardless of size or income, is left behind in this age of artificial intelligence that we face at this moment. I will stop here, and thank you, and my apologies for the connection issue. Thank you.


Moderator: Okay, thank you very much, Marlon, for sharing your experience and your efforts on this very important initiative. If you allow me to talk a little bit about Japan’s experience: we actually started this discussion in the year 2016 and proposed an international discussion at the OECD on AI principles. That was the beginning of the whole process, and when people agreed on the OECD AI principles, the result was actually very comprehensive and very high-level. So some people said, you know, this is wonderful, but how can we make this into practical policies and actions? So now we are making efforts together, not only Japan, but all together with Costa Rica, Korea and others, of course backed by the OECD Secretariat, to guide governments and other stakeholders to understand and turn this very comprehensive set of principles into actions and practical policies. So this is a wonderful process and I’m very happy to hear these two presentations. And now I would like to move on to Jibu Elias from Mozilla online. So based on your experience and work with Mozilla and also your experience in India’s AI ecosystem, Jibu, what types of community-led or policy-driven initiatives have proven most effective in supporting responsible AI adoption, particularly in emerging economies? And what insights can we derive from these initiatives that could be relevant for policymakers? So Jibu, the floor is yours.


Jibu Elias: Thank you very much, Yoichi-san. It’s an honor to be here to share my experience building a responsible AI ecosystem in India, one of the most complex and dynamic tech environments in the world. So let’s begin with a foundational truth. In emerging economies, AI adoption is not just a question of capacity, but a larger question of context as well. Responsible AI must be inclusive, accessible, and rooted in the values and lived realities of the people it should serve. And at Mozilla Foundation, we tried to meet these challenges head-on through a unique initiative called the Responsible Computing Challenge, or RCC. India has one of the largest, I think the second largest, developer populations in the world. Yet there are a lot of shortcomings. For example, ethics, accessibility, and inclusion are almost entirely missing from the mainstream AI or even the tech curricula. The AI workforce in India is concentrated in elite urban clusters around cities like Bangalore or Gurgaon, leaving the smaller tier-two and tier-three cities, rural communities, and especially women behind. And fundamentally, there’s a growing trust deficit. People are rightfully skeptical of opaque systems that affect their jobs, access to welfare, or even their freedom. So in RCC India, we decided not to start with rather abstract frameworks. We started with people, especially students, academic faculty, women, marginalized groups like tribal populations, and most importantly, first-generation learners who had never been asked what responsible AI meant in their world. So from the starting point, we designed a deeply localized and community-rooted approach where we began with this question: what does responsible AI mean to those who are most affected by it, but at the same time least represented in building it? Our answer came from the communities we mentioned before, you know, students, marginalized communities, and importantly, young innovators across the country. One of the most striking experiences came from one of the colleges we worked with, called Marian College, on a hilly campus in the Western Ghats in Kerala, which became a testbed for some of our ethical tech innovations. One of its standout outputs is an AI-powered tool called WebBeast, which was developed by a first-year BCS student. The tool is a lightweight, open-source, AI-powered accessibility widget, which was built as part of an equitable digital access course we developed with the university. It’s now been used by 30 websites across the world, and it even received a design patent from the Indian Patent Office. So this isn’t just a student project. It shows that even first-year undergraduates, when empowered with ethical frameworks and open tools, can create global public goods. Similarly, we had another tool called PhysioPlay, which is a WhatsApp-based AI simulation tool for physiotherapy students, designed to help them build diagnostic skills through gamified real-world casework, built by a physiotherapy student. SpeakBoost, a communication coaching platform, provides AI-powered feedback on fluency, filler words, grammar and tone, supporting students preparing for interviews and presentations. TwinSage was developed by, again, a community of students from Maharashtra, coming from very marginalized groups who don’t have the privilege of access to high-end technology.
So they developed this tool, which is a personal finance chatbot that teaches college students about budgeting, saving, and financial planning through natural language conversations. Each of the tools we mentioned here is, first of all, a community-rooted tool, in some cases built by students for their peers, who understand what is lacking in their ecosystem and what they need to build. They are ethics-aware, focused on responsible AI, and open source first. They represent not just innovation, but what democratized digital leadership looks like. While students demonstrated what responsible tech looks like from the ground up, our work with faculty led to initiatives addressing another critical frontier of AI: explainability in high-stakes domains. In our work with the Indian Institute of Information Technology, IIIT Kottayam, we developed something called the FactSets Lab, which launched a suite of explainability dashboards designed to tackle the larger black-box problem in AI. One of their dashboards helps users understand why an AI system made a decision, using SHAP values, bias audits, and fairness metrics. Similarly, we developed a dashboard called AI Fora, which enables real-time interactive testing of AI predictions on real data sets, making model behavior visible even to non-technical users. And finally, IXI, which applies explainability to medical AI by using Grad-CAM heat maps to highlight what influenced diagnostic decisions in retinal scans. These are open, and the key impact is that they give everyday users, regulators and policymakers the ability to question and, importantly, correct the course of AI. This is the future of public AI infrastructure: transparent, participatory, and grounded in accountability. And finally, our most powerful insights came not from labs, but from communities often left out of the AI conversation altogether. At the Lendi Institute of Engineering and Technology in Andhra Pradesh, we ran an ideathon with students from rural and semi-urban backgrounds, where we guided activities in empathy, inquiry, and creative problem-solving, and students identified challenges in their own communities, from waste management to safety to water scarcity. They even built tech-assisted solution blueprints and video pitches, applying digital ethics in a more practical and personal way. In parallel, we took the RCC model even further to an area called Chintapalli, a tribal area in the Eastern Ghats, where we conducted workshops with 56 tribal women, many of whom had never accessed AI tools before. We did it in the local language, Telugu, through participatory storytelling, visuals, and guided use of AI tools such as ChatGPT, to map real problems such as unemployment, safety and healthcare, and to explore how AI could support micro-enterprises in herbal medicine, food production, and arts and crafts, some of the main livelihoods for these communities. The result was not just minimal tech exposure but, I’m happy to say, a transformation powered by a powerful technology like AI and grounded in culture, peer collaboration, and dignity-first design. So these workshops proved that responsible AI doesn’t begin with the tools, it begins with trust.
So, wrapping up, let me say that the main lesson from India’s AI ecosystem, and what we see works in emerging economies or the Global South, or the global majority as we call it, especially having worked at the intersection of civil society, academia, and national policy, is that we need ecosystems that are locally rooted, capacity-driven, and above all, people-centered. And the most powerful lesson here is: don’t just ask who builds AI, ask whose future is it building? Because in countries like ours, trust is not a given, it’s earned. And when communities are trusted as co-creators, not just end users, they don’t just adopt technology, they transform it. So if we want AI that is safe, just, and truly inclusive, we must design not only the code and policy, but with humility, memory, and imagination as well. So thank you very much for this opportunity. I will stop here.


Moderator: Okay, thank you very much, Jibu, for this wonderful story. It’s great to hear about these experiences from the ground, and congratulations on your work. India’s success with DPI and digital public goods is a powerful example of good policy practice, and I’m very happy to hear that responsible AI principles are backing up such success in digitalization. So now I would like to turn to Ms Anne Rachel Ng. From your perspective as a digital policy leader in Africa, what are some of the key opportunities and also challenges for African countries in developing inclusive and context-sensitive AI policies? How can international initiatives like the OECD AI policy toolkit better support countries in the region? What key considerations should be made? So Anne Rachel, the floor is yours.


Anne Rachel: Thank you very much, and good afternoon everybody. I’m actually very happy to go after Jibu in this conversation because he gave a lot of examples that I can relate to. But I’m going to start by saying that the Global AI Index places African countries in general among the “waking up” or “nascent” groups when it comes to AI investment, innovation and implementation, in an index led by countries such as the United States and the United Kingdom. Egypt, Nigeria and Kenya, for example, are nascent, while Morocco, South Africa and Tunisia are waking up. There’s a lot more waking up, and I really hope that we will soon, you know, all be graduating. So, we do face opportunities and challenges, and those are basically in developing everything that is, as Jibu said, inclusive, context-sensitive AI policies, and I’m pretty sure international initiatives like the OECD toolkit can help, because it does give, you know, a few places where we can pick and choose and also make sure that we look into others’ experiences, so that in doing what we have to do to get there, we do it the right way. In terms of the key opportunities, for example, we do have development barriers that can be alleviated. AI can accelerate our, you know, critical sectors like healthcare, and in there, for example, if I take the case of my own country, that is Niger, we started years ago something that is called a programme on smart villages, and we started with healthcare. So, you know, with telemedicine that is geared mostly to skin diseases, because it was easier to take pictures, send them to dermatologists and, you know, get treatment to people, and also, you know, disease prediction. But it’s gone to the point that, for example, I have a group of young people right now at home who are working on a device. Remember the oximeter during COVID, where you would measure oxygen levels in a person who was sick? A lot of researchers found out that that is a device that does not gauge oxygen levels the right way in people who are melanated. So they decided that it was something they wanted to work on during COVID. And today, they actually have a little device that is just like the regular oximeter, but whereby the light can penetrate darker skin and give true measures of what the oxygen level is in a person’s body. And in, you know, agriculture, precision farming and agroforestry are among the places where we’ve been using AI; in education, of course, personalized learning, and the use of languages in general. Because this is a place where nobody grows up with just one language; in Africa, hardly anyone does. It is important, when we’re trying to get context-sensitive AI, to make sure that, to get trustworthiness, we have people who really understand what’s in it for them. We tend to have policies that are geared to people who can read and write what we call the official languages. And then we forget that in our settings, we have about, you know, 60 to 80% of our populations that are still rural. So they don’t speak English, they don’t speak French. And if you want them to be part of this, you really have to explain it to them in their language. And that’s also one of the reasons why the little applications that the kids are building, in terms of voice recognition software that can help people, whether in fintech or healthcare or other areas, are really helping. We do have another opportunity, which is simply that we have a very young population in the region.
Now, we do need a skilled workforce, so capacity development and deployment is something that we absolutely need. One of the big constraints that also comes with that is that kids do not grow at the speed that artificial intelligence is growing, and when I take again my own country, we have, you know, 65-plus percent who are under 25 and at least 50 percent who are under 15. So it’s really a very young population, and as much as we need a lot of capacity building, we need to give it time, you know, for the kids to get to the point where we can have a sound and real workforce. We do have local innovation ecosystems that are really growing AI solutions geared to the local context, for example using a lot of mobile financial tools to make sure that everything from support for women agriculturists all the way to land sharing and deeds recognition in rural areas is being done. So those are, you know, some of the key opportunities, and of course we do have the regular challenges that everybody knows in terms of infrastructure. Again, when I take the case of my country, in the African region we have 16 countries that are landlocked. So connectivity infrastructure is already something that is quite dear. You couple that together and you have only about 22% of Africans who have broadband access. So that’s still something that we need to work on, because it exacerbates the divide. In terms of policy and regulatory frameworks, we have a deep fragmentation also, because many countries lack cohesive AI strategies or harmonized regulations. So you do have uneven implementation or even, you know, missed cross-border collaboration opportunities, because, in as much as we have some of the ministerial meetings, for example, on the continent to talk about one policy or the other, we absolutely need, if we’re going to use, you know, AI tools in fintech, to make sure that the finance minister understands it’s not only the, you know, technology or digital minister talking about this. We need to make sure that if we’re going for a national ID, the person who is going to be ID’d understands the reasons why and what it’s bringing to them in terms of advantages. And we also need all of the different government ministries, like, you know, Interior, Defense, all the way to the national data protection agency, to talk together to make sure that whatever is put in place is really protecting people’s privacy. We also have, of course, data scarcity and bias. As I just said, we do have a lot of facial recognition systems, for example, globally that are trained on non-African data, and they perform poorly on our people. And in general, right now, at the minute that we’re speaking, only about 2% of data generated on the continent is used locally. So it’s basically hard to get real data back to our institutions, just because it’s managed by global platforms that do not necessarily want to share it readily with us. And again, we do have the capacity constraints, just because governments struggle to keep pace with AI advancements. You’ve barely started talking about data privacy when your agricultural minister wants to put a lot more stuff in there, and environment, and everything. So all of it collides, to the point where, honestly, governments are having a hard time sieving through the little data that they have to make sense of it locally. So toolkits like the OECD one can help.
But it can only help if we really have modular, flexible guidance, also, you know, for low-resource settings. So things like Jibu and Marlon talked about are really interesting and can be looked at, and that can also help some of our countries, because it’s much better to have real use cases than generic benchmarks; those are great, but, you know, they don’t really show you how to make it work at home. In terms of capacity building, we definitely need more AI research centers. We need policy training and knowledge sharing, you know, with platforms. How to make that happen is also one of the things we’re grappling with, and we need all of that, of course, so that our own policymakers can be empowered to have discussions at the level where, you know, policies can then trickle down to people. And, of course, we all talked about it: inclusive governance. Globally, we must include African voices to avoid the one-size-fits-all. You know, I love the example of that oximeter, because we all kind of saw it and experienced it somehow, but to suddenly discover that this little device that we were trusting to do something is not really doing the right thing for us was really eye-opening. So it’s important that, you know, everybody’s perspective goes into making sure that these global toolkits are done the right way, looking at people’s, I guess, particular settings and contexts. Also, in terms of developing public-private partnerships, it is something that is starting to get more traction in the region, because of course government cannot do it all. We absolutely need the private sector to, you know, be part of this whole process, and to also make sure that they can develop things that they can, you know, live on. So, having said that, I will conclude by saying something that makes us all laugh all the time, that maybe a few here can relate to, at least if you’re African. We do say Europeans have watches, we have time. I’m just saying this to plead for, you know, taking the time to do things, because rushing into doing things that are not geared to the context just keeps us behind more than anything, because people do not understand what it is we’re trying to do or where it is we’re trying to get to. So it is truly important that everybody is listened to, everybody is part of the discussion, everybody is brought to the table, so that the trustworthiness that we want is there not only in AI, but in the whole, you know, digital transformation that we want to see in our countries. Thank you.


Moderator: Okay, thank you very much for this very insightful presentation, Anne Rachel. And I saw a lot of commonalities between your country and our country, issues such as education or maybe… spreading the idea is always very difficult in Japan. But I really agree with the point that, you know, the inclusive multi-stakeholder approach is definitely important in this area. So thank you very much. And for the sake of time, I thank all speakers for those rich and insightful contributions. And now we turn to the second part of our session, which will focus on how governments are using AI in practice across key public functions. This is also of relevance to the previous segment, as the OECD AI Policy Toolkit will have information on sectors, including the public sector. So I’m pleased to hand over the moderation to Ms Seong Ju Park, Policy Analyst at the Innovative Digital and Open Government Division of the OECD, who will lead the next segment. So Seong Ju, please.


Seong Ju Park: Thank you, Mr Moderator. So before we start, I just want to quickly share: I was recently back in my country, Korea, and I needed to explain the history of a palace to the friends I had over there. Before, I would have searched for the palace, tried to understand the information I found, and then explained it in English to my friends. But this time, I just asked ChatGPT to give me a very catchy explanation about this palace, and then I just played it for my friends. So AI has changed many aspects of our lives, how we communicate, how we seek information. And this is affecting governments as well. It is accelerating the digital transformation of the public sector, changing how governments work, how governments design and deliver policies and services. And it has also changed the expectations and needs of the citizens and businesses that they serve. So before I invite the two panelists that I have here, I want to quickly present to you some of the OECD findings on AI in government. May I have the slides? Okay, can we put it on, it’s in presenter mode. Thank you. So AI as a tool has great potential to support government to improve productivity, responsiveness and accountability. AI can automate and streamline mundane and repetitive tasks, reallocating the efforts of public servants to more meaningful tasks, such as interacting with citizens and businesses. And AI can also support tailoring processes and personalizing government services to meet users’ needs. AI can enhance decision-making by supporting governments with making sense of the present and better forecasting for the future. AI can also support enhancing accountability and detecting anomalies. Also, AI can help governments unlock opportunities for external stakeholders. So how can governments enjoy these potential benefits in a trustworthy and responsible way? The work on governing with AI seeks to address this question of how to develop and deploy trustworthy AI in government. And we started by looking at what has been done across different government functions. We have conducted an analysis of use cases across 11 government functions covering three broad categories: policy functions, key government processes, and service and justice. In total, 200 use cases were selected based on their influence, diversity, and representativeness. Based on the use cases, literature research, and recent policy developments, we were able to identify key trends shaping the current state of play, major risks, and implementation challenges that governments face, and also explore potential uses and future pathways. So the first trend we saw is that use cases are unevenly distributed. There are a number of potential explanations for this distribution that you see on the screen. I won’t be able to share all, but I will try to share a couple with you. The policy functions most represented tend to be the ones most in the public eye, potentially suggesting a focus on areas that have immediate visibility to citizens. Factors going into this could involve both more demands from the citizens, but also a desire among governments and political leaders to visibly demonstrate the value of using AI in government.
And we also found that some functions face particular barriers or complexities, such as stricter rules on data access and sharing, and stricter requirements for thorough audit trails in public integrity. Another trend we saw is a big emphasis on automating and personalizing processes and services. Slightly more than half of the examined use cases seek to contribute to the automation, streamlining, tailoring and personalization of government processes and services, particularly in justice, public services, civic participation and regulatory design and delivery. We found that four out of ten use cases seek to enhance decision-making, sense-making and forecasting, with most concentrated in public services, regulation and civic participation. I have some of the use cases here; I won’t be able to go through them, but the OECD is planning to launch a more comprehensive report where you will be able to find all 200 – well, some of the 200 – use cases that I mentioned earlier. So I will skip through the different use cases we found for supporting different functions of government, and I will go to the most important topic when it comes to AI in government. It might not be a fun topic for us to discuss, but government’s use of AI is quite different from the use of AI in the private sector. It comes with higher risk. It has potential dangers and threats that could seriously harm individuals’ lives and also society as a whole. It could potentially undermine the public’s trust in government, the legitimacy of government’s AI use, and even democratic values. So to address these concerns, it is important to continuously consider potential risks that may not exist today, and here on the screen you see the five general risks that we identified through our research. These risks range from ethical risk, operational risk and exclusion risk to public resistance and missed opportunities, and, as was mentioned during the earlier segment, a widened gap between public sector and private sector capacities. Beyond grappling with these risks, we also found that governments all face a number of implementation challenges when seeking to develop and use AI. We found that there are many use cases; however, they remain at a piloting stage, and many are struggling to scale the pilots into wider systems or services. There is also large room for improvement when it comes to actionable guidelines. Governments also need to navigate a rigid regulatory environment. The next challenge is shared by almost every government on this planet: there are inadequate data, skills and infrastructure in the public sector. In addition, governments need to better understand the costs and benefits of AI in the public sector; the costs and benefits around the use of AI in government are still quite unknown. That makes it quite difficult for policymakers to make business cases to scale up their AI efforts. So to support governments to mitigate these risks and overcome these challenges, we at the OECD have worked together with partner countries on a framework to support governments’ AI efforts. This is an evolving framework, and we only seek to provide guidance for countries so that they can continue on this AI journey. As you can see, the framework is organized around three sections. The first is the level of engagement.
This includes the different stakeholders that needs to be engaged in building the foundations for a responsible use of AI in the public sector. Our previous speakers, they mentioned involving different stakeholders not only from the public sector but also from private academia users into devising AI strategies or developing AI solutions. So it’s important to have a different actors around the table. Then the second element is enablers. So enablers include areas where policy actions can be prioritized to establish a solid enabling environment and then unlock the full-scale adoption of AI in the public sector. So these areas include governance and capabilities, collaborations and partnerships where policymakers currently indicate the existence of important constraints and shortcomings. The last element is on guardrails. So guardrails include options for policy levers that governments can consider developing for a responsible, trustworthy, and human-centered use of AI in the public sector. So this can range from soft laws and guidance as standard to legislation on AI enforcement mechanisms or oversight bodies. So this work is a part of a bigger OECD project called a Horizontal Project on Thriving with AI. Under this project, there are specific deliverables focusing on AI in government. So as I mentioned before, there will be a OECD report on governing with AI, which goes much deeper and then into details of what I just quickly presented with you. And then there will be a dedicated hub for AI in the public sector. It will be on oecd.ai. It will be sort of a repository for policymakers, practitioners and researchers. And we are planning to have a global data collection exercise on AI policies and then use cases, which will also be presented through OECD AI Policy Observatory. So thank you very much. That was my very quick presentation on, just to give you an idea on where OECD research has been when it comes to AI in government. So now I would like to invite two panelists to hear from them on what it means for governments to harness AI in practice. So the first topic will be around the AI opportunities in the public sector. So I would like to invite Katarina first. So Katarina, Norway has been exploring AI to enhance the efficiency and then effectiveness of public sector services. Can you share with us some early impact that you see or early impact that you expect from Norway’s AI use in government?


Katarina de Brisis: Thank you, Seong Ju, for your introduction. Artificial intelligence tends to be perceived by now as being ChatGPT or the like, but actually artificial intelligence is much more than that, and it has been applied and used in Norway for some years already in many government services, especially in the health sector. We have several applications that are really having a practical impact on people’s lives. One case is our Vestre Viken hospital trust, where they implemented AI to analyse X-rays of fractures, and it really saved time for the patients. Within 79 days, many patients, about 2,000, were able to go home immediately instead of waiting for the results of their analysis and their diagnosis, and this is now being deployed to several other hospitals. So it gives really practical benefits on the ground. Then we have our Norwegian tax administration, which has developed an AI model that, combined with rule-based models, analysed submitted tax returns looking for missing returns on the lending out of secondary homes, and that actually led to an 85% detection rate, as opposed to 12% before, and produced 110 million kroner in additional revenue. In cancer treatment, there are hospitals using AI to produce three-dimensional maps of internal organs to allow more targeted radiation treatment, and it has already been in use since 2023. There are also hospitals using AI to give more accurate analysis of patients with epilepsy, so that it can be diagnosed precisely and quickly. Our student loan agency uses AI to do housing verification checks, just to be sure that no public funds are misappropriated by students saying they live in one place while actually living somewhere else and collecting grants for that. Our police authorities use AI for transcriptions of interrogations when they investigate a crime, which saves a lot of time because the AI transcribes spoken language into written language immediately. So, in general, we have a lot of this kind of use already, but still the potential is very great. We carried out a state employer survey in 2025 which asked 200 state agencies about their use of AI, and 70% answered that they actually use AI in their daily work. I think this is mostly generative AI systems, which they use for things like designing job advertisements, case processing, analytical work, helping them in recruitment procedures and this kind of thing. But that is the state level; we have about 400 or more municipalities, which are very small, and the potential there is much greater. We still have a way to go there, and what we also need to work on is better tools to assess the benefits of AI. We have cases, we have real benefits already produced, but to look across the board we need tools that will really give us a methodological basis for assessing the benefits of introducing AI in various sectors and levels of government; that is something we need to work more on. So I’ll just maybe finish here.


Seong Ju Park: No, thank you. That is a really important point. I think many governments are still trying to find the best way to measure what benefits and impact the use of AI actually brings in the long run. But some of the cases that you shared clearly demonstrated that the use of AI has supported the Norwegian government in enhancing efficiency, but also in enhancing people’s lives, saving them time and money. Then I will go to Dr. Kim. So, Dr. Kim, you have conducted extensive research on Korea’s use of digital technology, including AI, for enhancing services and policies. Could you describe the key elements that governments should consider when using AI to ensure that it is used effectively, innovatively, and inclusively?


Jungwook Kim: Thank you. So Korea is ranked as one of the leading countries in OECD Digital Government Index, which was published recently. And as Anne Rachel states, there’s some different stages of development or adoption of the AI technologies in the public side. But I’m pretty sure that there is no graduation. That means it’s a long journey and it’s a gradual change of the government services delivered to the public. So I’d like to explain and address some of the key enablers or pillars of the Korean history of AI adoptions or digitalization in public services. And the first one is innovation. Innovation is change. Change in your life, change in what you work, and change in what you address your needs and deliver your services. So for the innovations, we have three different aspects of the targets. One is data. So we need open data, but we need machine-readable data, which is not available before. That means we need to make some researches on development in data and accessing data and processing data and make aggregation and changing the data formats so that we can utilize it in AI adoption. So we need change in the data. And the other one is infrastructure. So each and every government has infrastructure in dealing with and providing public services, but for the adoption of AI, it has challenging aspects. That means we need innovative ways to take care of the current infrastructure of the public service delivery. And the third one is public service delivery itself. That means we need brand new citizen-centric AI public services, which was not available before. However, it is feasible, and we need to coin out the way we provide the services and the way we try to address the demand by the public citizens. So those are innovations like data and infrastructure and public service development. And the other pillar is inclusion. That means we should take care of the digital divide for sure. and we experience digital divide, even Korea experience digital divide and by gender, by region, by income, also by the education. So, we need enhanced accessibility for the AI adoption for the public services, of course. That might be enhancing accessibility through AI-driven hyper-personalized services by the public sector or focus on the effectiveness, access of the vulnerable peoples or isolated groups so that they can take care, they can assess easily for the public services. The other one is capability. So, we need educate, we need train the public officers as well as the citizens because it’s changing the life, you know, innovative way to take care of the issues. So, we need inclusion which can be separated into accessibility enhancement, also education for the capacity building and capability increasement, also ability increase. So, those are two pillars of the AI adoptions in public services. And the final element is investment. That requires huge resources in adopt and develop and deploy those AI services into the public sector. So, innovation, inclusion requires investment. So, you should spend your money wisely and strategically in order for the AI adoptions.


Seong Ju Park: Thank you very much. So, data, infrastructure, and also innovating how we approach public service design: these are hot topics for many of our delegates as well. And then the last point on investment. With AI, there is an even bigger spotlight on governments needing strategic thinking around how they are going to use public money when investing in digital or AI-related systems and services. And I cannot agree with you more that we are on a long journey; I often say it is a moving target, so there is always a new target every day, and no graduation. I think this holds for many governments around the world. So thank you for sharing the key policy issues. I understand that your work also includes elements to support safe and trustworthy use of AI. How could governments use AI in a responsible and trustworthy way? What are the key elements to avoid or mitigate the five risks that I mentioned earlier?


Jungwook Kim: Thank you. So the question deals with the safety and security issues around AI in a public organization’s or public body’s work with AI technology, and there are big challenges in dealing with those issues, especially for public services, because a lot of detailed personal data is accumulated and processed in public bodies. That means we need to secure the safety of that data, and that’s the top priority. We need to protect citizens’ rights to their personal data, not just give access to personal data to anyone or to some of the stakeholders. Rather, you need consensus and explicit consent for utilizing and processing personal data. That is one way to address safety concerns around personal data and privacy. The second one is system security. Systems are vulnerable to hacking or other malicious functions, and open network infrastructure and mobile-based systems have particular challenges here. So the system itself should be secured, designed and maintained in a safer way. That is another challenge in dealing with safety issues. And the third one is AI safety and governance. As you said, it’s a moving target, so we need agile measures to take care of AI safety issues. We have examples that breach privacy or harm citizens’ safety, and there are many dialogues on these, but each and every country should establish safety and governance in the right manner, in a sound system, so that they can take care of those issues in real time and even in advance, to minimize the risks or uncertainty associated with AI implementation. Those issues are not independent from our daily lives; rather, they have a great impact on the daily lives of citizens at a large scale. So for public services, AI employment and deployment should be narrated clearly in the AI safety and governance of each specific country. That’s what we can say based upon the Korean experience.


Seong Ju Park: Thank you very much. This is really important when it comes to data, and especially sensitive data, because we found that some sectors, including social security, healthcare and justice, hold much more sensitive and personal information on users, citizens and businesses. And I cannot agree with you more on the need for agile governance. Many governments have been talking about being more agile, but I think we haven’t reached that point yet, and it will be important to have governance that allows proactive and timely measures to prevent or mitigate the risks that we see. Katarina, I will come to you. What concrete initiatives is Norway implementing to ensure that AI in government is safe and trustworthy?


Katarina de Brisis: Thank you. Let me start with a couple of reflections on the challenges of implementing AI. For us, one of the main challenges is the leadership and competence level in government agencies, which underpins trustworthy use of AI. We need managers in state agencies who understand both the opportunities and the risks associated with using AI, and we know that 60% of our state organizations already implement measures to increase employee competence. These are the people who are actually working with and managing artificial intelligence-based systems. And 43% have created internal guidelines for using AI. So this is building a foundation within each public agency. One other important issue is the dialogue between the employer, the management, and the employee representatives, so that those people also feel they have a finger on the levers of how AI is being deployed and implemented in the agency. The second thing is access to data. I agree with Professor Kim that this is a crucial issue. We have a number of very good quality registers, and we have been working for several years on opening those data, but the opening must happen in a responsible way. That’s why in Norway, accessing personal data for the purpose of training and using AI systems requires a legal basis. You cannot just say, okay, I have this data, I pick them and then I train a system and here we go. You have to have a legal basis, and procuring that legal basis may take time with the legislative branch. When you have it, you can proceed, but within safety and security constraints. Another thing is, of course, to have a legal framework in general. Norway is now working on implementing the EU AI Act, which will be our overarching framework for using AI in Norway. We aim at implementing it on par with EU countries to create a level playing field. Already in 2020 we put forward a national strategy for AI, which set out seven principles for responsible and trustworthy AI. Those principles are further endorsed by our new digitalization strategy for Norway, published just recently in the fall of 2024. In that strategy our government has very ambitious goals: they want public agencies to adopt AI at a very quick rate. Already in 2025, 80 percent of public agencies should use AI, and by 2030, 100 percent. So as you see, it’s very ambitious.
But we work quite diligently to make it possible, both within agencies, as I was describing, and at the national level by investing in AI infrastructure. The government has, for example, invested 40 million kroner in developing foundational models in our own languages, Norwegian and Sami, based on our societal values, so that we have systems that really reflect who we are, not the whole of the internet. The other investment we are looking at is our high-performance computing infrastructure, to enable us to actually develop and train AI at the scale that is needed. This infrastructure may be used by both public and private entities. For example, we have one startup called Digifarm that uses AI to help farmers predict what to sow, when and where, and this requires computing power, so this kind of infrastructure can provide it even to small startups and companies. And of course, in enforcing the AI Act, we are establishing a national enforcement structure: our national communication authority will look at compliance with the AI Act, and we will also establish AI Norway, which will be an arena for sharing experience and guidance, and for testing systems in a regulatory sandbox in a very safe environment before deployment. We will also collaborate with our data protection authority on this regulatory sandbox, so systems trained on personal data may be tested there. So this is an outline of how we work at both the micro and macro level on enabling trustworthy and safe AI in Norway. Thank you.


Seong Ju Park: Thank you very much for sharing Norway’s experience and what Norway has been doing. I remember one tool implemented by a country I will not name: it was supposed to support public sector officials in their jobs, but the users of that tool weren’t really trained on how to use it, and in the end what was supposed to be a supporting tool ended up making wrong decisions for the government. So I see how building employee capabilities and leadership around AI and digital is key to ensuring trustworthy use of AI. I will conclude our segment here. Thank you very much to you both, and I give the floor back to you, Mr. Moderator.


Moderator: Okay, thank you very much to all the speakers in segment two for the wonderful discussion, and I apologize to all the speakers in segment one that I cannot come back to you for a finalizing comment. Now I will open the floor to the audience for any questions or comments on both segments of this open forum. So, no questions. I’m sorry the time has run out, and sorry about the time management, but I hope you enjoyed the discussion. If you have any questions, please contact the individual speakers directly. Let me also share that we will have another session on AI tomorrow morning at nine o’clock in the conference hall. Thank you very much to all the audience and to all the speakers, and this session is closed. Thank you very much.


Marlon Avalos

Speech speed

116 words per minute

Speech length

951 words

Speech time

487 seconds

Costa Rica initiated the toolkit based on their national AI strategy experience, recognizing that developing countries need practical tools to implement OECD principles

Explanation

Costa Rica proposed the OECD AI principles implementation toolkit after experiencing challenges in developing their own national AI strategy. They recognized that while OECD principles provide strong ethical guidance, many countries in the Global South lack the tools and institutions to turn those principles into concrete actions.


Evidence

Costa Rica launched their national AI strategy in October with support from over 50 entities across government, academia, civil society, and private sector. They conducted national risk assessment and benchmarked against various international instruments including EU AI Act and U.S. AI Risk Management.


Major discussion point

OECD AI Principles Implementation Toolkit Development


Topics

Development | Legal and regulatory


Agreed with

– Lucia Rossi
– Moderator
– Seong Ju Park

Agreed on

Practical implementation tools and frameworks are needed to translate AI principles into action


International collaboration is essential for developing countries, requiring customization, learning, and evidence-based approaches

Explanation

Avalos emphasized that even politically stable and technically skilled countries like Costa Rica face challenges in AI policy development, making international collaboration crucial. The success of the toolkit depends on features that reflect local needs, processes that evolve over time, and metrics that show AI delivers value for people.


Evidence

Costa Rica’s active participation in OECD, GPAI, regional initiatives, and European programs provided the foundation for their strategy. The toolkit is now endorsed by several countries and entering regional co-creation phase with support from Japan, Korea, Italy, France, EU, and Slovakia.


Major discussion point

International Cooperation and Knowledge Sharing


Topics

Development | Legal and regulatory


Agreed with

– Anne Rachel
– Jibu Elias

Agreed on

International collaboration is essential for AI development, especially for developing countries


Technical connectivity issues demonstrate daily challenges that developing countries face in AI implementation

Explanation

During the session, Avalos experienced connection problems which he used as a real-time example of the infrastructure challenges that developing countries face every day. This technical difficulty illustrated the broader connectivity and infrastructure barriers that hinder AI adoption in the Global South.


Evidence

Avalos lost his internet connection during the presentation and had to reconnect, stating ‘this is a challenge that developing countries like us face every day, every time.’


Major discussion point

Challenges in AI Implementation for Developing Countries


Topics

Infrastructure | Development


Agreed with

– Anne Rachel

Agreed on

Infrastructure and connectivity challenges are major barriers for developing countries


Lucia Rossi

Speech speed

108 words per minute

Speech length

682 words

Speech time

377 seconds

The toolkit will provide self-assessment tools and region-specific guidance through co-creation workshops to help countries bridge AI divides

Explanation

The OECD AI principles implementation toolkit will be an online tool with two main components: a self-assessment that guides countries through areas to strengthen in AI governance and priorities to establish, followed by suggestions based on best practices from comparable regions. The toolkit emphasizes co-creation through regional workshops to understand challenges and resource needs.


Evidence

The toolkit will build on the OECD AI Policy Observatory repository and include regional workshops starting with one in Thailand supported by Japan with ASEAN countries, followed by workshops with African countries and Central/South American countries.


Major discussion point

OECD AI Principles Implementation Toolkit Development


Topics

Development | Legal and regulatory


Agreed with

– Marlon Avalos
– Moderator
– Seong Ju Park

Agreed on

Practical implementation tools and frameworks are needed to translate AI principles into action


Jibu Elias

Speech speed

140 words per minute

Speech length

1209 words

Speech time

515 seconds

Responsible AI must be inclusive, accessible, and rooted in local values, focusing on communities most affected but least represented in AI development

Explanation

Elias argued that responsible AI adoption in emerging economies requires focusing on context and inclusion rather than just capacity. The approach should center on people, especially students, marginalized communities, women, and first-generation learners who are most affected by AI but least represented in building it.


Evidence

Mozilla’s Responsible Computing Challenge in India worked with students, academic faculties, women, tribal populations, and first-generation learners. They conducted workshops with 56 tribal women in Chintapalli using local language Telugu and participatory methods.


Major discussion point

Community-Led and Inclusive AI Development


Topics

Development | Human rights principles


Agreed with

– Anne Rachel

Agreed on

Community-centered and inclusive approaches are crucial for responsible AI development


Students and marginalized communities can create global public goods when empowered with ethical frameworks and open tools

Explanation

When provided with ethical frameworks and open-source tools, even first-year students can develop innovative AI solutions that address real community needs. These tools demonstrate that democratized digital leadership can produce globally relevant innovations rooted in local contexts.


Evidence

Examples include WebBeast (AI-powered accessibility widget by a first-year BCS student, now used by 30 websites globally and received Indian design patent), PhysioPlay (WhatsApp-based AI simulation for physiotherapy students), SpeakBoost (communication coaching platform), and TwinSage (personal finance chatbot for college students).


Major discussion point

Community-Led and Inclusive AI Development


Topics

Development | Sociocultural


Trust is earned through community co-creation rather than just end-user adoption, requiring locally rooted and people-centered ecosystems

Explanation

Elias emphasized that in countries like India, trust in AI systems is not automatically given but must be earned through inclusive development processes. When communities are treated as co-creators rather than just end users, they don’t just adopt technology but transform it to meet their specific needs and contexts.


Evidence

The tribal women workshops in Chintapalli resulted in tech transformation powered by AI but grounded in cultural values, peer collaboration, and dignity-first design. The workshops proved that responsible AI begins with trust-building rather than just tool deployment.


Major discussion point

Community-Led and Inclusive AI Development


Topics

Development | Sociocultural


Agreed with

– Marlon Avalos
– Anne Rachel

Agreed on

International collaboration is essential for AI development, especially for developing countries


Anne Rachel

Speech speed

124 words per minute

Speech length

1723 words

Speech time

833 seconds

AI opportunities exist in healthcare, agriculture, and education, but require addressing infrastructure constraints and capacity building for young populations

Explanation

African countries have significant opportunities to use AI for development challenges in key sectors, but face constraints in connectivity and need time to build workforce capacity. The young population (65% under 25 in Niger) represents both an opportunity and a challenge requiring patient capacity development.


Evidence

Niger’s smart villages program started with telemedicine for skin diseases, students developed an oximeter for melanated skin during COVID, and various AI applications in precision farming, agroforestry, personalized learning, and voice recognition software for local languages.


Major discussion point

AI opportunities exist in healthcare, agriculture, and education, but require addressing infrastructure constraints and capacity building for young populations


Topics

Development | Infrastructure


Agreed with

– Jibu Elias

Agreed on

Community-centered and inclusive approaches are crucial for responsible AI development


Only 22% of Africans have broadband access, and 16 African countries are landlocked, creating connectivity challenges

Explanation

Infrastructure limitations significantly constrain AI adoption across Africa, with low broadband penetration rates and geographic challenges for landlocked countries. These connectivity issues exacerbate digital divides and limit access to AI technologies and services.


Evidence

Specific statistics: 22% broadband access rate across Africa, 16 landlocked countries in the region, and connectivity infrastructure costs are particularly high for these geographic constraints.


Major discussion point

Challenges in AI Implementation for Developing Countries


Topics

Infrastructure | Development


Agreed with

– Marlon Avalos

Agreed on

Infrastructure and connectivity challenges are major barriers for developing countries


Data scarcity and bias affect AI systems, with only 2% of African-generated data used locally and facial recognition systems performing poorly on African populations

Explanation

African countries face significant data challenges where most locally generated data is managed by global platforms and not shared back with local institutions. Additionally, many AI systems trained on non-African data perform poorly for African users, creating bias and effectiveness issues.


Evidence

Only 2% of data generated on the African continent is used locally, and facial recognition systems globally are trained on non-African data and perform poorly on African people.


Major discussion point

Challenges in AI Implementation for Developing Countries


Topics

Human rights principles | Development


Taking time to develop context-appropriate solutions is more important than rushing implementation without proper understanding

Explanation

Anne Rachel emphasized the African saying ‘Europeans have watches, we have time’ to advocate for patient, context-sensitive AI development. Rushing into AI implementation without proper understanding of local contexts and needs keeps countries behind rather than advancing them.


Evidence

The African proverb ‘Europeans have watches, we have time’ and emphasis on the need for everyone to be part of the discussion and brought to the table for trustworthy digital transformation.


Major discussion point

International Cooperation and Knowledge Sharing


Topics

Development | Sociocultural


Agreed with

– Marlon Avalos
– Jibu Elias

Agreed on

International collaboration is essential for AI development, especially for developing countries


Disagreed with

– Katarina de Brisis

Disagreed on

Pace and approach to AI implementation


Katarina de Brisis

Speech speed

120 words per minute

Speech length

1219 words

Speech time

606 seconds

Norway has successfully implemented AI in healthcare for X-ray analysis, tax administration for fraud detection, and police transcription services, showing practical benefits

Explanation

Norway has deployed AI across multiple government sectors with measurable impacts on efficiency and citizen services. These implementations demonstrate concrete benefits including reduced waiting times for patients, increased detection rates for tax fraud, and time savings for police investigations.


Evidence

Vestreviken hospital’s AI x-ray analysis saved 2000 patients 79 days of waiting time; tax administration AI increased detection rates from 12% to 85% and generated 110 million kroner in additional revenue; police use AI for automatic transcription of interrogations.


Major discussion point

AI Applications in Government Services


Topics

Economic | Legal and regulatory


70% of Norwegian state agencies use AI in daily work, but municipalities and benefit assessment tools need further development

Explanation

While AI adoption is widespread among state agencies for tasks like job advertisements and case processing, there’s still significant potential for expansion, particularly at the municipal level and in developing better tools to assess AI benefits across different sectors and government levels.


Evidence

Survey of 200 state agencies showed 70% use AI daily, mostly generative AI for designing job advertisements, case processing, analytical work, and recruitment procedures. Norway has 400+ municipalities with much greater potential for AI adoption.


Major discussion point

AI Applications in Government Services


Topics

Economic | Development


Leadership competence, legal frameworks, and employee training are crucial for trustworthy AI implementation in government

Explanation

Successful AI implementation requires managers who understand both opportunities and risks, proper legal basis for data access, and comprehensive employee training. Norway emphasizes building competence within agencies and ensuring dialogue between management and employee representatives.


Evidence

60% of state organizations implement measures to increase employee competence, 43% created internal AI guidelines, and Norway requires legal basis for accessing personal data for AI training purposes.


Major discussion point

Trustworthy AI Governance and Risk Management


Topics

Legal and regulatory | Development


Agreed with

– Jungwook Kim
– Seong Ju Park

Agreed on

Data security and governance are critical for trustworthy AI in government


Norway is implementing the EU AI Act and investing in Norwegian language foundational models and computing infrastructure

Explanation

Norway is creating a comprehensive AI governance framework by implementing the EU AI Act alongside national strategies and investments. The government has ambitious goals for AI adoption across public agencies while building supporting infrastructure including language-specific models and computing resources.


Evidence

Norway aims for 80% of public agencies to use AI by 2025 and 100% by 2030; invested 40 million kroner in Norwegian and Sami language foundational models; establishing AI Norway for experience sharing and regulatory sandbox testing.


Major discussion point

Trustworthy AI Governance and Risk Management


Topics

Legal and regulatory | Infrastructure


Disagreed with

– Anne Rachel

Disagreed on

Pace and approach to AI implementation


Moderator

Speech speed

99 words per minute

Speech length

1453 words

Speech time

874 seconds

Japan’s leadership in proposing OECD AI principles in 2016 and current efforts to make comprehensive principles into practical policies

Explanation

Japan initiated international discussions on AI principles at the OECD in 2016, leading to the comprehensive OECD AI principles. Now Japan is working with other countries to translate these high-level principles into practical policies and actionable guidance for governments and stakeholders.


Evidence

Japan proposed international discussion to OECD on AI principles in 2016, which became the foundation for the OECD AI principles. Japan is now collaborating with Costa Rica, Korea and others, backed by OECD Secretariat, to make the comprehensive principles into practical policies.


Major discussion point

OECD AI Principles Implementation Toolkit Development


Topics

Legal and regulatory | Development


Agreed with

– Marlon Avalos
– Lucia Rossi
– Seong Ju Park

Agreed on

Practical implementation tools and frameworks are needed to translate AI principles into action


Jungwook Kim

Speech speed

124 words per minute

Speech length

887 words

Speech time

428 seconds

Korea’s AI adoption requires innovation in data, infrastructure, and service delivery, plus inclusion through accessibility and capability building

Explanation

Kim outlined Korea’s approach to AI adoption in government through three key pillars: innovation (requiring changes in data formats, infrastructure, and citizen-centric services), inclusion (addressing digital divides and enhancing accessibility), and investment (strategic resource allocation for AI development and deployment).


Evidence

Korea is ranked as one of the leading countries in OECD Digital Government Index. The approach focuses on machine-readable data, innovative infrastructure adaptation, and brand new citizen-centric AI public services, while addressing digital divides by gender, region, income, and education.


Major discussion point

AI Applications in Government Services


Topics

Development | Economic


Data security, system security, and agile AI governance are essential for protecting citizens’ personal data and rights

Explanation

Kim emphasized that public sector AI use requires top priority on data security due to the accumulation of detailed personal data in government systems. This includes securing citizens’ rights to their personal data, protecting against system vulnerabilities, and establishing agile governance measures to address AI safety issues in real-time.


Evidence

Public bodies process a lot of detailed personal data requiring explicit consent for utilization, systems are vulnerable to hacking and malicious functions, and Korea has established AI safety and governance measures based on their experience with privacy breaches and citizen safety issues.


Major discussion point

Trustworthy AI Governance and Risk Management


Topics

Human rights principles | Legal and regulatory


Agreed with

– Katarina de Brisis
– Seong Ju Park

Agreed on

Data security and governance are critical for trustworthy AI in government


Investment in AI adoption requires strategic resource allocation across innovation, inclusion, and infrastructure development

Explanation

Kim argued that successful AI adoption in government requires substantial and strategic investment across multiple areas. The three pillars of innovation, inclusion, and investment are interconnected, requiring governments to spend resources wisely and strategically to achieve effective AI deployment in public services.


Evidence

Korea’s experience shows that AI adoption requires huge resources to develop and deploy AI services in the public sector, and strategic investment is needed across data development, infrastructure adaptation, and capability building.


Major discussion point

International Cooperation and Knowledge Sharing


Topics

Economic | Development


Seong Ju Park

Speech speed

125 words per minute

Speech length

2095 words

Speech time

1003 seconds

AI use cases are unevenly distributed across government functions, with emphasis on automation and personalization of processes

Explanation

OECD research analyzing 200 AI use cases across 11 government functions found uneven distribution, with policy functions most represented being those in the public eye. Over half of the use cases focus on automating, streamlining, and personalizing government processes and services, particularly in justice, public services, and civic participation.


Evidence

Analysis of 200 use cases across 11 government functions covering policy functions, key government processes, and service and justice. Slightly more than half seek automation and personalization, while four out of 10 use cases enhance decision-making and forecasting.


Major discussion point

AI Applications in Government Services


Topics

Legal and regulatory | Economic


AI in government carries higher risks than private sector use, including ethical, operational, exclusion, and public resistance risks

Explanation

Government AI use differs significantly from private sector applications due to higher stakes and potential for serious harm to individuals and society. These risks can undermine public trust in government, legitimacy of AI use, and democratic values, requiring continuous consideration of potential future risks.


Evidence

Five identified risks: ethical risk, operational risk, exclusion risk, public resistance, and widened gaps between public and private sector capacities. Government AI use has potential dangers that could seriously harm individuals’ lives and society as a whole.


Major discussion point

Trustworthy AI Governance and Risk Management


Topics

Human rights principles | Legal and regulatory


Agreed with

– Katarina de Brisis
– Jungwook Kim

Agreed on

Data security and governance are critical for trustworthy AI in government


The OECD framework provides guidance on stakeholder engagement, enabling environments, and guardrails for responsible AI use

Explanation

The OECD has developed an evolving framework organized around three sections to support government AI efforts: level of engagement (involving different stakeholders), enablers (policy actions for solid enabling environment), and guardrails (policy levers for responsible and trustworthy AI use).


Evidence

The framework includes stakeholder engagement from public, private, academia, and users; enablers covering governance, capabilities, collaborations and partnerships; and guardrails ranging from soft laws and guidance to legislation and oversight bodies.


Major discussion point

International Cooperation and Knowledge Sharing


Topics

Legal and regulatory | Development


Agreed with

– Marlon Avalos
– Lucia Rossi
– Moderator

Agreed on

Practical implementation tools and frameworks are needed to translate AI principles into action


Agreements

Agreement points

International collaboration is essential for AI development, especially for developing countries

Speakers

– Marlon Avalos
– Anne Rachel
– Jibu Elias

Arguments

International collaboration is essential for developing countries, requiring customization, learning, and evidence-based approaches


Taking time to develop context-appropriate solutions is more important than rushing implementation without proper understanding


Trust is earned through community co-creation rather than just end-user adoption, requiring locally rooted and people-centered ecosystems


Summary

All three speakers from developing countries emphasized that successful AI implementation requires international cooperation, context-sensitive approaches, and community involvement rather than top-down or rushed implementations


Topics

Development | Legal and regulatory


Infrastructure and connectivity challenges are major barriers for developing countries

Speakers

– Marlon Avalos
– Anne Rachel

Arguments

Technical connectivity issues demonstrate daily challenges that developing countries face in AI implementation


Only 22% of Africans have broadband access, and 16 African countries are landlocked, creating connectivity challenges


Summary

Both speakers highlighted infrastructure limitations as fundamental barriers to AI adoption, with Avalos experiencing connectivity issues during the session and Anne Rachel providing specific statistics about African connectivity challenges


Topics

Infrastructure | Development


Community-centered and inclusive approaches are crucial for responsible AI development

Speakers

– Jibu Elias
– Anne Rachel

Arguments

Responsible AI must be inclusive, accessible, and rooted in local values, focusing on communities most affected but least represented in AI development


AI opportunities exist in healthcare, agriculture, and education, but require addressing infrastructure constraints and capacity building for young populations


Summary

Both speakers emphasized the importance of involving local communities, especially marginalized groups, in AI development and ensuring that solutions address real local needs and contexts


Topics

Development | Human rights principles


Data security and governance are critical for trustworthy AI in government

Speakers

– Katarina de Brisis
– Jungwook Kim
– Seong Ju Park

Arguments

Leadership competence, legal frameworks, and employee training are crucial for trustworthy AI implementation in government


Data security, system security, and agile AI governance are essential for protecting citizens’ personal data and rights


AI in government carries higher risks than private sector use, including ethical, operational, exclusion, and public resistance risks


Summary

All three speakers agreed that government AI implementation requires robust governance frameworks, data protection measures, and comprehensive risk management approaches due to the sensitive nature of government data and services


Topics

Human rights principles | Legal and regulatory


Practical implementation tools and frameworks are needed to translate AI principles into action

Speakers

– Marlon Avalos
– Lucia Rossi
– Moderator
– Seong Ju Park

Arguments

Costa Rica initiated the toolkit based on their national AI strategy experience, recognizing that developing countries need practical tools to implement OECD principles


The toolkit will provide self-assessment tools and region-specific guidance through co-creation workshops to help countries bridge AI divides


Japan’s leadership in proposing OECD AI principles in 2016 and current efforts to make comprehensive principles into practical policies


The OECD framework provides guidance on stakeholder engagement, enabling environments, and guardrails for responsible AI use


Summary

Multiple speakers agreed on the need for practical tools and frameworks to help countries implement high-level AI principles, with the OECD toolkit representing a collaborative effort to bridge the gap between principles and practice


Topics

Legal and regulatory | Development


Similar viewpoints

Both speakers emphasized the potential of young people and marginalized communities to drive AI innovation when given proper support and tools, highlighting examples of student-led innovations and the importance of capacity building for young populations

Speakers

– Jibu Elias
– Anne Rachel

Arguments

Students and marginalized communities can create global public goods when empowered with ethical frameworks and open tools


AI opportunities exist in healthcare, agriculture, and education, but require addressing infrastructure constraints and capacity building for young populations


Topics

Development | Sociocultural


Both speakers from developed countries shared experiences of successful government AI implementations with measurable benefits, emphasizing the importance of systematic approaches to AI adoption across multiple government sectors

Speakers

– Katarina de Brisis
– Jungwook Kim

Arguments

Norway has successfully implemented AI in healthcare for X-ray analysis, tax administration for fraud detection, and police transcription services, showing practical benefits


Korea’s AI adoption requires innovation in data, infrastructure, and service delivery, plus inclusion through accessibility and capability building


Topics

Economic | Development


Both speakers highlighted how AI systems often fail to serve non-Western populations effectively due to bias and lack of local data representation, emphasizing the need for locally developed and culturally appropriate AI solutions

Speakers

– Anne Rachel
– Jibu Elias

Arguments

Data scarcity and bias affect AI systems, with only 2% of African-generated data used locally and facial recognition systems performing poorly on African populations


Trust is earned through community co-creation rather than just end-user adoption, requiring locally rooted and people-centered ecosystems


Topics

Human rights principles | Development


Unexpected consensus

The importance of taking time for proper AI implementation rather than rushing

Speakers

– Anne Rachel
– Jungwook Kim

Arguments

Taking time to develop context-appropriate solutions is more important than rushing implementation without proper understanding


Korea’s AI adoption requires innovation in data, infrastructure, and service delivery, plus inclusion through accessibility and capability building


Explanation

It was unexpected to see both a developing country representative (Anne Rachel) and a developed country representative (Jungwook Kim) agree on the importance of patient, gradual AI implementation. This consensus suggests that even advanced countries recognize AI adoption as a long-term journey requiring careful planning rather than rapid deployment


Topics

Development | Sociocultural


The universal challenge of measuring AI benefits in government

Speakers

– Katarina de Brisis
– Seong Ju Park

Arguments

70% of Norwegian state agencies use AI in daily work, but municipalities and benefit assessment tools need further development


AI use cases are unevenly distributed across government functions, with emphasis on automation and personalization of processes


Explanation

Despite Norway’s advanced AI implementation, both speakers acknowledged that even leading countries struggle with measuring AI benefits and achieving even distribution across government functions. This suggests that assessment and scaling challenges are universal, not just issues for developing countries


Topics

Economic | Legal and regulatory


Overall assessment

Summary

The speakers demonstrated strong consensus on several key areas: the need for international cooperation and practical implementation tools, the importance of inclusive and community-centered approaches, the critical role of data governance and security in government AI, and the recognition that AI implementation is a gradual process requiring patience and proper planning. There was also agreement on the challenges of infrastructure, capacity building, and the need for context-sensitive solutions.


Consensus level

High level of consensus with complementary perspectives from different regions and development stages. The agreement spans both technical and social aspects of AI implementation, suggesting a mature understanding of AI governance challenges across different contexts. This consensus provides a strong foundation for international cooperation and the development of practical tools like the OECD AI principles implementation toolkit.


Differences

Different viewpoints

Pace and approach to AI implementation

Speakers

– Anne Rachel
– Katarina de Brisis

Arguments

Taking time to develop context-appropriate solutions is more important than rushing implementation without proper understanding


Norway is implementing the EU AI Act and investing in Norwegian language foundational models and computing infrastructure


Summary

Anne Rachel advocates for a patient, time-intensive approach emphasizing the African saying ‘Europeans have watches, we have time’ and warns against rushing AI implementation without proper context understanding. In contrast, Katarina presents Norway’s very ambitious timeline with 80% of public agencies using AI by 2025 and 100% by 2030, representing a rapid deployment approach.


Topics

Development | Sociocultural


Unexpected differences

Infrastructure challenges as demonstration vs. systematic barrier

Speakers

– Marlon Avalos
– Anne Rachel

Arguments

Technical connectivity issues demonstrate daily challenges that developing countries face in AI implementation


Only 22% of Africans have broadband access, and 16 African countries are landlocked, creating connectivity challenges


Explanation

While both speakers address infrastructure challenges, Avalos uses his technical difficulties as a real-time demonstration of connectivity issues, suggesting these are manageable obstacles that can be worked around. Anne Rachel presents infrastructure limitations as fundamental systematic barriers requiring substantial structural changes. This represents an unexpected difference in framing the same core issue – whether infrastructure challenges are symptomatic problems or foundational barriers to AI adoption.


Topics

Infrastructure | Development


Overall assessment

Summary

The discussion shows remarkably high consensus on core principles (inclusion, context-sensitivity, international cooperation) but reveals subtle yet significant differences in implementation philosophy and pace


Disagreement level

Low to moderate disagreement level with high strategic implications. While speakers largely agree on goals, their different approaches to timing, community engagement, and implementation strategies could lead to significantly different outcomes in AI policy development. The disagreements are more about methodology and pace rather than fundamental objectives, but these differences could be crucial for policy effectiveness and adoption success in different regional contexts.


Partial agreements

Similar viewpoints

Both speakers emphasized the potential of young people and marginalized communities to drive AI innovation when given proper support and tools, highlighting examples of student-led innovations and the importance of capacity building for young populations

Speakers

– Jibu Elias
– Anne Rachel

Arguments

Students and marginalized communities can create global public goods when empowered with ethical frameworks and open tools


AI opportunities exist in healthcare, agriculture, and education, but require addressing infrastructure constraints and capacity building for young populations


Topics

Development | Sociocultural


Both speakers from developed countries shared experiences of successful government AI implementations with measurable benefits, emphasizing the importance of systematic approaches to AI adoption across multiple government sectors

Speakers

– Katarina de Brisis
– Jungwook Kim

Arguments

Norway has successfully implemented AI in healthcare for X-ray analysis, tax administration for fraud detection, and police transcription services, showing practical benefits


Korea’s AI adoption requires innovation in data, infrastructure, and service delivery, plus inclusion through accessibility and capability building


Topics

Economic | Development


Both speakers highlighted how AI systems often fail to serve non-Western populations effectively due to bias and lack of local data representation, emphasizing the need for locally developed and culturally appropriate AI solutions

Speakers

– Anne Rachel
– Jibu Elias

Arguments

Data scarcity and bias affect AI systems, with only 2% of African-generated data used locally and facial recognition systems performing poorly on African populations


Trust is earned through community co-creation rather than just end-user adoption, requiring locally rooted and people-centered ecosystems


Topics

Human rights principles | Development


Takeaways

Key takeaways

The OECD AI Principles Implementation Toolkit, initiated by Costa Rica, will provide practical self-assessment tools and region-specific guidance to help countries implement AI principles through co-creation workshops


Responsible AI development must be inclusive, locally-rooted, and community-centered, with marginalized communities serving as co-creators rather than just end-users


Developing countries face significant challenges including infrastructure limitations, connectivity issues (only 22% of Africans have broadband access), data scarcity, and fragmented policy frameworks


AI applications in government services show practical benefits, with Norway demonstrating success in healthcare, tax administration, and police services, while 70% of Norwegian state agencies already use AI


Trustworthy AI governance requires leadership competence, legal frameworks, employee training, and addressing higher risks in government use compared to private sector applications


International cooperation and knowledge sharing through regional workshops and platforms are essential for bridging AI divides and promoting inclusive AI ecosystems


AI implementation is a long journey with moving targets, requiring strategic investment in innovation, inclusion, and infrastructure development


Resolutions and action items

OECD will launch a comprehensive report on governing with AI and create a dedicated hub for AI in the public sector on oecd.ai


Regional co-creation workshops will be organized, starting with ASEAN countries in Thailand, followed by workshops with African, Central American, and South American countries


Norway aims for 80% of public agencies to use AI by 2025 and 100% by 2030, with investments in Norwegian language foundational models and computing infrastructure


Norway will implement the EU AI Act and establish AI Norway as an arena for sharing experience and regulatory sandbox testing


OECD will conduct a global data collection exercise on AI policies and use cases to be presented through the OECD AI Policy Observatory


Unresolved issues

Many AI use cases remain at piloting stage with governments struggling to scale pilots into wider systems or services


Governments need better tools and methodologies to assess the costs and benefits of AI implementation in the public sector


Inadequate data, skills, and infrastructure in the public sector continue to constrain AI adoption


The need for more actionable guidelines and navigation of rigid regulatory environments remains challenging


Capacity building and workforce development cannot keep pace with the rapid advancement of AI technology


Data bias issues persist, with facial recognition systems performing poorly on African populations and only 2% of African-generated data being used locally


Suggested compromises

Taking time to develop context-appropriate solutions rather than rushing implementation without proper understanding of local needs


Balancing ambitious AI adoption goals with the need for proper training, legal frameworks, and safety measures


Using modular and flexible guidance approaches that can adapt to different resource settings and local contexts


Combining international best practices with local innovation and community-led initiatives


Establishing public-private partnerships to share the burden of AI development and implementation costs


Thought provoking comments

Even a country like Costa Rica, politically stable, technically skilled and internationally connected, face these challenges, then surely other countries like us will too face that challenge.

Speaker

Marlon Avalos


Reason

This comment was particularly insightful because it reframed the AI development challenge from a Global South perspective. Rather than positioning Costa Rica as disadvantaged, Avalos acknowledged their relative strengths while emphasizing that if even well-positioned countries struggle, the challenges are systemic rather than just resource-based. This created a foundation for genuine international collaboration rather than a donor-recipient dynamic.


Impact

This comment established the legitimacy and urgency of the OECD AI Principles Implementation Toolkit initiative. It shifted the discussion from theoretical policy frameworks to practical, experience-based solutions and set the tone for other speakers to share their ground-level challenges and innovations.


Don’t just ask who builds AI, ask whose future is it building? Because in countries like ours, trust is not a given, it’s earned. And when communities are trusted as co-creators, not just end users, they don’t just adopt technology, they transform it.

Speaker

Jibu Elias


Reason

This comment was profoundly thought-provoking because it challenged the fundamental approach to AI development and deployment. It shifted focus from technical capabilities to human agency and democratic participation in technology design. The distinction between ‘end users’ and ‘co-creators’ reframes the entire AI governance conversation around empowerment rather than consumption.


Impact

This comment elevated the entire discussion by introducing a philosophical framework that connected all subsequent speakers’ examples. It provided a lens through which the audience could evaluate all AI initiatives – whether they truly involve communities as co-creators or merely as beneficiaries.


We do say Europeans have watches, we have time. So I’m just saying this to plead for, you know, taking the time to do things, because rushing into doing things that are not geared to the context just keeps us behind more than anything, because people do not understand what it is we’re trying to do or where is it that we’re trying to get to.

Speaker

Anne Rachel Ng


Reason

This culturally grounded metaphor was exceptionally insightful because it challenged the prevailing narrative of ‘catching up’ in AI development. It reframed the perceived disadvantage of slower adoption as potentially advantageous, emphasizing that contextual appropriateness and community understanding are more valuable than speed. This perspective counters the technology determinism often present in AI discussions.


Impact

This comment provided a powerful counter-narrative to the urgency often associated with AI adoption. It influenced the discussion by validating deliberate, community-centered approaches and gave other speakers permission to discuss the importance of local context and inclusive processes over rapid deployment.


We found that some functions face particular barriers or complexities, such as particularly stricter rules on data access and sharing, and then stricter requirements for thorough audit trails in public integrity.

Speaker

Seong Ju Park


Reason

This observation was insightful because it revealed that the uneven distribution of AI use cases in government isn’t just about technical capacity or resources, but about institutional and regulatory complexity. It highlighted how governance structures themselves can create barriers to AI adoption, suggesting that policy reform may be as important as technical development.


Impact

This comment shifted the second segment’s focus from success stories to implementation challenges, preparing the ground for more nuanced discussions about the barriers governments face and the need for adaptive governance frameworks.


AI has changed many aspects of our lives, how we communicate, how we seek information. And this is affecting governments as well. This is accelerating digital transformation of public sector, changing how governments work, how government design and deliver policies and services. And it also changed the expectations and needs of the citizens and businesses that they serve.

Speaker

Seong Ju Park


Reason

This comment was thought-provoking because it positioned AI not just as a tool for government efficiency, but as a transformative force that changes the fundamental relationship between governments and citizens. It suggested that AI adoption creates new expectations and needs, implying that governments must evolve not just their tools but their entire approach to public service.


Impact

This framing influenced the entire second segment by establishing that AI in government isn’t just about automation or efficiency gains, but about fundamental transformation of governance relationships. It set up the subsequent discussions about trust, accountability, and citizen engagement.


So it’s moving targets. Then we need agile measures to take care of the AI safety issues… those ones should be narrated clearly in the AI safety and governance in one specific country.

Speaker

Jungwook Kim


Reason

This comment was insightful because it acknowledged the fundamental challenge of governing rapidly evolving technology while emphasizing the need for country-specific approaches. The ‘moving targets’ metaphor captured the dynamic nature of AI governance challenges, while the emphasis on national narratives recognized that governance solutions must be culturally and institutionally grounded.


Impact

This comment reinforced the toolkit approach discussed in the first segment by validating the need for flexible, adaptive governance frameworks rather than one-size-fits-all solutions. It connected the theoretical framework discussions with practical implementation challenges.


Overall assessment

These key comments fundamentally shaped the discussion by challenging conventional narratives about AI development and governance. Rather than focusing solely on technical capabilities or resource gaps, the speakers introduced themes of community agency, cultural context, institutional complexity, and adaptive governance. The comments created a progression from recognizing shared challenges (Avalos) to reimagining development approaches (Jibu, Anne Rachel) to understanding implementation complexities (Park, Kim). This elevated the conversation beyond typical policy discussions to address fundamental questions about power, participation, and the purpose of AI in society. The speakers’ insights collectively argued for a more democratic, contextual, and deliberate approach to AI governance that prioritizes community needs and local contexts over rapid technological adoption.


Follow-up questions

How can we better measure the cost and benefits of AI implementation in the public sector?

Speaker

Katarina de Brisis


Explanation

Many governments struggle to make business cases for scaling up AI efforts due to unknown costs and benefits, making it difficult for policymakers to justify investments


How can we develop better tools and methodologies to assess benefits from AI across various sectors and government levels?

Speaker

Katarina de Brisis


Explanation

While there are documented cases of AI benefits, there’s a need for systematic methodological frameworks to evaluate AI impact across different government functions


How can governments scale AI pilots into wider systems and services?

Speaker

Seong Ju Park


Explanation

Many AI use cases in government remain at piloting stage and struggle to scale up, representing a significant implementation challenge


How can we ensure AI systems work effectively for diverse populations, particularly addressing bias in facial recognition and medical devices for people of different ethnicities?

Speaker

Anne Rachel Ng


Explanation

Current AI systems often perform poorly on African populations due to training on non-representative data, as demonstrated by the oximeter example during COVID-19


How can we develop more actionable guidelines for AI implementation in government?

Speaker

Seong Ju Park


Explanation

There’s a large room for improvement in providing practical, implementable guidance rather than high-level principles


How can we address the infrastructure challenges, particularly for landlocked countries with limited broadband access?

Speaker

Anne Rachel Ng


Explanation

Only 22% of Africans have broadband access, and 16 African countries are landlocked, creating significant connectivity barriers for AI adoption


How can we better coordinate cross-ministerial collaboration for AI policy implementation?

Speaker

Anne Rachel Ng


Explanation

AI implementation requires coordination across multiple government ministries (finance, interior, defense, data protection) but this coordination is often lacking


How can we develop AI governance frameworks that are agile enough to keep pace with rapidly evolving AI technology?

Speaker

Jungwook Kim


Explanation

AI is a moving target requiring real-time and proactive governance measures, but current governance structures may not be agile enough


How can we ensure inclusive AI development that truly involves marginalized communities as co-creators rather than just end users?

Speaker

Jibu Elias


Explanation

Trust in AI systems requires involving communities in the development process, not just as recipients of the technology


How can we address the capacity building challenge when the pace of AI development exceeds the speed at which human capacity can be developed?

Speaker

Anne Rachel Ng


Explanation

With very young populations in developing countries, there’s a mismatch between the speed of AI advancement and the time needed to build adequate workforce capacity


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Open Forum #7 Advancing Data Governance Together Across Regions

Open Forum #7 Advancing Data Governance Together Across Regions

Session at a glance

Summary

This discussion focused on advancing data governance across regions, bringing together policymakers and civil society leaders from West Africa, Eastern Partnership, and Western Balkans to explore common challenges and share best practices. The session was moderated by Wairagala Wakabi from CIPESA and hosted by Dr. Ismaila Ceesay, Minister of Information from The Gambia, who outlined his country’s comprehensive digital transformation strategy including national data protection policies and alignment with ECOWAS and African Union frameworks.


Commissioner Milan Marinovic from Serbia emphasized the critical balance between digital advancement and personal data protection, proposing the creation of a global e-association of data protection authorities to facilitate international cooperation. He stressed that digitalization and data protection must develop in parallel, comparing their relationship to natural complementary forces. Regional experts highlighted varying approaches across different areas, with Folake Olagunju from ECOWAS describing West Africa’s focus on harmonization without homogenization, emphasizing multi-stakeholder engagement and evidence-based policymaking.


Dr. Olga Kyryliuk from Southeastern Europe described her region as having “high digital ambition” but facing challenges due to regulatory divides between EU member states and non-EU countries seeking accession. Civil society representatives from Armenia and Kyrgyzstan shared their experiences with digital transformation, emphasizing the importance of civic tech voices in building trust and ensuring inclusive governance. A recurring theme throughout the discussion was the need for harmonization of legal frameworks while respecting national sovereignty and cultural differences.


The panelists identified several practical next steps for strengthening inter-regional cooperation, including establishing continental data governance frameworks, creating controlled test environments for interoperable platforms, and developing formal cooperation channels between data protection agencies. The discussion concluded with emphasis on the critical importance of building cross-border trust, ensuring transparent oversight, and balancing multiple human rights including privacy and freedom of information in the evolving digital landscape.


Keypoints

## Major Discussion Points:


– **National Data Governance Framework Development**: Countries across different regions (The Gambia, Serbia, Armenia, Kyrgyzstan) are actively developing comprehensive national data protection and governance frameworks, with many aligning their legislation to GDPR standards and regional frameworks like ECOWAS and African Union policies.


– **Regional Harmonization vs. National Sovereignty**: A central tension emerged around balancing the need for harmonized cross-border data governance standards while preserving national data sovereignty, with speakers emphasizing “harmonization not homogenization” and the importance of mutual recognition frameworks.


– **Cross-Border Data Protection Authority Cooperation**: Significant focus on strengthening cooperation between Data Protection Authorities (DPAs) globally, including proposals for new international associations and formal cooperation channels for audits, incident response, and enforcement coordination.


– **Multi-Stakeholder Engagement and Civil Society Role**: Strong emphasis on the critical importance of involving civil society, private sector, academia, and citizens in data governance processes, with civic tech organizations serving as essential bridges between governments and citizens to ensure transparency and accountability.


– **Balancing Human Rights in Data Governance**: Discussion of the complex challenge of protecting privacy rights while preserving freedom of information and expression, with several countries adopting integrated approaches that combine data protection and access to information oversight under unified commissions.


## Overall Purpose:


The discussion aimed to foster inter-regional dialogue on data governance best practices, challenges, and cooperation mechanisms between policymakers and civil society leaders from West Africa, Eastern Partnership, Western Balkans, and other regions. The session sought to identify common approaches for building digital cooperation, sharing lessons learned, and developing actionable steps for strengthening international collaboration on data governance standards and frameworks.


## Overall Tone:


The discussion maintained a consistently collaborative and constructive tone throughout. Speakers demonstrated mutual respect and genuine interest in learning from each other’s experiences. The tone was professional yet accessible, with participants openly sharing both successes and challenges. There was a notable spirit of cooperation, with multiple speakers building upon each other’s ideas and offering concrete proposals for future collaboration. The atmosphere became increasingly solution-oriented as the session progressed, culminating in specific actionable recommendations and offers for continued partnership between regions and organizations.


Speakers

**Speakers from the provided list:**


– **Wairagala Wakabi** – Executive Director of CIPESA (Collaboration on International ICT Policy for Eastern and Southern Africa), Session Moderator


– **Dr. Ismaila Ceesay** – Minister of Information from The Gambia


– **Milan Marinovic** – Commissioner for Information of Public Importance and Personal Data Protection of Serbia (appointed in 2019), former judge


– **Olga Kyryliuk** – Chair of the South Eastern European IGF, expert in digital governance, Internet freedom and international law


– **Meri Sheroyan** – Co-founder of Digital Armenia NGO, IT expert specializing in digital transformation in the public sector


– **Tattugal Mambetalieva** – Director of Civil Initiative on Internet Policy (Kyrgyzstan), initiator and founder of Kyrgyz Forum on Information Technology and Central Asian Forum on Internet Governance


– **Folake Olagunju** – Acting Director of Digital Economy and Post at the Economic Community of West African States (ECOWAS) Commission (participated online)


– **Audience** – Multiple audience members who asked questions during the session


**Additional speakers:**


None identified beyond those in the provided speakers names list.


Full session report

# Inter-Regional Data Governance Dialogue: Sharing Experiences and Building Cooperation


## Session Overview


This inter-regional dialogue brought together policymakers and civil society representatives from West Africa, Eastern Partnership, Western Balkans, and Central Asia to discuss data governance challenges and share regional experiences. The session was moderated by Wairagala Wakabi from CIPESA and hosted by Dr. Ismaila Ceesay, Minister of Information and Communication Infrastructure from The Gambia.


## Opening Remarks


Dr. Ismaila Ceesay welcomed participants and outlined The Gambia’s digital transformation priorities, emphasizing a whole-of-government approach to data governance. He highlighted three key areas: institutional capacity building, legal reforms including the Data Protection Bill 2023 currently in parliament, and statistical system reform. Dr. Ceesay acknowledged significant challenges including capacity gaps, institutional fragmentation, and digital divide issues affecting rural populations.


Moderator Wairagala Wakabi structured the discussion around key questions about regional approaches to data governance, cross-border cooperation mechanisms, and practical steps for advancing inter-regional collaboration.


## National and Regional Perspectives


### ECOWAS Regional Framework


Folake Olagunju from ECOWAS described West Africa’s approach as “harmonisation not homogenisation,” explaining that ECOWAS revised its Supplementary Act on Personal Data Protection to support cross-border data flows while respecting individual country contexts. She emphasized a “whole-of-society” methodology involving government, civil society, private sector, academia, and citizens in policy development processes.


### Serbia’s Institutional Model


Commissioner Milan Marinovic from Serbia’s Commissioner for Information of Public Importance and Personal Data Protection described data protection as “one of the most threatened fundamental human rights in today’s era of rapid development of modern technologies.” He proposed creating a global E-association of Data Protection Authorities (DPAs) and highlighted Serbia’s “two-in-one system” that combines data protection and access to information oversight under a unified commission.


### Armenia’s Civil Society Perspective


Meri Sheroyan from Digital Armenia emphasized the role of civic tech organizations as bridges between governments and citizens. She described Armenia’s efforts to build comprehensive legal and technical frameworks for digital transformation, including e-governance platforms and data governance projects, while highlighting the importance of civil society in building public trust.


### Kyrgyzstan’s Distinctive Approach


Tattugal Mambetalieva from Kyrgyzstan explained that her country deliberately avoids data centralization and localization, stating that “centralization of data has risks for data protection and localization of data creates additional burden to business.” This approach differs from neighboring countries like Kazakhstan and Uzbekistan, demonstrating diverse policy choices within the region.


### Southeastern European Coordination


Dr. Olga Kyryliuk, Chair of the South Eastern European IGF, described her region as having “high digital ambition” but facing challenges due to regulatory differences between EU member states operating under GDPR and non-EU countries still seeking compliance. She emphasized the role of Internet Governance Forums in facilitating dialogue across different regulatory environments.


## Key Themes and Common Challenges


### Harmonization While Respecting Sovereignty


Multiple speakers emphasized the importance of regional cooperation that creates interoperability without imposing identical solutions. The ECOWAS model of harmonization rather than homogenization was cited as an example of balancing common standards with national sovereignty.


### Multi-Stakeholder Engagement


All participants stressed the importance of inclusive stakeholder engagement, though with different emphases. While The Gambia focused on whole-of-government approaches, ECOWAS emphasized whole-of-society participation, and Armenia highlighted the critical role of civil society organizations.


### Capacity Building Needs


Speakers from all regions identified capacity building and institutional strengthening as persistent challenges requiring sustained attention and resources.


### Balancing Rights and Innovation


Participants discussed the need to balance data protection with other rights including freedom of information and expression, as well as supporting digital innovation and economic development.


## Audience Engagement


The session included questions from the audience, including an inquiry about the SOLID protocol and linguistic AI in the context of indigenous language preservation. Dr. Ceesay acknowledged the complexity of language preservation in Africa, noting the continent has over 2,000 languages with some countries having 56 to 200 languages each.


## Action Items and Commitments


In their final one-minute responses, panelists made specific commitments:


– **Commissioner Marinovic** committed to contacting DPAs worldwide within the week to propose his E-association concept


– **Dr. Ceesay** committed to finalizing The Gambia’s data protection legislation by year-end and implementing the planned merger of access to information and data protection oversight functions


– **Folake Olagunju** outlined ECOWAS plans to establish controlled test environments for member states to trial interoperable platforms in sectors such as health, education, and identity systems


– **Dr. Kyryliuk** offered to host a side meeting during CDIG’s October meeting in Athens to advance inter-regional dialogue


– **Meri Sheroyan** emphasized continuing to pilot small-scale cross-border data-sharing initiatives in specific sectors


– **Tattugal Mambetalieva** highlighted the need for intergovernmental agreements on data exchange in Central Asia


## Key Takeaways


The dialogue demonstrated both shared challenges and diverse approaches to data governance across regions. While all participants agreed on fundamental principles such as the importance of multi-stakeholder engagement and the need to balance various rights and interests, their implementation strategies reflect different regional contexts and priorities.


The session highlighted the value of inter-regional dialogue for sharing experiences and identifying potential areas for cooperation, while respecting the diversity of approaches needed to address local contexts and constraints. The concrete commitments made by participants suggest potential for continued collaboration and mutual learning across regions.


The discussion reinforced that effective data governance requires not only technical and legal frameworks but also sustained institutional capacity building, inclusive stakeholder engagement, and mechanisms for regional cooperation that respect national sovereignty while enabling cross-border collaboration.


Session transcript

Wairagala Wakabi: Hello, good afternoon, dear audience, it is my pleasure to moderate this session today, and I’ll begin by introducing myself. My name is Wakabi and I am the Executive Director of CIPESA, which is the Collaboration on International ICT Policy for Eastern and Southern Africa, a think tank that works on issues at the intersection of technology, human rights, governance, and livelihoods. Today, we are bringing together notable speakers from across various regions to discuss data governance in line with the IGF sub-theme of building digital cooperation. The session aims to contribute to inter-regional dialogue among policymakers and civil society leaders from West Africa, from the Eastern Partnership, and the Western Balkans to leverage common knowledge. I am from East Africa myself, which wasn’t mentioned among those regions, so some insights will also come from there. On this note, to kick us off, I would like to invite our host, who is the Minister of Information from The Gambia, to share his welcome remarks. Dr. Ismaila Ceesay, please take the floor.


Dr. Ismaila Ceesay: Thank you very much, Dr. Wakabi, thank you for that introduction. Excellencies, distinguished delegates, ladies and gentlemen, it is a great honor to join you today for this very important discussion on advancing data governance across regions as we collectively seek pathways forward. In an increasingly digital world, data is a critical enabler of development, innovation and rights. For The Gambia, harnessing data responsibly is key to driving economic growth, improving service delivery, and protecting the dignity and rights of our people. The Gambia is embracing the digital age with ambition and purpose. We recognize that digital transformation is not just a matter of technological advancement. For us, it is a catalyst for inclusive growth, innovation and good governance. Our national broadband policy, our digital ID initiatives and e-government platforms are all part of a comprehensive strategy to bridge the digital divide, empower citizens and modernize our economy. We have made significant strides in putting data governance at the core of our digital development agenda. We are currently implementing our national data protection and privacy policy, grounded in principles of accountability, transparency and human rights. Steps are also underway to establish an independent data protection authority, which will oversee the enforcement of data governance principles and build trust with citizens, businesses and regional partners. We are also committed to finishing the development of the Gambian national data governance policy, supported by the African Union and the European Union. The Gambia is actively engaged in the regional frameworks of ECOWAS and the African Union, including alignment with the EU data policy framework. We recognize interoperability, regulatory harmonization and mutual trust are essential for effective cross-border data flows in Africa and beyond. We believe that effective cross-border data governance can unlock tremendous value, facilitating trade, strengthening regional integration and enabling secure data flows across borders. We believe that international cooperation must be fair, inclusive and development-oriented. We are fully aware that no country can do this alone. Advancing data governance across borders requires trust, coordination and shared values. As countries in the Global South, we seek equitable participation in shaping global digital rules, and we emphasize the need for capacity support, infrastructure investments and data governance models that reflect our local realities. The Gambia stands ready to work with partners on the continent and globally to build a data governance ecosystem that is secure, rights-respecting and fit for the digital age. Let us advance together, bridging borders and building trust in the digital world. Thank you.


Wairagala Wakabi: Thank you, sir, for outlining the Gambia’s efforts in its digital development agenda and also outlining its commitment to cooperative data governance. As Dr. Ceesay has touched on, the ability to govern data effectively, domestically as well as across borders, is crucial, and so to explore common challenges and valuable experiences from different regions, we are going to hear from our panelists and dive deeper into the varying contexts that can enable us to accelerate responsible, future-ready and rights-based data governance globally. I will therefore introduce our panelists today, beginning next to Dr. Ceesay with Milan Marinovic, who was appointed Commissioner for Information of Public Importance and Personal Data Protection of Serbia in 2019. Previously, Mr. Marinovic served as a judge in different courts. He has authored various publications and participated in various working groups drafting and amending legislation in Serbia. Next to him, we have Dr. Olga Kyryliuk, who currently serves as chair of the South Eastern European IGF, leading multi-stakeholder cooperation across 18 countries in the region. She is internationally recognized as an expert in digital governance, Internet freedom and international law, with over 12 years of experience at the intersection of technology, policy and human rights. To my left, we have Meri Sheroyan, the co-founder of Digital Armenia, an NGO focused on advancing digital transformation through inclusive, user-centered approaches. As an IT expert, she specializes in digital transformation in the public sector and public administration systems, and she has extensive experience working within government institutions as well as with development institutions. To the far left, we have Tattugal Mambetalieva, the Director of the Civil Initiative on Internet Policy based in Kyrgyzstan. She is also the initiator and founder of the public platform Kyrgyz Forum on Information Technology and of the annual Central Asian Forum on Internet Governance, a regional initiative of the global Internet Governance Forum created under the auspices of the UN. We also have a participant who has not been able to join us in person and is joining online, Folake Olagunju, the Acting Director of Digital Economy and Post at the Economic Community of West African States (ECOWAS) Commission, where she leads the Digitalization Directorate. We will now hear from our panelists to set the stage and get a sense of the state of data governance in The Gambia and Serbia. We’ll start with Dr. Ceesay first. As The Gambia continues to develop its digital infrastructure and data policies, what are the country’s priorities and challenges in developing and implementing effective national data governance frameworks, and how do they align with the broader strategies of the African Union and ECOWAS?


Dr. Ismaila Ceesay: Thank you very much once again, Mr. Moderator. As for our priorities, our number one priority is institutional capacity building. The Gambia is advancing the development of a comprehensive national data governance framework to support digital government, evidence-based policymaking, and public service delivery. This initiative is supported by UNDESA and includes a series of stakeholder consultations and capacity-building workshops led by the Ministry of Communication and Digital Economy of The Gambia. Our other priorities also focus on legal and regulatory reforms. For example, we have the data protection and privacy legislation, which is currently in parliament. Building on the National Data Protection and Privacy Policy of 2019, The Gambia has formulated the Data Protection and Privacy Bill 2023, which is currently before the National Assembly. The bill provides a robust legal framework covering data subject rights, controller and processor responsibilities, transborder data flows, processing principles, safeguards, enforcement mechanisms, and sanctions. Under these reforms, we also have the statistical system reform. This is under the National Strategy for the Development of Statistics. The 2025 Statistics Act is being revised to strengthen coordination across the national statistical system. This reform aligns with the National Development Plan 2023-2027, Agenda 2063, the ECOWAS Regional Statistical Strategy, and the UN SDGs. We also have the national data policy reforms, with support from GIZ and UNDESA. The national data policy has been validated and is pending cabinet submission. It aims to harmonize data governance across sectors and establish a foundation for secure, inclusive, and rights-based use. Another priority is the whole-of-government approach. The MOCDE, which is the Ministry Responsible for Digital Economy, is spearheading cross-sectoral coordination to ensure that data governance is embedded across ministries, departments, and agencies. Once adopted, the policy will address data protection, cyber security, open data, and access to information, while balancing freedom of expression with the mitigation of online harms. The National Data Policy is a cornerstone of the Gambia’s broader digital transformation agenda, aligning with the Digital Transformation Strategy 2024-2028, Digital Economy Master Plan 2024-2034, and Government Open Data Strategy 2024-2027. It supports the NDP, SDGs, and Agenda 2063 by promoting data availability, accessibility, and interoperability to drive innovation, transparency, and inclusive development. As for our challenges, particularly the persistent ones: one is capacity gaps. Many ministries, departments, and agencies lack the technical and analytical capabilities to manage and utilize data effectively. A second challenge is fragmentation. The national data ecosystem remains siloed, with inconsistent standards for data collection, storage, and sharing. Another challenge we are facing is the digital divide: inequities in digital access and literacy, particularly across rural and underserved populations. This limits inclusive participation in data-driven governance. And finally, on our alignment with AU and ECOWAS strategies: The Gambia’s data governance reforms are closely aligned with the African Union’s data policy framework, which emphasizes data sovereignty, cross-border data flows, and inclusive digital economies.
At the regional level, the Gambia is also actively engaged in the ECOWAS Supplementary Act on Personal Data Protection, which is expected to be endorsed by heads of state in the upcoming summit. These efforts underscore the Gambia’s commitment to regional harmonization and digital trust.


Wairagala Wakabi: Thank you very much. That’s a handful of measures that have been implemented to advance data governance, in spite of the challenges, and it would be good here if the challenges are also shared across regions. But I have a follow-up question. The Gambia also recently launched a five-year strategic plan to strengthen good governance. Its pillars include to to improve transparency and access to information to boost public participation and strengthen institutional capacity and good governance. Could you please describe to us what is the role of the Ministry of Information that you lead in building public trust around the governance?


Dr. Ismaila Ceesay: While the Ministry of Digital Economy leads on technical and regulatory aspects, the Ministry of Information, which I lead, plays a critical role in fostering public trust and civic engagement. One of the things we do, and which is our mandate, is public awareness and digital literacy activities. The Ministry is responsible for sensitizing citizens on their data rights, the value of open data, and the safeguards in place to protect personal information. This includes campaigns to demystify data governance and promote responsible digital citizenship. Our initiatives and activities also focus on transparency and access to information. As a key pillar of the 2025-2029 strategic plan, the Ministry is expected to champion proactive disclosure of government-held data, thereby reinforcing transparency and accountability in public institutions. We also engage in media engagement and narrative framing. By collaborating with public and private media, the Ministry also shapes inclusive narratives that build confidence in digital reforms, counter misinformation and disinformation, and promote calm and stability during periods of digital transition. And finally, we also engage in stakeholder dialogue and inclusion. The Ministry serves as a bridge between government, civil society, and the public, facilitating participatory dialogue to ensure that data governance policies reflect citizen concerns and uphold democratic values.


Wairagala Wakabi: Thank you very much. We’ll hear now from Commissioner Marinovic of Serbia, which has equally made significant progress in developing a rights-based data governance framework with a particular emphasis on the protection of personal data. Commissioner, what have been the recent institutional challenges of balancing compatibility between digital and data systems with the protection of fundamental rights?


Milan Marinovic: Thank you, Mr. Wakabi. Dear all, greetings from Serbia to everyone. First of all, I want to thank GIZ for the invitation to participate in such an important event. Also, with GIZ support, we plan to raise the capacities of policy makers and IT experts in the field of data privacy in Serbia. At the very beginning, let me share with you one of my experiences. Every time I find myself at such a large and important event dedicated to digitalization and the use of modern technologies, I, as someone who deals with the protection of personal data, feel like a cat at a dog’s exhibition. It is an extraordinary pleasure and honor, but also a responsibility, to be with you today at this fantastic forum. Protection of personal data, as well as the right to privacy in general, is one of the most threatened fundamental human rights in today’s era of rapid development of modern technologies, widespread digitalization and enormous use of artificial intelligence. That is why it is extremely difficult to find the appropriate balance between digital and data systems and the protection of personal data. Difficult, but not impossible. What is most important in creating that balance? Parallel, balanced development of both sides of the same story. This means that the accelerated development of digitalization in all areas of life must be accompanied by the development of personal data protection systems. Digitalization in general, and artificial intelligence in particular, cannot exist without data processing, especially personal data. They feed on and depend on data. The processing of data is certainly necessary and useful, and it will be more and more so in the future. But as the processing of personal data grows, so must the protection of this data grow. Just as a day cannot exist without night, or summer without winter, so the processing of personal data cannot exist without its protection. There is a strong link between the processing and protection of personal data. This implies many things, of which I will mention only those which, in my opinion, are the most important. First, strengthening the system and the measures for the protection of personal data. Second, strengthening data protection authorities around the world. Third, strengthening cooperation and collaboration between data protection authorities from all over the world. Fourth, establishing and strengthening the communication and cooperation of the regulatory bodies with the most important controllers and processors of personal data, such as big tech companies and social networks. And fifth, last but not least, raising the level of awareness of citizens about the importance of personal data protection.


Wairagala Wakabi: Thank you so much, Commissioner. I think all DPAs and many of us are always grappling with the best ways to balance those two elements, and you’ve said parallel, balanced development of both is the key. But you also mentioned the issue of deeper cooperation between DPAs in different countries. In your role, where you sit, what kind of cross-border and inter-regional cooperation is happening between different data protection authorities?


Milan Marinovic: Speaking of cross-border and inter-regional cooperation between data protection authorities, I would like to take this unique opportunity to introduce to you an initiative that I promoted this spring at the Privacy Symposium in Venice. My idea is to form an association of DPAs from all over the world, at a global level and in an online format. I call this future association the E-association of DPAs, and my idea is that all regulators, regardless of their status in the country they are from, have the opportunity to exchange practices in the field of personal data protection, to exchange their experience, provide mutual legal assistance and solve common problems in a simple, easy and efficient way at bilateral and multilateral levels. As a first step in the realization of this idea, I plan next week to send all DPAs in the world an email in which I will explain the idea of creating the association and ask whether they support this idea and whether they would like to be members of the future association. The activities we undertake will depend on the answers.


Wairagala Wakabi: Thank you very much. Great initiative. We hope you will also be partnering and associating with other actors, academia, civil society, etc., and they will not feel like cats at a dog’s exhibition. No, I hope so. So thank you, our distinguished speakers, for those valuable insights into national approaches to foster regulated and inclusive data governance, with many lessons learned and a couple of common challenges. We would now like to invite our regional experts to contribute to this discussion by bringing their experience from West Africa and Southeastern Europe. We are going to begin with Folake Olagunju, who is online and was introduced earlier. In the region, Folake, there is a lack of reliable data, and this can hamper the evidence-based policy making that is necessary for well-founded decision making. How is the Economic Community of West African States contributing to norm-setting and coordination among its members to facilitate cross-border data flows, and what lessons can be shared with other regional blocs that are willing to follow suit?


Folake Olagunju: Thank you very much, Wakabi, for giving me the floor, and I must apologize for the noise. I’m at a conference center so it’s a bit hectic here. Very valid point. We do know that data is something we all struggle with. It’s not just a West African issue. But for us at the ECOWAS Commission, we’re looking to ensure that all the policy making we actually do is anchored in an evidence-based approach. How do we do this? We try and prioritize the data that we get and ensure that there’s inclusive engagement. We always ensure Member States are right with us from the very beginning all the way to the end. It was interesting that the Minister from The Gambia spoke about the Supplementary Act on Data Protection within West Africa. That is something we’ve just revised and we’re trying to ensure that it is adopted. That process actually went from Member States all the way through to the Council of Ministers. But before we did that, we actually made sure we did studies with different stakeholder groups across West Africa. So you’ve got your civil society, you’ve got your private sector, every voice matters. Because when you talk about data, it involves every single person. So it’s not just about a whole of government. I understand why The Gambia is doing a whole of government, but for us at the regional perspective, we’re looking at a whole of society because this is absolutely vital. Now one of the things we’ve done with the revision of the Supplementary Act on Data Protection within ECOWAS is to look at how we can support cross-border data flow, and this is inter-, intra- and across borders, because this is very, very important. It’s about harmonisation at the regional level, but not homogenisation. So yes, we need to harmonise because we’re a regional bloc and we have similarities, but then it needs to be heterogeneous to a certain extent so that it’s tailored to the different nuances of each member country. Stakeholder consultation remains absolutely key, and it’s at the cornerstone of everything that we do at the ECOWAS Commission. We need to ensure that whatever we do is data-driven, and decisions need to have inclusive research; we need to ensure we’ve got academia, we need to ensure civil society for accountability, we need to ensure private sector because they bring the money to the table. We need governments because they are the ones that would actually operationalise whatever it is we do at the regional level. We’re also trying to ensure that what we do aligns with the continental frameworks that we have. The Minister spoke about not just the Malabo Convention, but also the ADPF. We look at continental frameworks as well. We’re not working in silos. We ensure that what we do is actually of value to our member states, but also puts them in the right position to be able to actually interact with other regions, like you’ve rightly said, COMESA, SADC, and globally across. We’re looking to align all our standards as well, because this is absolutely very important. So that’s what we’re doing at the moment in terms of harmonization, ensuring that we have evidence, frameworks that are backed up with evidence. Like you rightly said, again, data is not easy to find, but I think if you’re able to actually include a plethora of people in the process, you will actually see that at the end of the day you get that buy-in, and hopefully operationalization becomes much easier. Thank you.


Wairagala Wakabi: Thank you very much. And as a follow-up, how does ECOWAS support the creation of favorable conditions for data governance in the region, and what stakeholders does it take to effectively implement the strategies?


Folake Olagunju: That’s an interesting question. So one of the things we’re looking to do at the moment is actually have a regional instrument in place that will talk about open data. Now, why do we need open data? We’re trying to ensure that all the frameworks that we put in place at the regional level will do three things: encourage transparency, promote interoperability, because that is absolutely key, and last but not least, and I think the most vital, responsible data sharing. So data is only as good as who has it, who is willing to share it and how it’s used. So we’re doing that at the regional level. We’re also looking at certain data priorities in the digital sector development strategy that we’ve got, and this is over five years. What we’re trying to do is to ensure that we can define sensitive and non-sensitive data categories for our member countries. What we find is that when you ask someone to share data, they’re a bit reluctant because they don’t know which data needs to be sovereign and which data can be shared. And I think if we’re able to actually elaborate a little bit more on this, this will actually help. Also, we’re looking at technical and infrastructural standards. I know the Honorable Minister from The Gambia mentioned connectivity. That is something we’re also looking at, because without connectivity, how do you even begin to share data or even have the conversations that would allow you to get data and use data? We’re looking at how we can help member countries transform from, I don’t want to say an analog government, to a more interactive government. Quite a number of member countries have static information portals, so we’re trying to see how we can actually elevate those portals so that they become more interactive. That will actually bring more data and it will actually encourage innovation, because if you’ve got data, you can also innovate. Like I said earlier on, it has to be multi-stakeholder collaboration, like here at the IGF. So we need private sector; private sector are the big guns. They will actually help us build our data-driven solutions. We need governments and ICT regulators to actually adapt and adopt these regulations that we’re putting in place and ensure that they’re domesticated at the national level. We need academia. They’re the ones that will tell us what we need to be looking at two, three years from now. Last but not least, we need our partners. We can’t do it without them. It’s not always about reinventing the wheel. You can actually take what has been done in a different region, bring it here and tailor it to the nuances of West Africa. And then I want to say we definitely cannot do it without the citizens. If the citizens don’t use data, if the citizens don’t understand the need for data…


Wairagala Wakabi: Thank you so much, Folake. Much appreciated. That’s what’s happening in West Africa. So let’s move on and hear from Southeastern Europe. Olga, that region navigates between national data ecosystems and broader regional dynamics. How would you describe the current state of data governance in the region? What are the most prominent dynamics within it?


Olga Kyryliuk: Thank you for the question. When talking about my region, I like to describe Southeastern Europe as a region with high digital ambition. Also, what makes the region truly unique is that it remains divided between the countries that are operating under the EU regulatory framework, such as the GDPR, for example Croatia, and the countries that are still in the process of securing full institutional and legal compliance, such as North Macedonia. This regulatory divide has real consequences, especially when it comes to cross-border trust and data sharing. While the EU member states are benefiting from structured oversight and shared enforcement mechanisms, for the neighboring non-EU countries, even those whose laws quite closely mirror the EU standards, it is often still a challenge because they are very often still considered third countries in terms of data protection guarantees and safeguards. This status itself introduces friction into data flows, especially when it comes to public health, education, and digital services where cooperation is supposed to be seamless and smooth. As you can see, the region is caught between fragmentation and convergence. Fragmentation still defines the legal space, the institutional capacity, and the technical infrastructure. But there is also a growing convergence of ambition. Almost all countries in the region either have EU accession ambitions or are trying to integrate into global digital markets. This is why they are trying to take the example of the European Union and to standardize and harmonize their laws and their enforcement practices in the sphere of data protection and data governance with the European Union model. This moment also presents both a challenge and an opportunity for our region. When we talk about the challenge, this usually comes down to bridging the digital-legal divide which stalls cooperation. So it’s really very important to ensure that the legal frameworks really talk to each other and there are no major discrepancies. But there is also the opportunity, which lies in building shared regional trust frameworks that go beyond simple compliance mechanisms. I think so far our region is doing quite a good job in trying to adopt legal frameworks that follow the best safeguarding practices in terms of data governance and data protection. There is of course quite a long way to go for some countries compared to others because, as I said, the region is not uniform, but this is also what makes the region unique and an interesting example for sharing practices and case studies with other regions in the world.


Wairagala Wakabi: Thank you very much. I hear a couple of similarities between your region, Southeastern Europe, and West Africa: issues around harmonization and compliance mechanisms, issues around interoperability. We are at the IGF, so we cannot not ask about the role of the IGF. Where you sit, you have the regional IGF, CDIG. How is it contributing to harmonizing data governance frameworks? Have there been any successful models from the region that could serve as a template for others?


Olga Kyryliuk: I believe that IGFs, and CDIG in particular, have a crucial role to play in this whole process. First of all, we are contributing by identifying shared priorities across the region. We are connecting the in-country stakeholders from across the region, bringing them into the same room and facilitating dialogue between them. As the next step, we also help to improve trust between counterparts from neighboring countries and help them improve coordination with each other beyond the borders of their nation states. So, of course, CDIG, like any IGF initiative, is not a space that can create the laws, but we are definitely a space that can create the opportunity where better laws and better cooperation can be shaped and where new initiatives with some practical value can begin. I would also say that for fragmented regions like ours, the very fact of creating the habit of cooperation is usually an important first step toward trusted cooperation throughout the years, and I think this is what initiatives like CDIG are doing. Also, as I mentioned, there is the emerging practice in our region of shaping convergence between different countries, and I think it is important to have this culture of different stakeholders talking to each other. During the CDIG meetings, which happen on an annual basis, we repeatedly have sessions that touch on the issues of data governance and data protection from different perspectives, and we usually get a lot of proposals on these specific topics, which means that this is something that resonates with the stakeholders in the region and is truly important to them. Also, for the upcoming meeting this year in October, which we will be hosting in Athens, we have been partnering with the Council of Europe and will be hosting a pre-event to the main meeting gathering the representatives of the media regulatory authorities from the Western Balkans. This is also a good example of starting with a more trusted conversation where they feel comfortable sharing the challenges they are experiencing on a daily basis; then, of course, they will join the main meeting and talk to other stakeholders, and there will also be a panel hosted so that this can truly shift to a multi-stakeholder conversation. So, I would say having a space like the IGF is probably not the solution for everything, but it is obviously a good beginning where good initiatives can start.


Wairagala Wakabi: Thank you for sharing these inputs, very insightful on regional challenges from West Africa and from Southeastern Europe. The examples illustrate the importance of the work that regional organizations are doing in facilitating data governance among states. We have looked at national and regional perspectives on data governance and would now like to bring civil society perspectives into the conversation. We will begin with Tattugal Mambetalieva. The digital code recently adopted in Kyrgyzstan aims to create a favorable environment for digital services and data processing. From your perspective, how has the national approach to data governance evolved over recent years, and what opportunities and challenges does civil society have when engaging in data policy and implementation processes?


Tattugal Mambetalieva: Thank you. At the regional level, Kyrgyzstan is the first to use an integration gateway for secure and transparent data exchange between state bodies and business. This innovative approach is part of Kyrgyzstan’s recent digital code, which sets standards for data handling, focusing on legality, minimization of data collection, accuracy and integrity to build a better digital environment. Kyrgyzstan does not use centralization or localization of data. Centralization of data has risks for data protection, and localization of data creates an additional burden for business. This approach differs from many neighboring countries like Kazakhstan and Uzbekistan, where data centralization and localization are used. Still, challenges for civil society and risks around data protection and ethical use remain.


Wairagala Wakabi: Thank you for that. As a follow-up, what opportunities do you see for civil society to bridge regional and global data governance efforts? Thank you.


Tattugal Mambetalieva: Central Asian countries are economically interdependent, making data exchange crucial for interaction. However, cross-border data exchange raises concerns about ensuring adequate data security. Civil society must primarily monitor the arrangement of data exchange to ensure countries guarantee transparency, accountability and inclusivity.


Wairagala Wakabi: Thank you so much. I will now move to Meri quickly. Armenia is navigating digital transformation. Coming from the non-government sector, why is it important to bring civic tech voices into public processes and what role are they playing today in advancing robust data frameworks?


Meri Sheroyan: Thank you very much for the question. You are completely right. Armenia is moving toward digital transformation and has made notable progress in recent years by launching e-governance platforms, digitizing public services, and initiating important data governance projects. Currently, the country is working on building both the legal and technical frameworks needed to support these transformations. These frameworks aim to define how public information is accessed, to set the standards for data collection and processing, as well as to regulate the use and management of databases. But from my perspective, these efforts depend not only on technological advancements, standards, rules or protocols, but also on inclusive and participatory governance. That’s why I think bringing civic tech voices into public policy processes is essential. For Armenia to build trust in public institutions, it needs the insights and oversight of actors that actually serve as a bridge between citizens and public institutions. And civic tech organizations such as non-profits, watchdog groups, data advocates or digital rights defenders play a crucial role in the process. I think our involvement does not only include monitoring digital projects, but also flagging ethical concerns, identifying data misuse, and addressing barriers to access to data. And in areas like procurement, budget transparency, or the beneficial ownership platforms that Armenia has, these transparency tools have shown the greatest impact when they are complemented by the engagement and oversight of the public. Speaking from our organization’s perspective and experience, we are not just doing monitoring; we go beyond simply evaluating impact and do outreach projects and education for citizens so they can understand how their data is used, why digital systems matter and how government platforms can improve public services for everyone. So in short, civic tech voices are not just contributors but essential partners in building digital systems that are ethical, inclusive and serve the public.


Wairagala Wakabi: Thank you very much. Could we briefly also maybe look at some of the capacity gaps that organizations you work with face in leveraging data for sector initiatives?


Meri Sheroyan: As someone who worked many years in the public sector, then in an international organization, and is now serving in civil society, I perhaps see the issues more clearly, and I can state one issue that is particularly important. I think the lack of a clear data strategy is maybe the main challenge; without a unified vision or a roadmap on how data supports the mission, efforts somehow become fragmented in public institutions. So weak data governance, I think, often results in unclear ownership and inconsistent data quality controls. As we are running out of time, I’ll keep my answer to this question short.


Wairagala Wakabi: We have time. No worries. So thanks everybody. A lot of insights. Before we go to the public to give us some comments and questions, we would like for each participant to use just one minute to give something actionable. Considering the many common challenges that we’ve discussed, what practical steps can your regions take in the next 12 months to strengthen inter-regional, international cooperation on data governance, especially around areas like standard setting, data interoperability and oversight mechanisms? We’ll take this, I think, the same way we went, beginning with the Minister and then the Commissioner and then Olga.


Dr. Ismaila Ceesay: Well, thank you very much. I think one of the practical steps that we can consider is to establish a continental data governance framework, so that we can finalize and promote adoption of the AU Data Policy Framework across all member states. This will create a shared baseline for data protection and cross-border data flows, but also interoperability across the continent. Another thing we can consider is to harmonize national data protection laws across the continent, so we can encourage countries to align with continental standards like the Malabo Convention, but also internationally with GDPR-style protections. This will reduce fragmentation but also promote easier cross-border collaboration and trust in African data systems. Another thing to consider is to engage in global standard-setting bodies to increase African representation in ISO, IEEE and UN bodies, for example the ITU and others. This will ensure Africa’s interests and realities are reflected in global data standards and regulatory frameworks. And then perhaps we can also consider building regional oversight and coordination mechanisms to create or empower sub-regional data governance hubs. This will help us oversee policy compliance, technical cooperation and joint investigations of cross-border data breaches, but also encourage shared accountability and mutual learning. Thank you.


Wairagala Wakabi: Thank you, Minister. Commissioner?


Milan Marinovic: Thank you. In the next 12 months, in order to strengthen regional and international cooperation in the field of data governance in our region of the Western Balkans, we plan to hold multilateral and bilateral meetings with DPAs from the region and with relevant representatives of executive authorities, IT companies and other companies. As a good example of those multilateral meetings, there has been an initiative since 2017, launched by Slovenia, which gathers all DPAs from the former Yugoslavia. It is a very interesting combination because we have two member states of the EU, Slovenia and Croatia, and four which are not members of the EU: Bosnia and Herzegovina, Montenegro, North Macedonia and Serbia. Of these four, Serbia and, recently, Bosnia and Herzegovina have laws on personal data protection that comply with the GDPR and the Police Directive of the EU; Montenegro and North Macedonia do not yet. So that is one particular meeting. The second is the meeting of the data protection authorities of Bosnia and Herzegovina, Montenegro and Serbia, on our initiative, on how to solve the problem we have with Meta and X regarding changes in how they handle private data.


Olga Kyryliuk: I think my job is now much easier, responding to this question after the Commissioner, because I don’t actually need to reinvent the wheel. I would just align with the idea of having an inter-regional dialogue on cross-border data sharing between the data protection authorities. What I can offer from my side, since we are going to host our annual meeting in October and there is still quite some time until then, is to have a side meeting or run a session with the DPAs during the CDIG meeting, so that we can also bring this conversation to the regional community. This will be another step in developing this idea and making sure that what we have mentioned over here is not just staying at the level of ideas, but that we actually take follow-up action on what we are discussing here. I also think that one of the things that could be done is some kind of mapping of the regulatory bottlenecks in cross-border data sharing. This can show us what the remaining challenges are in terms of regulatory frameworks, infrastructure and interoperability. Then, from there, different DPAs in different regions could take those findings and recommendations to ensure further alignment through bilateral and multilateral meetings.


Wairagala Wakabi: Thank you so much. Olga, we’ll go to Folake.


Folake Olagunju: Thank you very much. I’m going to piggyback on the Honourable Minister from The Gambia’s words. He’s already spoken about harmonisation and alignment and all that. If that is taking place in The Gambia, by default, hopefully it means it will have moved to Senegal and then hopefully to Sierra Leone, and the three countries will have done all the harmonisation the Honourable Minister was talking about. What I would like to see is the setup of a controlled test environment where we can actually get all these member states, the public agencies of member states, to trial an interoperable platform. Now, if we’re able to do this for certain sectors, such as health, education and identity systems, and it works, we will be able to take those lessons and scale up to the regional level. Thank you.


Wairagala Wakabi: Excellent. We’ll now… Okay. Good.


Tattugal Mambetalieva: First of all, I support all the proposals. We are currently advocating, on all international platforms, for an initiative to create an intergovernmental agreement on data exchange among Central Asian countries, open for other countries to join. This is because data is the new oil, and issues of access are crucial, not only within a country but also at the regional level.


Wairagala Wakabi: Thank you so much. And finally, Meri?


Meri Sheroyan: For Armenia, what I can say is that the country draws on international experience in many initiatives, for instance in interoperability, using X-Road, the Estonian model. And I think that many practical exercises should be done. It could be piloting small-scale data-sharing initiatives to understand whether cross-border public service delivery works or not. It could be in different areas, starting with consular, migration or environmental areas. This would help us understand…


Wairagala Wakabi: Thank you so much, panelists, for those great ideas on joint initiatives that would be relevant to work on in the future. I would now like to invite any comments or questions. Whoever has a question or comment, there is a mic over there; please go there and ask it. We already have one or two people ready to ask questions; any others, please go ahead as well. Please mention your name and where you come from, and if you want a particular individual to answer the question, you may also direct it to them.


Audience: Thank you, and excellent panelists. All the points you raised are critical to data governance and cross-border governance. There is a new protocol and framework called SOLID, for social linked data, which can help address the issues related to cross-border governance. The panelists here are from emerging countries, and emerging countries also need their languages to be supported by large language models so that their languages and cultures can be preserved. My question is about language datasets: how do you see language datasets being owned by your own country while also flowing across borders, with the SOLID protocol and LingoAI? LingoAI is working on a whole solution that can address the issues you raised. The protocol was invented by the founding father of the World Wide Web, Sir Tim Berners-Lee, who has joined the IGF three times. I would like to know the level of deployment or awareness in your countries of this new protocol for the next-generation web. It was invented to take care of data control, data ownership, data sovereignty and cross-border issues. I am not sure whether your nation, country or region has adopted, or is aware of, the SOLID protocol. Thank you.


Wairagala Wakabi: Thank you so much. Anybody is welcome to respond to the question we have received. Please speak not only to awareness of the protocol, but also to what kind of initiatives are underway to promote data ownership and encourage cross-border data flows. Who is willing to give us a comment? Yes, Commissioner.


Milan Marinovic: As far as I know, Serbia has not adopted that protocol yet. But having heard how good the protocol is for data protection, I am sure Serbia will adopt it soon.


Wairagala Wakabi: Thank you. Excellent. Other responses?


Meri Sheroyan: Maybe I can add something. I think that Armenia, like any other country, localizes sensitive data such as biometric information or health records. In the Armenian case, I know that the government is working on distinguishing between sensitive and less sensitive data. I also think that having internationally recognized protocols or standards could have an impact on cross-border data sharing. But first of all, countries that are in the process of implementing and adopting data governance frameworks need to distinguish between sensitive and less sensitive data, and then move forward with adopting international standards. And I am hopeful that countries like Armenia, which are landlocked or emerging, will step forward in this initiative to make cross-border public service delivery possible, both within the country and beyond it.


Audience: I am from the Singapore Internet Governance Forum; I am the coordinator and co-founder of SGIGF. SGIGF would like to work with every country and their representatives to help promote the SOLID protocol and LingoAI, to help protect the data and the culture of all emerging countries. Okay, thank you.


Wairagala Wakabi: Thank you so much. Useful contextual information. We know where you’re coming from. And I think many of us will be willing to reach out to you. The minister has a response to that as well.


Dr. Ismaila Ceesay: I think the issue with language is a bit complex, because Africa has over 2,000 languages. Some countries have 56 languages; some have 200. So for us, just like Serbia, we haven't really considered this yet. As a small country of 2.5 million people, we have almost 11 to 12 different languages, which are totally different. So how we would harmonize this, with 2,000 languages across the continent, is difficult. Because of the colonial history, we have French-speaking Africa, Spanish-speaking Africa, Portuguese-speaking Africa and English-speaking Africa. Perhaps this is something we can consider: using those languages, but not our indigenous languages.


Audience: Yes. LingoAI is actually designed for indigenous languages. When AI becomes popular and becomes a commodity, almost everyone in emerging countries uses it, and nearly all of them use English to prompt and get generative AI results. Gradually, the indigenous languages will be forgotten, and especially the cultures built on those languages. If larger companies want to support indigenous languages, they are going to collect the data in a centralized way, and the data will be owned by the centralized company. After fine-tuning the large language model, the data will continue to be collected by the larger companies. So the data will flow out of your countries, and your people and your country will not own the data. This is called digital colonization. The new SOLID protocol and LingoAI help to counter this kind of digital colonization.


Wairagala Wakabi: Thank you so much for that clarification. Okay, thank you very much. Data colonization and data sovereignty are key issues in this conversation and in the places many of us come from. So it's good to know there is something addressing that. We will reach out to you, but we do have another comment. Thank you, sir.


Audience: Hi, good afternoon. I think my question might be a little premature looking at the landscape in our country, but I will go ahead and ask anyway. Where there is no IGF and no local equivalent, where do you suggest this conversation starts in terms of thinking about regulations, guidance and protocols for cross-border data protection? Should it start with the sector regulator? Should it emanate from civil society? I'm open to suggestions, some quick guidelines in the two seconds we probably have. Thank you.


Wairagala Wakabi: Would you mind telling us where you’re from?


Audience: The Bahamas.


Wairagala Wakabi: Lucky you. But we have IGF and you don’t, so, you know. All right, we’ll begin with Olga. She has a response.


Olga Kyryliuk: I think it’s not really the problem that you don’t have yet a dedicated space because the dialogue can be created just from the desire to have the conversation. And very often you can have a much more open and trusted dialogue once you talk to stakeholders who are actually having the decision-making and policy-making power. Sometimes even having the decision-making power, but sometimes they might not have the full awareness or might not be that much in full capacity to execute and enforce. And sometimes just some small support and push from outside might be the beginning of a good positive change inside the country. So, I would say if you want something specifically from the DPA, go to DPA. If you want from someone else, go to them. Start maybe from bilateral one-to-one meetings. And once they feel more comfortable to talk to other stakeholders, then you can extend this dialogue.


Wairagala Wakabi: Thank you very much. Other panelists? Yes, please.


Milan Marinovic: Only a few words. It must be multilateral, not bilateral. When I say multilateral, I mean data protection authorities, stakeholders, executive bodies and civil society, all of them. Thank you very much.


Wairagala Wakabi: We do have another question. Please go ahead.


Audience: Thank you. Good afternoon. I'm Joseph, and I'm here for the Wikimedia Foundation. I was very interested in the Serbian Commissioner's comment about privacy as a human right, which of course we completely agree that it is. But, of course, there are many other human rights, such as the right to freedom of information and expression. I'd like to ask the entire panel, very broadly: through this process of harmonizing regional data protection laws and implementing such new laws, how can we ensure that all human rights are respected, and that the right to privacy does not come at the expense of any other right?


Wairagala Wakabi: Thanks for that question. What we are going to do is couple it with another related question: in many countries there is a diversity of legal systems, and institutional maturity differs. How can we move towards mutual recognition of data protection frameworks without undermining national data sovereignty? I would like you to reflect on that for one minute, even as you answer the question from the participant from Wikimedia. We have one and a half minutes, so please tie in your last word as well. This time, let's start from my left and then move on.


Meri Sheroyan: Okay, maybe I can start. I think there is a blurred line between protecting digital rights and freedom of information, and sometimes governments need to deal with that: not restricting freedom of information, while also considering how to protect people and their rights on the Internet. In recent years the Internet has given us a broad mass of information, which can lead to fake news and disinformation, and it is important for governments to draw this line and protect rights without violating freedom of information. Concerning the other question, I think there should be formal cooperation channels between data protection agencies in different countries, so that they can set clear protocols for audits, incident response, enforcement coordination and so on. My perspective is that these formal cooperation channels could support national digital sovereignty and the implementation of data protection frameworks.


Wairagala Wakabi: Thanks, Meri. Tato, the same question for you.


Tattugal Mambetalieva: To continue our previous discussion: synchronization and harmonization of approaches between countries are crucial. We need to create an environment of trust and organize transparent data exchange, making it clear who is using the data and for what purpose.


Wairagala Wakabi: Thank you so much. We’ll go to Olga and then the Commissioner.


Olga Kyryliuk: As a lawyer, I don't see mutual recognition of data protection frameworks as a threat to national sovereignty; it is rather an issue of legal interoperability. We often don't need to create identical laws; what we really need is trustworthy equivalence and cross-border trust, so that whenever data is shared there are safeguards in place and responsibility for breaches or mishandling of data. I would also say it is important to ensure transparent oversight and independent enforcement whenever it comes to handling personal data. Once that is in place, it is just a matter of dialogue and trust across borders and between nation-states.


Wairagala Wakabi: Thanks, Olga.


Milan Marinovic: When we speak about sovereignty and the data protection authority, it is possible because any law which, like the law in Serbia, is based on the GDPR and the EU Police Directive has exceptions to its principles. So, if national security is in question, there are exceptions to the ordinary data protection regime. We have two models: the ordinary regime and a special one for situations such as organized crime, national security, and so on. And regarding the question from Wikimedia, I must say something. There are states in Europe and in the world which have a two-in-one system, one body protecting two human rights: personal data protection and free access to information of public importance. That is the situation in Serbia. I think it is a good situation, because you can weigh, in any particular case, which is stronger: personal data protection or the public's right to know.


Wairagala Wakabi: Thank you, Commissioner. We’ll go to Folake for one minute and then we’ll end with the Minister.


Folake Olagunju: Thank you very much. Obviously, we all agree that building trust is required around data. For me, sovereignty matters.


Wairagala Wakabi: Thank you so much. And we'll end with the Minister.


Dr. Ismaila Ceesay: Yes, I think we were able to solve that problem: we currently have the Access to Information Commission, which has been operationalized, and once we pass the data protection law by the end of this year, we are going to merge these two commissions so that they can fulfil the role of balancing each other, as the Commissioner from Serbia has said. So we will have one commission responsible for access to information, but also with oversight over data protection. And my final words: three words summarize what I have been saying, and that is harmonization, harmonization, harmonization. We need to harmonize legal and regulatory frameworks through a legally binding AU-wide data governance charter, aligned with the Malabo Convention, but also with GDPR principles and the Global Digital Compact. And finally, we need to create uniform standards for consent, privacy, cross-border flows and AI ethics. Thank you.


Wairagala Wakabi: Thank you, Dr. Ceesay. Thank you, Commissioner Marinović. Thank you, Meri, Folake, Olga and Tato. Ladies and gentlemen, please join me in thanking our panelists.


D

Dr. Ismaila Ceesay

Speech speed

135 words per minute

Speech length

1602 words

Speech time

709 seconds

The Gambia prioritizes institutional capacity building, legal reforms, and whole-of-government approach with data protection legislation currently in parliament

Explanation

The Gambia is developing a comprehensive national digital governance framework with support from UNDESA, focusing on building institutional capacity and implementing legal reforms. The country has formulated the Data Protection and Privacy Bill 2023 which is currently before the National Assembly and provides a robust legal framework covering various aspects of data protection.


Evidence

Data Protection and Privacy Bill 2023 currently in parliament, National Data Protection and Privacy Policy of 2019, National Strategy for the Development of Statistics with 2025 Statistics Act revision, national data policy supported by GIZ and UNDESA


Major discussion point

National Data Governance Frameworks and Strategies


Topics

Legal and regulatory | Development


Agreed with

– Tattugal Mambetalieva
– Meri Sheroyan

Agreed on

Capacity building and institutional development are critical priorities


The Gambia faces challenges including capacity gaps, fragmentation, and digital divide inequities across rural populations

Explanation

Despite progress in data governance, The Gambia encounters persistent challenges in implementing effective frameworks. Many government ministries and agencies lack technical capabilities, the national data ecosystem remains siloed with inconsistent standards, and there are significant inequities in digital access particularly affecting rural and underserved populations.


Evidence

Many ministries, departments, and agencies lack technical and analytical capabilities; national data ecosystem remains siloed with inconsistent standards; inequities in digital access and literacy particularly across rural and underserved populations


Major discussion point

National Data Governance Frameworks and Strategies


Topics

Development | Legal and regulatory


The Gambia aligns with African Union data policy framework and ECOWAS Supplementary Act on Personal Data Protection

Explanation

The Gambia’s data governance reforms are closely aligned with continental and regional frameworks to ensure harmonization and facilitate cross-border cooperation. The country is actively engaged in ECOWAS initiatives and follows African Union guidelines while also considering alignment with EU standards for broader international cooperation.


Evidence

African Union’s data policy framework emphasizing data sovereignty and cross-border data flows, ECOWAS Supplementary Act on Personal Data Protection expected to be endorsed by heads of state, alignment with EU data policy framework


Major discussion point

Regional Harmonization and Cross-Border Data Flows


Topics

Legal and regulatory | Development


Agreed with

– Folake Olagunju
– Olga Kyryliuk

Agreed on

Need for harmonization of data governance frameworks across regions


Ministry of Information plays critical role in fostering public trust through digital literacy, transparency, and stakeholder dialogue

Explanation

While the Ministry of Digital Economy handles technical aspects, the Ministry of Information focuses on building public trust and civic engagement in data governance. This includes sensitizing citizens about their data rights, promoting transparency through proactive disclosure of government data, and facilitating dialogue between government, civil society, and the public.


Evidence

Public awareness and digital literacy activities, transparency and access to information as key pillar of 2025-2029 strategic plan, media engagement and narrative framing, stakeholder dialogue and inclusion


Major discussion point

Human Rights and Digital Sovereignty


Topics

Human rights | Development | Sociocultural


Need to establish continental data governance framework and increase African representation in global standard setting bodies

Explanation

As a practical step for strengthening international cooperation, there should be efforts to finalize and promote adoption of continental data policy frameworks across all African member states. Additionally, increasing African representation in global bodies like ISO, IEEE, and UN organizations will ensure Africa’s interests are reflected in global data standards.


Evidence

AU data policy framework, Malabo Convention, GDPR-style protections, ISO, IEEE, UN bodies like ITU


Major discussion point

International Cooperation and Standard Setting


Topics

Legal and regulatory | Development


T

Tattugal Mambetalieva

Speech speed

80 words per minute

Speech length

272 words

Speech time

201 seconds

Kyrgyzstan adopted a digital code creating favorable environment for digital services using integration gateway for secure data exchange between state bodies and business

Explanation

Kyrgyzstan has implemented an innovative approach through its digital code that establishes standards for data handling with focus on legality, minimization, accuracy and integrity. The country uses an integration gateway system that enables secure and transparent data exchange between government bodies and businesses, setting a regional standard.


Evidence

Digital code focusing on legality, minimization of data collection, accuracy and integrity; integration gateway for secure and transparent data exchange between state bodies and business


Major discussion point

National Data Governance Frameworks and Strategies


Topics

Legal and regulatory | Economic


Kyrgyzstan avoids centralization and localization of data unlike neighboring countries, reducing risks to data protection

Explanation

Unlike Kazakhstan and Uzbekistan which use data centralization and localization approaches, Kyrgyzstan has chosen a different path that avoids these practices. This approach reduces risks for data protection and creates less additional burden for businesses, though challenges for civil society regarding data protection and ethical use still remain.


Evidence

Differs from neighboring countries like Kazakhstan and Uzbekistan where data centralization and localization is used; centralization has risks for data protection and localization creates additional burden to business


Major discussion point

Balancing Data Protection with Digital Innovation


Topics

Legal and regulatory | Human rights


Disagreed with

Disagreed on

Data localization and centralization approaches


Civil society must monitor data exchange arrangements to ensure transparency, accountability and inclusivity

Explanation

Given that Central Asian countries are economically interdependent and require data exchange for interaction, civil society has a crucial role in oversight. They must primarily monitor cross-border data exchange arrangements to ensure countries guarantee proper safeguards and maintain democratic principles in data governance.


Evidence

Central Asia countries are economically interdependent, making data exchange crucial for interaction; cross-border data exchange raises concerns about ensuring adequate data security


Major discussion point

Multi-Stakeholder Engagement and Civil Society Role


Topics

Human rights | Legal and regulatory


Agreed with

– Dr. Ismaila Ceesay
– Meri Sheroyan

Agreed on

Capacity building and institutional development are critical priorities


Central Asia countries need intergovernmental agreement on data exchange due to economic interdependence

Explanation

There is an initiative being advocated at international platforms to create an intergovernmental agreement on data exchange among Central Asian countries, with openness for other countries to join. This is driven by the recognition that data is valuable like oil and that access issues are crucial not only within countries but also at the regional level.


Evidence

Data is the new oil, and issues of access are crucial, not only within a country, but also at the regional level


Major discussion point

Regional Harmonization and Cross-Border Data Flows


Topics

Legal and regulatory | Economic


F

Folake Olagunju

Speech speed

169 words per minute

Speech length

1291 words

Speech time

456 seconds

ECOWAS revised the Supplementary Act on Data Protection to support cross-border data flow through harmonization rather than homogenization

Explanation

ECOWAS has revised its Supplementary Act on Data Protection with extensive stakeholder consultation across West Africa to support cross-border data flows. The approach focuses on harmonization at the regional level while avoiding homogenization, allowing for tailored solutions that respect the different nuances of each member country while maintaining regional coherence.


Evidence

Studies with different stakeholder groups across West Africa including civil society and private sector; whole of society approach rather than just whole of government; harmonisation at regional level but not homogenisation


Major discussion point

Regional Harmonization and Cross-Border Data Flows


Topics

Legal and regulatory | Development


Agreed with

– Dr. Ismaila Ceesay
– Olga Kyryliuk

Agreed on

Need for harmonization of data governance frameworks across regions


ECOWAS prioritizes inclusive engagement ensuring all member states participate from beginning to end with whole-of-society approach

Explanation

ECOWAS emphasizes evidence-based policy making through inclusive engagement that involves all member states throughout the entire process. Rather than just a whole-of-government approach, they adopt a whole-of-society perspective that includes civil society, private sector, academia, governments, and citizens, recognizing that data governance affects everyone.


Evidence

Member States are right with us from the very beginning all the way to the end; studies with different stakeholder groups across West Africa including civil society and private sector; whole of society approach because data involves every single person


Major discussion point

Multi-Stakeholder Engagement and Civil Society Role


Topics

Development | Human rights


Agreed with

– Meri Sheroyan
– Milan Marinovic

Agreed on

Multi-stakeholder engagement is crucial for effective data governance


Disagreed with

– Dr. Ismaila Ceesay

Disagreed on

Scope of stakeholder engagement approach


Countries must distinguish between sensitive and less sensitive data categories to facilitate responsible data sharing

Explanation

ECOWAS is working on defining sensitive and non-sensitive data categories for member countries to address reluctance in data sharing. When organizations are asked to share data, they are often hesitant because they don’t know which data needs to be sovereign and which can be shared, so clearer categorization will help facilitate responsible data sharing.


Evidence

When you ask someone to share data, they're a bit reluctant because they don't know which data needs to be sovereign and which data can be shared


Major discussion point

Balancing Data Protection with Digital Innovation


Topics

Legal and regulatory | Human rights


Controlled test environments for member states to trial interoperable platforms in sectors like health and education

Explanation

As a practical step for the next 12 months, ECOWAS proposes setting up controlled test environments where member states’ public agencies can trial interoperable platforms. If successful trials in sectors such as health, education, and identity systems work, the lessons learned can be scaled up to regional implementation.


Evidence

Trial an interoperable platform for certain sectors, such as health, education, identity systems; if it works, take those lessons and scale up to the regional level


Major discussion point

International Cooperation and Standard Setting


Topics

Infrastructure | Development


O

Olga Kyryliuk

Speech speed

137 words per minute

Speech length

1337 words

Speech time

585 seconds

Southeastern Europe faces regulatory divide between EU member states operating under GDPR and non-EU countries still seeking compliance

Explanation

The Southeastern European region is characterized by a regulatory divide where some countries like Croatia operate under EU frameworks such as GDPR, while others like North Macedonia are still working toward full institutional and legal compliance. This creates challenges for cross-border trust and data sharing, as non-EU countries are often still considered third countries despite having laws that closely mirror EU standards.


Evidence

Countries operating under EU regulatory framework such as GDPR (Croatia) vs countries still in process of securing full compliance (North Macedonia); non-EU countries considered as third countries in terms of data protection guarantees


Major discussion point

Regional Harmonization and Cross-Border Data Flows


Topics

Legal and regulatory | Human rights


IGFs like CDIG contribute by identifying shared priorities and facilitating dialogue between stakeholders across regions

Explanation

Internet Governance Forums, particularly CDIG (Central and Eastern European Dialogue on Internet Governance), play a crucial role in harmonizing data governance frameworks by connecting stakeholders from across the region and facilitating dialogue. While IGFs cannot create laws, they create opportunities for better cooperation and help improve trust between counterparts from neighboring countries.


Evidence

Connecting in-country stakeholders from across the region and bringing them to the same room; help improve trust between counterparts from neighboring countries; create opportunity where better cooperation can be shaped


Major discussion point

Multi-Stakeholder Engagement and Civil Society Role


Topics

Legal and regulatory | Development


Mutual recognition of data protection frameworks is about legal interoperability rather than threat to national sovereignty

Explanation

The mutual recognition of data protection frameworks should be viewed as a matter of legal interoperability rather than a threat to national sovereignty. What is needed is trustworthy equivalence and cross-border trust with safeguards and responsibility for data breaches, along with transparent oversight and independent enforcement for personal data handling.


Evidence

Don’t need to create identical laws but need to create trustworthy equivalence and cross-border trust; transparent oversight and independent enforcement whenever it comes to handling personal data


Major discussion point

Human Rights and Digital Sovereignty


Topics

Legal and regulatory | Human rights


Agreed with

– Dr. Ismaila Ceesay
– Folake Olagunju

Agreed on

Need for harmonization of data governance frameworks across regions


M

Meri Sheroyan

Speech speed

120 words per minute

Speech length

828 words

Speech time

410 seconds

Armenia is building legal and technical frameworks for digital transformation including e-governance platforms and data governance projects

Explanation

Armenia has made notable progress in digital transformation by launching e-governance platforms, digitizing public services, and initiating important data governance projects. The country is currently working on building both legal and technical frameworks that define how public information is accessed, set standards for data collection and processing, and regulate database use and management.


Evidence

Launching e-governance platforms, digitizing public services, initiating important data governance projects; frameworks aim to define how public information is accessed, set standards for data collection and processing, regulate use and management of databases


Major discussion point

National Data Governance Frameworks and Strategies


Topics

Legal and regulatory | Development


Agreed with

– Milan Marinovic
– Audience

Agreed on

Balancing data protection with innovation and other rights is a fundamental challenge


Civic tech voices are essential partners building trust in public institutions and serving as bridge between citizens and government

Explanation

Civic tech organizations such as non-profits, watchdog groups, data advocates, and digital rights defenders play a crucial role in Armenia’s digital transformation by serving as bridges between citizens and public institutions. Their involvement goes beyond monitoring to include flagging ethical concerns, identifying data misuse, addressing access barriers, and educating citizens about data use and digital systems.


Evidence

Non-profits, watchdog groups, data advocates, digital rights defenders; involvement includes monitoring, flagging ethical concerns, identifying data misuse, addressing barriers of access; outreach projects and education for citizens


Major discussion point

Multi-Stakeholder Engagement and Civil Society Role


Topics

Human rights | Development | Sociocultural


Agreed with

– Dr. Ismaila Ceesay
– Tattugal Mambetalieva

Agreed on

Capacity building and institutional development are critical priorities


Piloting small-scale data-sharing initiatives for cross-border public service delivery in consular, migration, or environmental areas

Explanation

Armenia incorporates international experience in initiatives like interoperability, using models such as Estonia’s X-Road system. As a practical step forward, the country should pilot small-scale data-sharing initiatives to test whether cross-border public service delivery works effectively in areas such as consular services, migration, or environmental management.


Evidence

Using X-Road, like an Estonian model; piloting small-scale data-sharing initiatives in consular or migration or environmental areas


Major discussion point

International Cooperation and Standard Setting


Topics

Infrastructure | Legal and regulatory


M

Milan Marinovic

Speech speed

111 words per minute

Speech length

1037 words

Speech time

556 seconds

Parallel balanced development of digitalization and personal data protection systems is essential, as they cannot exist without each other

Explanation

The accelerated development of digitalization in all areas of life must be accompanied by the development of personal data protection systems. Just as natural opposites like day and night or summer and winter cannot exist without each other, the processing of personal data cannot exist without its protection, creating a strong interdependent link between processing and protection.


Evidence

Just as a day cannot exist without night, summer without winter, so the processing of personal data cannot exist without its protection; digitalization and AI feed and depend on data


Major discussion point

Balancing Data Protection with Digital Innovation


Topics

Human rights | Legal and regulatory


Agreed with

– Meri Sheroyan
– Audience

Agreed on

Balancing data protection with innovation and other rights is a fundamental challenge


Protection of personal data is one of the most threatened fundamental human rights in the era of rapid technological development and AI

Explanation

In today’s era of rapid development of modern technologies, widespread digitalization, and enormous use of artificial intelligence, the protection of personal data and the right to privacy in general have become among the most threatened fundamental human rights. This makes it extremely difficult but not impossible to find the appropriate balance between digital systems and data protection.


Evidence

Era of rapid development of modern technologies, widespread digitalization and enormous use of artificial intelligence; extremely difficult to find appropriate balance between digital and data systems and protection of personal data


Major discussion point

Balancing Data Protection with Digital Innovation


Topics

Human rights | Legal and regulatory


Proposed E-association of DPAs worldwide to enable exchange of practices and mutual legal assistance in simple online format

Explanation

The proposal is to form a global association of Data Protection Authorities (DPAs) in an online format that would allow all regulators, regardless of their status or country, to exchange practices in personal data protection, share experiences, provide mutual legal assistance, and solve common problems efficiently on bilateral and multilateral levels.


Evidence

All regulators regardless of status have opportunity to exchange practices, provide mutual legal assistance and solve common problems in simple, easy and efficient way; plan to send email to all DPAs worldwide to explain the idea


Major discussion point

International Cooperation and Standard Setting


Topics

Legal and regulatory | Human rights


Agreed with

– Folake Olagunju
– Meri Sheroyan

Agreed on

Multi-stakeholder engagement is crucial for effective data governance


Two-in-one system protecting both personal data and free access to public information allows balancing competing rights

Explanation

Some states in Europe and worldwide have a two-in-one system where a single body protects two different human rights: personal data protection and free access to information of public importance. This system, as implemented in Serbia, allows for measuring in any particular case which right is stronger, personal data protection or the public's right to know.


Evidence

Serbia has a two-in-one system with one body protecting both personal data protection and free access to information of public importance; it can measure in any particular case which is stronger, personal data protection or the public's right to know


Major discussion point

Human Rights and Digital Sovereignty


Topics

Human rights | Legal and regulatory


W

Wairagala Wakabi

Speech speed

121 words per minute

Speech length

1998 words

Speech time

984 seconds

Data governance is crucial for building digital cooperation and requires inter-regional dialogue among policymakers and civil society

Explanation

The session aims to contribute to inter-regional dialogue among policymakers and civil society leaders from West Africa, Eastern Partnership, and Western Balkans to leverage common knowledge on data governance. This approach recognizes that effective data governance requires collaboration across regions and stakeholder groups.


Evidence

Session bringing together speakers from various regions to discuss data governance in line with IGF sub-theme of building digital cooperation


Major discussion point

International Cooperation and Standard Setting


Topics

Development | Legal and regulatory


Domestic and cross-border data governance are both essential for responsible, future-ready and rights-based global frameworks

Explanation

Effective governance of data both domestically and across borders is crucial for accelerating responsible, future-ready and rights-based data governance globally. This requires exploring common challenges and valuable experiences from different regional contexts.


Evidence

Need to explore common challenges and valuable experiences from different regions to accelerate responsible, future-ready and rights-based data governance globally


Major discussion point

Regional Harmonization and Cross-Border Data Flows


Topics

Human rights | Legal and regulatory


A

Audience

Speech speed

116 words per minute

Speech length

667 words

Speech time

343 seconds

SOLID protocol and LingoAI can address cross-border data governance issues while preserving indigenous languages and preventing digital colonization

Explanation

The SOLID protocol, invented by Tim Berners-Lee, is designed to address data control, ownership, sovereignty and cross-border issues. LingoAI specifically supports indigenous languages to prevent digital colonization where larger companies collect language data centrally, causing countries to lose ownership of their cultural and linguistic data.


Evidence

SOLID protocol invented by founding father of World Wide Web Tim Berners-Lee; LingoAI designed for indigenous languages to prevent digital colonization where data runs out of countries to larger companies


Major discussion point

Human Rights and Digital Sovereignty


Topics

Human rights | Sociocultural | Legal and regulatory


Multi-stakeholder dialogue should start with direct engagement of decision-makers even without formal IGF structures

Explanation

In countries without established IGF or local governance structures, conversations about data protection regulations should begin with direct bilateral engagement with stakeholders who have decision-making power. The dialogue can start from the desire to have conversations and gradually expand to include more stakeholders once trust is built.


Evidence

Question from The Bahamas about where to start conversations in absence of IGF or local governance structures


Major discussion point

Multi-Stakeholder Engagement and Civil Society Role


Topics

Development | Legal and regulatory


Human rights must be balanced in data protection implementation to ensure privacy doesn’t come at expense of freedom of information and expression

Explanation

While privacy is a fundamental human right, the implementation of data protection laws and harmonization of regional frameworks must ensure that all human rights are respected. The right to privacy should not come at the expense of other rights such as freedom of information and expression.


Evidence

Question from Wikimedia Foundation about ensuring all human rights are respected throughout harmonization process


Major discussion point

Human Rights and Digital Sovereignty


Topics

Human rights | Legal and regulatory


Agreed with

– Milan Marinovic
– Meri Sheroyan

Agreed on

Balancing data protection with innovation and other rights is a fundamental challenge


Agreements

Agreement points

Need for harmonization of data governance frameworks across regions

Speakers

– Dr. Ismaila Ceesay
– Folake Olagunju
– Olga Kyryliuk

Arguments

The Gambia aligns with African Union data policy framework and ECOWAS Supplementary Act on Personal Data Protection


ECOWAS revised the Supplementary Act on Data Protection to support cross-border data flow through harmonization rather than homogenization


Mutual recognition of data protection frameworks is about legal interoperability rather than threat to national sovereignty


Summary

All speakers agree that regional harmonization of data governance frameworks is essential, but emphasize that harmonization should not mean homogenization – allowing for local adaptations while maintaining interoperability


Topics

Legal and regulatory | Development


Multi-stakeholder engagement is crucial for effective data governance

Speakers

– Folake Olagunju
– Meri Sheroyan
– Milan Marinovic

Arguments

ECOWAS prioritizes inclusive engagement ensuring all member states participate from beginning to end with whole-of-society approach


Civic tech voices are essential partners building trust in public institutions and serving as bridge between citizens and government


Proposed E-association of DPAs worldwide to enable exchange of practices and mutual legal assistance in simple online format


Summary

Speakers consistently emphasize that effective data governance requires involvement of all stakeholders including government, civil society, private sector, academia, and citizens rather than top-down approaches


Topics

Development | Human rights | Legal and regulatory


Balancing data protection with innovation and other rights is a fundamental challenge

Speakers

– Milan Marinovic
– Meri Sheroyan
– Audience

Arguments

Parallel balanced development of digitalization and personal data protection systems is essential, as they cannot exist without each other


Armenia is building legal and technical frameworks for digital transformation including e-governance platforms and data governance projects


Human rights must be balanced in data protection implementation to ensure privacy doesn’t come at expense of freedom of information and expression


Summary

There is consensus that data protection cannot be implemented in isolation but must be balanced with digital innovation, economic development, and other fundamental rights like freedom of expression


Topics

Human rights | Legal and regulatory


Capacity building and institutional development are critical priorities

Speakers

– Dr. Ismaila Ceesay
– Tattugal Mambetalieva
– Meri Sheroyan

Arguments

The Gambia prioritizes institutional capacity building, legal reforms, and whole-of-government approach with data protection legislation currently in parliament


Civil society must monitor data exchange arrangements to ensure transparency, accountability and inclusivity


Civic tech voices are essential partners building trust in public institutions and serving as bridge between citizens and government


Summary

All speakers recognize that effective data governance requires significant investment in building institutional capacity, technical capabilities, and oversight mechanisms


Topics

Development | Legal and regulatory


Similar viewpoints

Both speakers advocate for institutional approaches that balance data protection with transparency and access to information, with dedicated bodies handling both responsibilities

Speakers

– Dr. Ismaila Ceesay
– Milan Marinovic

Arguments

Ministry of Information plays critical role in fostering public trust through digital literacy, transparency, and stakeholder dialogue


Two-in-one system protecting both personal data and free access to public information allows balancing competing rights


Topics

Human rights | Legal and regulatory


Both emphasize the need for practical, incremental approaches to data sharing that start with clear categorization and small-scale pilots before scaling up

Speakers

– Folake Olagunju
– Meri Sheroyan

Arguments

Countries must distinguish between sensitive and less sensitive data categories to facilitate responsible data sharing


Piloting small-scale data-sharing initiatives for cross-border public service delivery in consular, migration, or environmental areas


Topics

Legal and regulatory | Infrastructure


Both speakers highlight how their regions face challenges from regulatory fragmentation and different approaches to data governance among neighboring countries

Speakers

– Tattugal Mambetalieva
– Olga Kyryliuk

Arguments

Kyrgyzstan avoids centralization and localization of data unlike neighboring countries, reducing risks to data protection


Southeastern Europe faces regulatory divide between EU member states operating under GDPR and non-EU countries still seeking compliance


Topics

Legal and regulatory | Human rights


Unexpected consensus

Digital colonization and indigenous language preservation

Speakers

– Audience
– Dr. Ismaila Ceesay

Arguments

SOLID protocol and LingoAI can address cross-border data governance issues while preserving indigenous languages and preventing digital colonization


Need to establish continental data governance framework and increase African representation in global standard setting bodies


Explanation

There was unexpected alignment between the audience member’s technical solution (SOLID protocol) and the Minister’s call for African representation in global standards, both addressing concerns about digital sovereignty and preventing external control over local data and cultural assets


Topics

Human rights | Sociocultural | Legal and regulatory


Practical implementation through pilot projects and controlled environments

Speakers

– Folake Olagunju
– Meri Sheroyan
– Olga Kyryliuk

Arguments

Controlled test environments for member states to trial interoperable platforms in sectors like health and education


Piloting small-scale data-sharing initiatives for cross-border public service delivery in consular, migration, or environmental areas


IGFs like CDIG contribute by identifying shared priorities and facilitating dialogue between stakeholders across regions


Explanation

Unexpectedly, speakers from different regions converged on the same practical approach of starting with small-scale pilots and controlled environments rather than attempting large-scale implementations immediately


Topics

Infrastructure | Development | Legal and regulatory


Overall assessment

Summary

The discussion revealed strong consensus on fundamental principles of data governance including the need for harmonization (not homogenization), multi-stakeholder engagement, capacity building, and balancing protection with innovation. Speakers consistently emphasized practical, incremental approaches over ambitious large-scale implementations.


Consensus level

High level of consensus on principles and approaches, with speakers from different regions facing similar challenges and converging on similar solutions. This suggests that despite different regulatory environments, there are universal principles and practical approaches that can guide effective data governance across regions. The consensus provides a strong foundation for inter-regional cooperation and knowledge sharing.


Differences

Different viewpoints

Data localization and centralization approaches

Speakers

– Tattugal Mambetalieva

Arguments

Kyrgyzstan avoids centralization and localization of data unlike neighboring countries, reducing risks to data protection


Summary

Kyrgyzstan explicitly chose not to use data centralization and localization approaches, differing from neighboring countries like Kazakhstan and Uzbekistan. This represents a fundamental disagreement on data governance strategy within the Central Asian region.


Topics

Legal and regulatory | Human rights


Scope of stakeholder engagement approach

Speakers

– Dr. Ismaila Ceesay
– Folake Olagunju

Arguments

The Gambia is spearheading cross-sectoral coordination to ensure that data governance is embedded across ministries, departments, and agencies


ECOWAS prioritizes inclusive engagement ensuring all member states participate from beginning to end with whole-of-society approach


Summary

While The Gambia focuses on a ‘whole-of-government’ approach primarily targeting government institutions, ECOWAS advocates for a broader ‘whole-of-society’ approach that includes civil society, private sector, academia, and citizens from the beginning.


Topics

Development | Human rights


Unexpected differences

Language preservation in data governance

Speakers

– Dr. Ismaila Ceesay
– Audience

Arguments

The issue with language is a bit complex because Africa has over 2,000 languages. Some countries have 56 languages. Some have 200 languages. So for us, just like Serbia, we haven’t really considered this yet


SOLID protocol and LingoAI can address cross-border data governance issues while preserving indigenous languages and preventing digital colonization


Explanation

An unexpected disagreement emerged around the feasibility and priority of preserving indigenous languages in data governance frameworks. While the audience member emphasized the importance of preventing digital colonization through language preservation, the Minister from The Gambia expressed skepticism about the practical implementation given Africa’s linguistic diversity.


Topics

Human rights | Sociocultural | Legal and regulatory


Overall assessment

Summary

The discussion revealed relatively low levels of fundamental disagreement among speakers, with most conflicts centered around implementation approaches rather than core principles. Main areas of disagreement included data localization strategies, stakeholder engagement scope, and practical approaches to cross-border cooperation mechanisms.


Disagreement level

Low to moderate disagreement level. The speakers generally agreed on fundamental principles of data governance, human rights protection, and the need for regional cooperation. Disagreements were primarily tactical rather than strategic, focusing on ‘how’ rather than ‘what’ or ‘why’. This suggests a mature policy environment where stakeholders share common goals but may have different preferred pathways to achieve them. The implications are positive for international cooperation, as the shared foundation provides a basis for compromise and collaborative solutions.


Partial agreements

Partial agreements

Similar viewpoints

Both speakers advocate for institutional approaches that balance data protection with transparency and access to information, with dedicated bodies handling both responsibilities

Speakers

– Dr. Ismaila Ceesay
– Milan Marinovic

Arguments

Ministry of Information plays critical role in fostering public trust through digital literacy, transparency, and stakeholder dialogue


Two-in-one system protecting both personal data and free access to public information allows balancing competing rights


Topics

Human rights | Legal and regulatory


Both emphasize the need for practical, incremental approaches to data sharing that start with clear categorization and small-scale pilots before scaling up

Speakers

– Folake Olagunju
– Meri Sheroyan

Arguments

Countries must distinguish between sensitive and less sensitive data categories to facilitate responsible data sharing


Piloting small-scale data-sharing initiatives for cross-border public service delivery in consular, migration, or environmental areas


Topics

Legal and regulatory | Infrastructure


Both speakers highlight how their regions face challenges from regulatory fragmentation and different approaches to data governance among neighboring countries

Speakers

– Tattugal Mambetalieva
– Olga Kyryliuk

Arguments

Kyrgyzstan avoids centralization and localization of data unlike neighboring countries, reducing risks to data protection


Southeastern Europe faces regulatory divide between EU member states operating under GDPR and non-EU countries still seeking compliance


Topics

Legal and regulatory | Human rights


Takeaways

Key takeaways

Harmonization of data governance frameworks across regions is critical, but should focus on harmonization rather than homogenization to respect local contexts and nuances


Parallel balanced development of digitalization and data protection systems is essential – they cannot exist without each other and must grow together


Multi-stakeholder engagement involving government, civil society, private sector, academia, and citizens is fundamental to successful data governance implementation


Cross-border data flows require building trust frameworks and legal interoperability rather than identical laws across jurisdictions


Regional organizations like ECOWAS, African Union, and regional IGFs play crucial roles in facilitating dialogue and coordination between member states


Capacity building, institutional strengthening, and bridging digital divides remain persistent challenges across all regions discussed


Data protection authorities need stronger international cooperation mechanisms to address cross-border data governance challenges effectively


Resolutions and action items

Commissioner Marinovic to send emails to all DPAs worldwide next week proposing creation of an E-association of DPAs for global cooperation


CDIG to host a side meeting or session with DPAs during their October meeting in Athens to advance inter-regional dialogue


ECOWAS to establish controlled test environments for member states to trial interoperable platforms in sectors like health, education, and identity systems


The Gambia to finalize data protection legislation by end of year and merge access to information commission with future data protection authority


Central Asia countries to develop intergovernmental agreement on data exchange with openness for other countries to join


Armenia to pilot small-scale cross-border data-sharing initiatives in consular, migration, or environmental areas


African countries to establish continental data governance framework and increase representation in global standard-setting bodies like ISO, IEEE, and ITU


Unresolved issues

How to effectively handle indigenous language preservation and data sovereignty concerns in the context of AI and large language models


Balancing privacy rights with freedom of information and expression rights in harmonized frameworks


Addressing the regulatory divide between EU member states and non-EU countries in Southeastern Europe for seamless data cooperation


Managing the complexity of over 2,000 languages across Africa in data governance frameworks


Establishing clear protocols for distinguishing between sensitive and non-sensitive data categories across different jurisdictions


Creating adequate safeguards against digital colonization while enabling beneficial cross-border data flows


Developing capacity and infrastructure in countries without existing IGFs or mature institutional frameworks


Suggested compromises

Two-in-one system combining data protection and access to information oversight in single authority to balance competing rights (as implemented in Serbia and planned for The Gambia)


Using colonial languages (English, French, Spanish, Portuguese) as interim solution for African language data governance while working toward indigenous language solutions


Creating trustworthy equivalence rather than identical laws for mutual recognition of data protection frameworks


Establishing formal cooperation channels between countries’ data protection agencies with clear protocols for audits and enforcement coordination


Starting with bilateral one-to-one meetings between stakeholders before expanding to multilateral dialogue in countries without established frameworks


Mapping regulatory bottlenecks in cross-border data sharing to identify specific areas for targeted bilateral and multilateral cooperation


Thought provoking comments

Protection of personal data, as well as the right to privacy in general, is one of the most threatened fundamental human rights in today’s era of rapid development of modern technologies… Just as a day cannot exist without night, summer without winter, so the processing of personal data cannot exist without its protection.

Speaker

Milan Marinovic (Commissioner, Serbia)


Reason

This philosophical framing elevated the discussion from technical compliance to fundamental human rights, using powerful metaphors to illustrate the inseparable relationship between data use and protection. It challenged the common view that privacy and innovation are in tension.


Impact

This comment shifted the entire tone of the discussion from technical implementation to rights-based approaches. It influenced subsequent speakers to frame their responses in terms of balancing rights rather than just regulatory compliance, and set up the foundation for later discussions about balancing privacy with other human rights like freedom of information.


My idea is to form an association of DPAs named E-association of DPAs from all over the world on a global level in an online format… I plan next week to send to all DPAs in the world email in which I will explain the idea of creating an association and ask them did they support this idea.

Speaker

Milan Marinovic (Commissioner, Serbia)


Reason

This was a concrete, actionable proposal that moved beyond theoretical discussion to practical implementation. It demonstrated how regional cooperation could scale to global cooperation and showed initiative in creating new institutional frameworks.


Impact

This proposal energized the discussion and influenced other panelists to think more concretely about actionable steps. It led to Olga offering to host a side meeting during CDIG, showing how one concrete proposal can catalyze additional collaborative initiatives.


It’s not just about a whole of government. I understand why The Gambia is doing a whole of government, but for us at the regional perspective, we’re looking at a whole of society because this is absolutely vital… It’s about harmonisation at the regional level, but not homogenisation.

Speaker

Folake Olagunju (ECOWAS)


Reason

This distinction between ‘whole of government’ and ‘whole of society’ approaches was intellectually significant, recognizing that data governance affects everyone, not just government entities. The harmonization vs. homogenization distinction was particularly nuanced, acknowledging the need for common standards while respecting local contexts.


Impact

This comment broadened the scope of the discussion to include all stakeholders and influenced how other speakers conceptualized inclusive governance. It also provided a framework for thinking about regional cooperation that respects sovereignty while enabling interoperability.


Kyrgyzstan doesn’t use centralization and localization of data. Centralization of data has risks for data protection and localization of data creates additional burden to business. This approach differs from many neighboring countries like Kazakhstan and Uzbekistan where data centralization and localization of data is used.

Speaker

Tattugal Mambetalieva (Kyrgyzstan)


Reason

This was a bold counter-narrative to the common assumption that data localization equals data sovereignty. It challenged conventional wisdom by arguing that decentralization might actually be better for both privacy and business, offering a different model from regional neighbors.


Impact

This comment introduced complexity to the discussion about data sovereignty approaches and showed that there isn’t one-size-fits-all solution. It prompted reflection on different models and their trade-offs, contributing to a more nuanced understanding of policy options.


So, the data will run out of your countries, and your people and the country don’t own the data. This is called digital colonization. So, the new protocol and the solid and the lingual AI is helping to anti, you know, this kind of a digital colonization.

Speaker

Audience member (Singapore IGF)


Reason

The introduction of ‘digital colonization’ as a concept was provocative and reframed data governance as an anti-colonial struggle. This connected historical power dynamics to contemporary digital issues, particularly relevant for the Global South participants.


Impact

This comment resonated strongly with the moderator and several panelists, as evidenced by the moderator’s response: ‘Data colonization and data sovereignty are key issues in our conversation from where many of us come from.’ It added a critical perspective that connected technical discussions to broader issues of global power and equity.


I, as someone who deals with the protection of personal data, feel like a cat at a dog’s exhibition.

Speaker

Milan Marinovic (Commissioner, Serbia)


Reason

This humorous but insightful metaphor captured the tension that privacy advocates often feel in technology-focused discussions. It acknowledged the challenge of being the ‘voice of caution’ in innovation-driven environments while doing so with self-awareness and humor.


Impact

This comment created a moment of levity that made the discussion more relatable and human. It also established Marinovic as someone who could balance serious concerns with approachable communication, which may have made his subsequent technical proposals more palatable to the audience.


Overall assessment

These key comments fundamentally shaped the discussion by elevating it from a technical policy exchange to a more philosophical and rights-based dialogue. Marinovic’s human rights framing and metaphors set a tone that influenced how other speakers approached the topic, while his concrete proposal for a global DPA association provided a practical anchor for the theoretical discussions. The ‘whole of society’ vs ‘whole of government’ distinction broadened the scope of consideration, and the digital colonization concept added critical depth about power dynamics. Together, these comments created a multi-layered conversation that balanced philosophical foundations, practical proposals, inclusive approaches, and critical perspectives on global digital governance. The discussion evolved from individual country reports to collaborative problem-solving, with participants building on each other’s insights to develop more nuanced and actionable approaches to cross-border data governance.


Follow-up questions

How can regions effectively balance harmonization with homogenization when developing cross-border data governance frameworks?

Speaker

Folake Olagunju


Explanation

This addresses the challenge of creating unified regional standards while respecting individual country nuances and sovereignty


What specific mechanisms can be established to create mutual recognition of data protection frameworks between countries with different legal systems and institutional maturity levels?

Speaker

Wairagala Wakabi


Explanation

This explores how countries can work together despite having different levels of development in their data protection systems


How can the proposed E-association of DPAs be structured and implemented to facilitate global cooperation among data protection authorities?

Speaker

Milan Marinovic


Explanation

This follows up on the Commissioner’s initiative to create a global online association of data protection authorities for knowledge sharing and cooperation


What are the practical steps for implementing controlled test environments for interoperable platforms across member states in different sectors?

Speaker

Folake Olagunju


Explanation

This addresses the need for pilot programs to test cross-border data sharing in sectors like health, education, and identity systems


How can countries with over 2,000 indigenous languages effectively implement language-preserving AI and data governance protocols?

Speaker

Dr. Ismaila Ceesay and Singapore IGF representative


Explanation

This explores the challenge of preserving linguistic diversity while implementing modern data governance frameworks


What is the level of awareness and potential for adoption of the SOLID protocol across different regions for addressing data sovereignty and digital colonization?

Speaker

Singapore IGF representative


Explanation

This investigates how emerging technologies can help countries maintain control over their data while enabling cross-border flows


How can countries without established IGFs or local internet governance structures initiate data governance conversations and which stakeholders should lead this process?

Speaker

Participant from The Bahamas


Explanation

This addresses the practical challenge of starting data governance initiatives in countries with limited existing infrastructure


How can data protection laws be designed to ensure all human rights are respected, particularly balancing privacy rights with freedom of information and expression?

Speaker

Joseph from Wikimedia Foundation


Explanation

This explores the complex challenge of protecting multiple human rights simultaneously without one undermining another


What specific criteria should be used to distinguish between sensitive and non-sensitive data categories at both national and regional levels?

Speaker

Folake Olagunju and Meri Sheroyan


Explanation

This addresses the need for clear categorization systems to facilitate appropriate data sharing while maintaining security


How can small-scale pilot initiatives for cross-border public service delivery be designed and implemented in areas like consular services, migration, and environmental cooperation?

Speaker

Meri Sheroyan


Explanation

This explores practical approaches to testing cross-border data sharing through specific use cases


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

WS #376 Elevating Childrens Voices in AI Design

WS #376 Elevating Childrens Voices in AI Design

Session at a glance

Summary

This workshop, titled “Elevating Children’s Voices in AI Design,” brought together researchers, experts, and young people to discuss the impact of artificial intelligence on children and how to make AI development more child-centric. The session was sponsored by the Lego Group and included participants from the Family Online Safety Institute, the Alan Turing Institute, and the UN’s Center for AI and Robotics. The discussion began with powerful video messages from young people across the UK, who emphasized that AI should be viewed as a tool to aid rather than replace humans, while highlighting concerns about privacy, environmental impact, and the need for ethical development.


Stephen Balkam from the Family Online Safety Institute presented research showing that, unlike previous technology trends, teens now believe their parents know more about generative AI than they do. The research revealed that while parents use AI mainly for analytical tasks, teens focus on efficiency-boosting activities like proofreading and summarizing. Both groups expressed concerns about job loss and misinformation, though they remained optimistic about AI’s potential for learning and scientific progress. Maria Eira from UNICRI shared findings from a global survey indicating a lack of awareness among parents about how their children use AI for personal purposes, and noted that parents who regularly use AI themselves tend to view its impact on children more positively.


Dr. Mhairi Aitken from the Alan Turing Institute presented research funded by the Lego Group showing that about 22% of children aged 8-12 use generative AI, with significant disparities between private and state-funded schools. The research found that children with additional learning needs were more likely to use AI for communication, and that children showed strong preferences for traditional tactile art materials over AI-generated alternatives. Key concerns raised by children included bias and representation in AI outputs, environmental impacts, and exposure to inappropriate content. The discussion concluded that AI systems are not currently designed with children in mind, echoing patterns from previous technology waves, and emphasized the need for greater transparency, child-centered design principles, and critical AI literacy rather than just technical understanding.


Keypoints

## Major Discussion Points:


– **Children’s Current AI Usage and Readiness**: Research reveals that children aged 8-12 are already using generative AI (22% reported usage), but AI systems are not designed with children in mind. This creates a fundamental mismatch where children are adapting to adult-designed systems rather than having age-appropriate tools available to them.


– **Parental Awareness and Communication Gaps**: Studies show significant disconnects between parents and children regarding AI use. While parents are aware of academic uses, they often don’t know about more personal uses like AI companions. Parents who regularly use AI themselves tend to view its impact on children more positively, highlighting the importance of parental AI literacy.


– **Equity and Access Concerns**: Research identified stark differences in AI access and education between private and state-funded schools, with children in private schools having significantly more exposure to and understanding of generative AI. This points to growing digital divides that could exacerbate existing educational inequalities.


– **Children’s Rights and Ethical Considerations**: Young people expressed sophisticated concerns about AI bias, environmental impact, and representation in AI outputs. Children of color became upset when not represented in AI-generated images, sometimes choosing not to use the technology as a result. There’s a strong call for children’s voices to be included in AI development and policy decisions.


– **Design and Safety Challenges**: The discussion emphasized that AI systems need to be designed with children’s wellbeing from the start, not retrofitted later. Key concerns include inappropriate content exposure, emotional dependency on AI companions, and the need for transparency about how AI systems work and collect data.


## Overall Purpose:


The workshop aimed to elevate children’s voices in AI design and development by presenting research on how AI impacts children, sharing direct perspectives from young people, and advocating for child-centric approaches to AI development. The session sought to demonstrate that children have valuable insights about AI and should be meaningfully included in decision-making processes about technologies that will significantly impact their lives.


## Overall Tone:


The discussion maintained a consistently serious yet optimistic tone throughout. It began with powerful, articulate messages from young people that set a respectful, non-patronizing approach to children’s perspectives. The research presentations were delivered in an academic but accessible manner, emphasizing both opportunities and concerns. The panel discussion became increasingly collaborative and solution-focused, with participants building on each other’s insights. The presence of young participants (like 17-year-old Ryan) reinforced the workshop’s commitment to including youth voices, and the session concluded on an empowering note with the quote “the goal cannot be the profits, it must be the people,” emphasizing the human-centered approach needed for AI development.


Speakers

**Speakers from the provided list:**


– **Online Participants** – Young people from across the UK sharing their views on generative AI (names not disclosed for safety reasons)


– **Dr. Mhairi Aitken** – Senior Ethics Research Fellow at the Alan Turing Institute, leads the children and AI program


– **Leanda Barrington‑Leach** – Executive Director of the Five Rights Foundation


– **Participant** – Multiple unidentified participants asking questions from the audience


– **Maria Eira** – AI expert at the Center for AI and Robotics at the United Nations Interregional Crime and Justice Research Institute (UNICRI)


– **Adam Ingle** – Representative from the Lego Group, workshop moderator and convener


– **Stephen Balkam** – Founding CEO of the Family Online Safety Institute (FOSI)


– **Mariana Rozo‑Paz** – Representative from DataSphere Initiative


– **Joon Baek** – Representative from Youth for Privacy, a youth NGO focused on digital privacy


– **Co-Moderator** – Online moderator named Lisa


**Additional speakers:**


– **Ryan** – 17-year-old youth ambassador of the OnePAL Foundation in Hong Kong, advocating for digital sustainability and access


– **Elisa** – Representative from the OnePAL Foundation (same organization as Ryan)


– **Grace Thompson** – From CAIDP (asked question online, mentioned by moderator)


– **Katarina** – Law student in the UK studying AI law (asked question online)


Full session report

# Elevating Children’s Voices in AI Design: A Comprehensive Workshop Report


## Executive Summary


The workshop “Elevating Children’s Voices in AI Design,” sponsored by the Lego Group, brought together leading researchers, policy experts, and young people to address the critical gap between children’s experiences with artificial intelligence and their representation in AI development decisions. The session featured participants from the Family Online Safety Institute, the Alan Turing Institute, and UNICRI (United Nations Interregional Crime and Justice Research Institute), alongside direct contributions from young people across the UK and internationally.


The discussion revealed a fundamental challenge: whilst children are already using generative AI at significant rates, AI systems are not designed with children’s needs, safety, or wellbeing in mind. This pattern mirrors previous technology rollouts where child safety considerations were retrofitted rather than built in from the start. The workshop established that children possess sophisticated understanding of AI’s implications and valuable insights for its development, emphasizing the need for meaningful youth participation in AI governance.


## Opening Perspectives: Children’s Voices on AI


The workshop opened with compelling video messages from young people across the UK who articulated sophisticated perspectives on AI’s potential and risks. These participants emphasized that AI should be viewed as a tool to aid rather than replace humans, stating: “AI is extremely advantageous when used correctly. But when misused, it can have devastating effects on humans.”


The young participants demonstrated remarkable awareness of complex issues surrounding AI development. They highlighted concerns about privacy, describing it as “a basic right, not a luxury,” and showed deep understanding of environmental impacts, noting that “AI training requires massive resources including thousands of litres of water and extensive GPU usage.” They asserted their right to meaningful participation in AI governance: “Young people like me must be part of this conversation. We aren’t just the future, we’re here now.”


Their perspectives on education were particularly nuanced, advocating that “AI should be taught in schools rather than banned, with focus on critical thinking and fact-checking skills.” This position demonstrated their understanding that prohibition is less effective than education in preparing young people for an AI-integrated world.


The session also referenced the Children’s AI Summit, which produced a “Children’s Manifesto for the Future of AI” featuring contributions from young people including Ethan (16) and Alexander, Ashvika, Eva, and Mustafa (all 11).


## Research Findings: Current State of Children’s AI Use


### Family Online Safety Institute Research


Stephen Balkam from the Family Online Safety Institute (FOSI), a 501c3 charitable organization, presented research that revealed an unusual pattern in technology adoption. For the first time, teenagers reported that their parents knew more about generative AI than they did, primarily because parents were learning AI for workplace purposes.


The research revealed distinct usage patterns between generations. Parents primarily used AI for analytical tasks related to their professional responsibilities, whilst teenagers focused on efficiency-boosting activities such as proofreading and summarizing academic work. However, concerning trends emerged showing that students were increasingly using generative AI to complete their work entirely rather than merely to enhance it.


Both parents and teenagers expressed shared concerns about job displacement and misinformation, though they remained optimistic about AI’s potential for learning and scientific progress. Data transparency emerged as the top priority for both groups when considering AI companies.


Stephen also conducted an interactive demonstration with the audience, showing AI-generated versus real images, including examples from Google’s new Veo video generator, to illustrate the increasing sophistication of AI-generated content and the challenges this poses for detection.


### UNICRI Global Survey Insights


Maria Eira from UNICRI’s Centre for AI and Robotics shared findings from a survey published three days prior to the workshop, covering 19 countries across Europe, Asia, Africa, and the Americas. The research revealed significant communication gaps between parents and children regarding AI use. While parents demonstrated awareness of their children’s academic AI applications, they often remained unaware of more personal uses, such as AI companions or seeking help for personal problems.


The research identified a crucial correlation: parents who regularly used generative AI themselves felt more positive about its impact on their children’s development. This finding suggested that familiarity with technology shapes attitudes toward children’s use.


Eira’s research also highlighted the need for separate legislative frameworks specifically targeting children’s AI rights, recognizing that children cannot provide the same informed consent as adults and face unique vulnerabilities in AI interactions.


### Alan Turing Institute Children and AI Research


Dr. Mhairi Aitken presented research on children’s direct experiences with AI, funded by the Lego Group. The study found that approximately 22% of children aged 8-12 reported using generative AI, with three out of five teachers incorporating AI into their work. However, the research revealed stark disparities in access and understanding between private and state-funded schools, pointing to emerging equity issues.


The research uncovered particularly significant findings regarding children with additional learning needs, who showed heightened interest in using AI for communication and support. This suggested AI’s potential for inclusive education, though Dr. Aitken emphasized that development must be grounded in understanding actual needs rather than technology-first approaches.


When given choices between AI tools and traditional materials for creative activities, children overwhelmingly chose traditional tactile options. They expressed that “art is actually real” whilst feeling they “couldn’t say that about AI art because the computer did it, not them.” This preference revealed children’s sophisticated understanding of authenticity and creativity.


The research also documented concerning issues with bias and representation in AI outputs. Children of color became upset when not represented in AI-generated images, sometimes choosing not to use the technology as a result. Similarly, children who learned about the environmental impacts of AI models often decided against using them.


## Panel Discussion and Key Themes


### Design and Safety Challenges


The panel discussion revealed that AI systems fundamentally fail to consider children’s needs during development. Stephen Balkam noted that this pattern repeats previous web technologies where safety features were retrofitted rather than built in from the start. Dr. Aitken emphasized that the burden should be on developers and policymakers to make systems safe rather than expecting children to police their interactions.


Particular concerns emerged around AI companions and chatbots, with evidence that young children were forming emotional attachments to these systems and using them for therapy-like conversations. This raised questions about potential dependency and isolation from real community connections.


### Educational Impact and Equity


The research revealed troubling equity gaps in AI access and education. Children in private schools demonstrated significantly more exposure to and understanding of generative AI compared to their peers in state-funded schools, suggesting that AI could exacerbate existing educational inequalities.


However, the discussion also highlighted AI’s potential for supporting inclusive education, particularly for children with additional learning needs who showed interest in using AI for communication support.


### Privacy, Transparency, and Rights


Data protection emerged as a fundamental concern across all speakers. The young participants’ assertion that privacy is a basic right was echoed by researchers who emphasized the need for transparency about AI system operations and data collection practices. Stephen Balkam noted the ongoing challenge of balancing safety and privacy, observing that more safety potentially requires less privacy.


## International Youth Participation


The workshop included international youth participation, notably from 17-year-old Ryan, a youth ambassador of the OnePAL Foundation in Hong Kong, who asked specifically about leveraging generative AI for supporting people with disabilities. Elisa, also from the OnePAL Foundation, raised questions about power imbalances between children and AI systems. Zahra Amjed was scheduled to join as a young representative but experienced technical difficulties.


## Areas of Consensus and Ongoing Challenges


Participants agreed on several fundamental principles:


– AI systems must be designed with children’s needs and safety in mind from the outset


– Children must be meaningfully included in AI decision-making processes


– Transparency about data practices and privacy protection are essential requirements


– AI shows significant potential for supporting children with disabilities and additional learning needs


– Environmental responsibility must be considered in AI development


However, several challenges remained unresolved. Maria Eira noted that long-term impacts of AI technology on children remain unclear with contradictory research results. The challenge of creating AI companions that support children without fostering dependency remained unaddressed, and questions about global implementation of AI literacy programs require continued attention.


## Emerging Action Items and Recommendations


The discussion generated several concrete initiatives:


**Immediate Initiatives**: UNICRI announced the launch of AI literacy resources, including a 3D animation movie for adolescents and a guide for parents, at the upcoming AI for Good Summit.


**Industry Responsibilities**: Technology companies were called upon to provide transparent explanations of AI decision-making processes, algorithm recommendations, and system limitations.


**Educational Integration**: Rather than banning AI in schools, participants advocated for integration with strong emphasis on critical thinking and fact-checking skills.


**Research and Development**: The discussion highlighted needs for funding research on AI literacy programs and designing AI tools with children’s needs prioritized from the start.


**Legislative Approaches**: Participants called for separate legislation specifically targeting children’s AI rights and protections, recognizing children’s unique vulnerabilities in AI interactions.


## Conclusion


The workshop established that the question is not whether children are ready for AI, but whether AI is ready for children. Current systems fail to meet children’s needs, rights, and developmental requirements, necessitating fundamental changes in design approaches, regulatory frameworks, and industry practices.


As Maria Eira emphasized, echoing the sentiment of young participants: “the goal cannot be the profits, it must be the people.” This principle encapsulates the fundamental shift required in AI development—from technology-first approaches toward human-centered design prioritizing children’s rights, wellbeing, and meaningful participation.


The workshop demonstrated that when children’s voices are genuinely heard and valued, they contribute essential perspectives that benefit not only young people but society as a whole. Moving forward, the emphasis must be on meaningful youth participation in AI governance, transparent and child-friendly AI systems, critical AI literacy education, and regulatory approaches that protect children’s rights while respecting their agency.


Session transcript

Adam Ingle: Hi, everyone. Thank you for joining this panel session workshop called Elevating Children’s Voices in AI Design. Sponsored by the Lego Group and also participating are the Family Online Safety Institute, the Alan Turing Institute, and the Center for AI and Robotics at the United Nations Interregional Crime and Justice Research Institute. We’ve got an excellent workshop for you today, where you’ll hear all about insights from the latest research on the impact of AI on children, and also hear from young people themselves about their experiences and their hopes. So this is just a quick run of the show. We’re going to start with a message from the children about their views on generative AI, and then we’re going to hear some of the latest research from Stephen Balkam, who’s the founding CEO of the Family Online Safety Institute; Maria Eira, who’s an AI expert at the Center for AI and Robotics in the UN; and Mhairi Aitken, a Senior Ethics Research Fellow at the Alan Turing Institute. Then we’ll move on to a panel discussion and questions. Please feel free to ask questions. We want to take them from the audience, both in the room and online. We’ll also have a young person, Zahra Amjed, join us to share her insights and ask the panel questions herself. But without further ado, let’s get underway, and we’re going to start with this video message from young people across the UK. We’re not disclosing names just for safety reasons, but please play the message and the video when you’re ready.


Online Participants: AI is extremely advantageous when used correctly. But when misused, it can have devastating effects on humans. That’s why we must view AI as a tool to aid us, not to replace us. Right now, students are memorising facts by adaption while AI is outpacing that system entirely. Rather than banning AI in schools, we should teach students how to use it efficiently. Skills like fact-checking, critical thinking, and prompt engineering aren’t optional anymore, they’re essential. We need to prepare students for a world where AI is everywhere, teaching them to use it efficiently while not relying on it. I feel that AI can help humanity in the future, but it also can harm, so it must be used in an ethical manner. I find AI really fun, but sometimes it’s not safe for children because it gives bad advice. Privacy is not a luxury, it’s a basic right. The data that AI collects is valuable, and if it’s not protected, it can be used to hurt the very people it’s supposed to help. The goal cannot be the profits, it must be the people. LLMs consume thousands of litres of water during training, and GPT-3 required over 10,000 GPUs over 15 days. Hundreds of LLMs are being developed, and their environmental impact is immense. But like all powerful tools, AI must be managed responsibly, or its promise will become a problem. The choices that government and AI developers make today will not just affect the technology, but our lives, our communities, and the world that we leave for our next generation. Young people like me must be part of this conversation. We aren’t just the future, we’re here now. Our voices, our experiences, and our hopes must matter in shaping this technology. I think adults should listen to children more because children have lots of good ideas, as well as adults, with AI. Artificial intelligence is a rising tide, but tides follow the shape of the land, so we must shape that land. We must set the direction, and we must act to decide, together, the kind of world that we want to build. Because if we don’t, that tide may wash away everything that we value most. Fairness, privacy, truth, and even trust. AI holds this incredible promise, but that promise will only be fulfilled if we build it with trust, with care, with respect, and with a clear vision of the kind of world that we want to create, together. Thank you.


Adam Ingle: Well, thank you so much to all the young people there that put together those pretty powerful messages. I mean, from our perspective at the Lego Group, and also I know from all my co-panelists, this is all about elevating children’s voices and not being patronizing to their views, making sure they’re part of decision-making. And it’s great to see such eloquent young people who have real ideas about the future of AI, and we’re here to kind of discuss them more. I’m gonna pass over to Stephen Balkam now to talk about his latest research from the Family Online Safety Institute about the impact of AI on children.


Stephen Balkam: Well, thank you very much, Adam, and thank you for convening us and bringing us here. Really appreciate it. For those of you who are not familiar, FOSI, the Family Online Safety Institute, we are a 501c3 charitable organization based in the United States, but we work globally. Our mission is to make the online world safer for kids and their families. And we work in what we call the three Ps of policy, practices, and parenting. So that’s enlightened public policy, good digital practices, and digital parenting, which is probably the most difficult part of this, where we try to empower parents to confidently navigate the web with their kids. And the web increasingly is AI-infused, shall I say. I want to begin by just saying that two years ago, in 2023, we conducted a three-country study called Generative AI Emerging Habits, Hopes, and Fears. And at the time, we believe it was the first survey done around generative AI, given that ChatGPT had emerged only a few months before. And we talked to parents and teens in the U.S., in Germany, and in Japan, and some of the results surprised us. And you can see in the slide, and I’ll talk to those data points. The first thing that we found which surprised us was that teens thought that their parents knew more about generative AI than they did. With previous trends, particularly in the early days of the web, and then web 2.0, and social media, kids were always way ahead of their parents in terms of the technology. But in this case, a large, sizable share of teens in all three countries reported that their parents had a better understanding than they did. And we dug a little deeper and found that, of course, many of the parents were struggling to figure out how to use gen AI at work, or at the very least, trying to figure it out before gen AI took over their jobs. But anyway, that was the first interesting trend. Parents, for their part, said that they used it mainly for analytical tasks, such as using gen AI platforms as a search engine and as a language translator. And that’s only increased over the last couple of years. Teens mostly were looking to it for efficiency-boosting tasks, such as proofreading and summarizing long texts to make them shorter and faster to read. And we’ve already seen some interesting developments in those two years where ChatGPT is actually, instead of just being used to proofread and analyze their work, teens and young people are increasingly using Gen AI to do their work for them, their essays, their homework, whatever. In terms of concerns, job loss was the number one concern for both parents and teens, and also the spread of false information, which has only been accelerating since we did that study. Among other concerns, loss of critical thinking skills was the parents’ number three, whereas kids were more concerned about new forms of cyberbullying, again, which is something we’ve been seeing since we did that study. There was a lot of excitement, too. I mean, obviously concerns, but parents and teens both shared an optimism that Gen AI will, above all else, help them learn new things. Very excited also for AI’s potential to bring progress in science and healthcare, and to free up time by reducing boring tasks as well as progress in education. But then when we asked them about who was responsible for making sure that teens had a safe Gen AI experience, interestingly enough, parents believed that they themselves had to take the greatest responsibility for ensuring their teen’s safety.
And this was particularly true in the United States where, I’m afraid to say, we have less trust in our government to guide and to pass laws. Other countries were more heavily reliant on their own governments and tech companies. And then we asked the question, what do parents and teens want to learn? And what are the topics that would help them navigate these conversations and address their concerns about Gen AI more broadly? And top of the list was transparency of data practices. And secondly, steps to reveal what’s behind Gen AI and how data is sourced and whether it can be trusted was a key element. Another area they felt that industry should take note of: that data transparency is top of mind for parents and teens, and that companies should take strides to be more forthcoming about how users’ data is being collected and used, which I think is something that we’ll hear more about in the next presentation. And then fast forward to this year, we conducted what we call the Online Safety Survey, now titled Connected and Protected, in the United States at the end of 2024 and into 2025. And this was a survey more about online safety trends in general, but we did include questions about Gen AI in the research. And a basic question, do you think that AI will have a positive or negative impact on each of the following areas? And these areas were creativity, academics, media literacy, online safety, and cyberbullying. And in each of these categories, kids were more likely to be optimistic about AI’s impact on society. Think about that. Kids felt more optimistic than their parents that AI was going to have a positive impact. Now parents weren’t necessarily pessimistic; across the board, about half of parents thought that AI would have a positive impact, but 60% of both parents and kids thought AI would have a negative impact on cyberbullying. And this, of course, is where we see stitched-together videos, a kid’s face put onto all sorts of awful graphic images that are then spread around the school. When it comes to online safety, parents and kids were split down the middle, with just over half of both groups reporting that AI would have a positive impact on online safety. And when comparing data from wave one of the survey with wave two, we saw that parents in the second wave were much more likely to say that their child had used Gen AI for numerous tasks, including help with school projects, image generation, brainstorming, and more. In the first wave of this survey, we asked participants to identify if images were real or AI-generated. Each respondent was presented with three images from a lineup of six to ensure accurate data. Less than half of respondents correctly identified two or more images, and you’re going to see an example of that in a moment. Less than 10% of respondents correctly identified all three images. And we’ll see how well you guys do in a minute. On the bright side, over four in five respondents correctly identified at least one image. And again, this survey was done before Google’s video generator came out, Veo, which is just mind-boggling how fast the developments are in this space. And some of the videos and images that have come out of that video generator are quite astounding. So based on this study, FOSI recommends the following.
That technology companies be much more transparent about AI technology, providing families with a clear explanation of why a Gen AI system produced a certain answer, why an algorithm is recommending certain content, and what the limitations of AI tools like chatbots are. Industry should also learn from past mistakes and design AI tools with children in mind, not as an afterthought. And industry needs to fund research and programs that will help children learn AI literacy so they are better able to discern real content from AI-generated content and make informed decisions based on that knowledge. So now I’m going to test you guys on these three images, so have a look and just have a show of hands. I don’t know how we’re going to do this online. But how many of you think that the first image is real? Any takers for real? Okay. How many for AI-generated? All right. More real than AI. Okay. Second one. AI? Real? All right. And the last one, real? Or AI? All right. Well, you guys did pretty well. The first one is a real painting. I’ve got the actual citation for you if you want to find out who the artist was. And yes, the second two were both AI-generated. Interestingly enough, in our study, more men than women thought number two was real. Maybe that was wishful thinking. You can make your own conclusions. I think 85 to 90% of women immediately saw that she was not real. And if you look closely, her earrings don’t match, which, again, I didn’t see. So, anyway, back to you.


Adam Ingle: Thanks, Stephen. I performed poorly on that test, I will admit. So next up we’ve got Maria, and she’s an AI expert at the United Nations… Sorry, it’s a complex acronym. The United Nations Interregional Crime and Justice Research Institute and their Center for AI and Robotics. Maria, please take it away. She’s joining us online.


Maria Eira: Hello, everyone. Can you hear me and see my slides? Everything is working? Yes. Perfect. Thank you so much, Adam. And good afternoon, everyone. First of all, I would like to thank you, Adam, and the Lego Group for the invitation to be part of this very interesting workshop. So I work at the Center for AI and Robotics of UNICRI. Indeed, it’s a complex, long name for a UN research institute that focuses on reducing crime and violence around the world. And the center has a particular mandate to understand how AI contributes to both reduce crime and also to be… How it can also be used by malicious actors for criminal purposes, for example. And so now I will present to you a project that we have together with Walt Disney to promote AI literacy among children and parents. And we focus on AI, but particularly on generative AI. So to start this project, we were trying to understand the parental perspectives on the use and impact of gen AI on adolescents, a little bit as FOSI was doing. So we distributed a survey worldwide and received… The survey was targeting parents and we received replies from 19 countries across Europe, Asia, Africa and the Americas. So, we just published this paper three days ago. The paper includes all the conclusions from this survey. It’s free access and you can access it via the QR code, but I already brought you here the main conclusions from this survey. So we had two main conclusions. So the first one, we understood that there is a lack of awareness from parents and low communication between parents and their children on how adolescents are using generative AI tools. And we were targeting parents of adolescents of 13 to 17 years old. And so on the left, we have a graph, I don’t know if you can see it, but I will describe it a little bit. So this graph is parents’ insights on teenagers’ generative AI use across different activities. And so on the first smaller graph we have, the activity is to search or get information about a topic. And so we can see that more than 80% of parents report that their kids are using generative AI to search information about a topic. And they are also using it quite often to help with school assignments. So for more academic purposes, we can see that parents are aware that their kids are using generative AI. However, for more personal uses, such as using generative AI as a companion or to ask for help with personal or health problems, we can see that the most popular reply was I disagree. So they feel that their kids never use generative AI for these more personal purposes. And the second most popular reply was, I don’t know. So, this confirms a little bit, although we were saying right now that parents are becoming more aware. But still, we can see that as a worldwide distribution, a lot of parents still don’t know if their children are using generative AI for more personal uses. The second conclusion is what we can see here on the graph on the right. And so, we started by, it’s basically, we understood that parents who use, I’m already giving the conclusion. So, parents who use generative AI tools feel more positive about the impact that this technology can have on their children’s development. And so, we can see on the graph on the right, so we have started by dividing parents according to their familiarity with generative AI tools.
And so, we divided parents into regular users, the ones who use generative AI every day or a few times per week, sporadic users, the ones who use generative AI a few times per month or less, and an unfamiliar audience, who never tried or never heard about this technology. And so, we can see that the regular users, so the yellow bars here, feel much more positive about the impact that the technology can have on critical thinking, on their career, on their social work, and also about the general impact that this technology can have on kids’ development. And so, the unfamiliar parents, so the blue ones here, were negative in all these fields. So, this shows that when parents are familiar with the technology, when they use the technology, they see it differently. And thinking… And viewing this technology in a positive way also helps children to use it in a more positive way and not fear this technology so much. And so besides engaging with parents, we also engaged with children and we organized a workshop in a high school to collect the perspectives from the adolescents. And I brought here some interesting comments and feedback from children. So when we asked them where they learned about generative AI, they mentioned friends, they mentioned TikTok, my 20-year-old brother. So we can see that they are not learning how to use these tools in schools or from other trustworthy sources, let’s say. And when we asked them what’s one thing that adults should know about how teenagers are using generative AI, their replies were: they use it to cheat in school, kids use AI to make everything, or adults should know more about it. And I think these were also very interesting to see their feedback. And it also helped us a lot to develop the main outcomes of this project. So we basically produced two AI literacy resources that will be launched in two weeks at the AI for Good Summit. So on the left, we have a 3D animation movie for adolescents that explains what AI is, how generative AI works, and very importantly, that not all the answers can be found in this chat box. And on the right, we have a guide for parents on AI literacy to support them in guiding their children to use this technology in a responsible way. So to communicate, so we focus a lot on the communication, which was something that we concluded from the initial survey, focusing on the communication about the potential risks and also to explore the benefits of this technology together to make parents engage with children and to learn together, because we are all learning on this. The technology is really advancing at a very fast pace, so we will all need to be on top of this development. So if you’d be interested, both resources will be available online soon, so if you’d like to receive them, just reach out to me. I’ll leave my email here. Also, if you have any other questions, I’m happy to reply. So thank you for your time and attention.


Adam Ingle: Thanks, Maria. And now we have Mhairi Aitken, Senior Ethics Research Fellow at the Alan Turing Institute, to discuss research that the Lego Group was actually very proud to sponsor.


Dr. Mhairi Aitken: Thank you, Adam, and thank you for the invitation to join this discussion today. I’m really excited to be a part of this really important panel discussion. Yes, as Adam said, I’m a Senior Ethics Fellow at the Alan Turing Institute. The Turing Institute is the UK’s national institute for AI and data science, and at the Turing, I have the great privilege of leading a program of work on the topic of children and AI. The central driver, the central rationale behind all our work in the children and AI team at the Turing is the recognition that children are likely to be the group who will be most impacted by advances in AI technologies, but they’re simultaneously the group that are least represented in decision-making about the ways that those technologies are designed, developed, and deployed, and also in terms of policymaking and regulation relating to AI. We think that’s wrong. We think that needs to change. Children have a right to a say in matters that affect their lives, and AI is clearly a matter that is affecting their lives today and will increasingly do so in the future. So over the last four years, our team, the children and AI team at the Alan Turing Institute, has been working on projects to develop and demonstrate approaches to meaningfully bring children and young people into decision-making processes around the future of AI technologies. So we’ve had a series of projects and a number of different collaborations, including with UNICEF, with the Council of Europe Steering Committee on the Rights of the Child, the Scottish AI Alliance and Children’s Parliament, and most recently with the Lego Group. So I want to share some kind of headline findings from our most recent research, which has looked at the impacts of generative AI use on children and particularly on children’s well-being, and also share some messages from the Children’s AI Summit, which was an event that we held earlier this year. So firstly, from our recent research, and this is a project that was supported by the Lego Group and looked at the impacts of generative AI use on children, particularly children between the ages of 8 and 12. There were two work packages in this project. The first work package was a national survey, so we surveyed around 800 children between the ages of 8 and 12 as well as their parents and carers, and surveyed a thousand teachers across the UK. Now this research revealed that around a quarter of children, 22% of children between the ages of 8 and 12, reported using generative AI technologies, and the majority of teachers, so three out of five teachers, reported using generative AI in their work. But we found really stark differences between uses of AI within private schools and state-funded schools, and this is in the UK context, with children in private schools much more likely both to use generative AI but also to report having information and understanding about generative AI, and this points to potentially really important issues around equity in access to the benefits of these technologies within education. We also found that children with additional learning needs or additional support needs were more likely to report using generative AI for communication and for connection, and also from the teacher survey we found that there was significant interest in using generative AI to support children with additional learning needs. This was also a finding that came out really strongly in work package two of this research.
Work package two was direct engagement with children between the ages of 9 and 11 through a series of workshops in primary schools in Scotland, and throughout these workshops we found that children were really excited about the opportunity to learn about generative AI, and they were really excited about the ways that generative AI could potentially be used to support them in education, and again there was a strong interest particularly in the ways that generative AI could be used to support children with additional learning needs. But we found also that in these workshops, where we invited children to take part in creative activities and we gave them the option of using either generative AI tools or more traditional tactile art materials, we found overwhelmingly that children chose to use traditional tactile hands-on art materials. You’ll see on the quote at the bottom, one of the sentiments that was expressed very often in these workshops was this feeling that art is actually real, and children felt that they couldn’t say that about AI art because the computer did it, not them. And I think this reveals some really important insights into the choices that children make about using digital technologies, and a reminder that those choices are not just about the digital technology, but about the alternative options available and the context and environments in which children are making those choices. Through the research, children also highlighted a number of really important concerns that they had around the impacts of generative AI. And I just want to flag some of these briefly now. One of the major themes that came out through this work was a concern around bias and representation in AI models and the outputs of AI models. Over the course of six full-day workshops in schools in Scotland, we were using generative AI tools, and in this case it was OpenAI’s ChatGPT and DALL-E, to create a range of different outputs. And we found that each time children wanted an image of a person, it would by default create an image of a person that was white and predominantly a person that was male. Children identified this themselves and they were very concerned about this. They were very upset about this. But particularly for children of colour who were not represented through the outputs of these models, we found that children became very upset when they didn’t feel represented. And in many cases, children who didn’t feel represented by the outputs of models chose not to use generative AI in the future and didn’t want to use generative AI in the future. So it’s not just about the impact on individual children. It’s also about adoption of these tools and how representation feeds into that. Another big area of concern was the environmental impacts of generative AI. And this is something that we found has come out really consistently through all the work we’ve done engaging children and young people in discussions around AI. Where children have awareness or access to information about the environmental impacts of generative AI models, they often choose not to use those models. And we found in these workshops that where children learnt about the environmental impact, particularly the water consumption of generative AI models and the carbon footprint of generative AI models, they chose not to use those models in the future.
And they also pointed to this as an area in which they wanted policymakers and industry to take urgent action to address the environmental impacts of these models, but also to provide transparent, accessible information about the environmental impacts of those models. Finally, there were also big concerns around the ways that generative AI models can produce inappropriate and sometimes potentially harmful outputs. And children felt that they wanted to make sure that there were systems in place to ensure that children had access to age-appropriate models and that wouldn’t risk exposure to harmful or inappropriate content. Now, finally, I just wanted to also share some messages from the Children’s AI Summit, which was an event that we held in February of this year. This was an event that my team at the Alan Turing Institute ran in partnership with Queen Mary University of London, and it was supported by the Lego Group, Elevate Great and EY. The event brought together 150 children and young people between the ages of 8 and 18 from right across the UK for a full day of discussions, exploring their hopes and fears around how AI might be used in the future, and also setting out their messages for what they wanted to see on the agenda at the AI Action Summit in Paris. From the Children’s AI Summit, we produced the Children’s Manifesto for the Future of AI, and I’d really encourage you to look it up and have a read. It’s written entirely in the words of the children and young people who took part, and it sets out their messages for what they want world leaders, policymakers, developers to know when thinking about the future of AI. I just want to finish with a couple of quotes from the children and young people who took part in the Children’s AI Summit, and their message is really for you all here today about what needs to be taken on board when thinking about the role of children in these discussions. So firstly from Ethan, who is 16, and he says, Hear us, engage with us, and remember, AI may be artificial, but the consequences of your choices are all too real. And secondly, we have a quote from Alexander, Ashvika, Eva, and Mustafa, who were all aged 11, and they presented jointly at the Children’s AI Summit. And they said, we don’t want AI to make the world a place where only a few people have everything and everyone else has less. I hope you can make sure that AI is used to help everyone to make a safe, kind, and fair world. And I think that sums up the ethos of the Children’s AI Summit perfectly, and is also a mission that we really all need to get behind and make a reality. Thank you.


Adam Ingle: Thanks, Mhairi, and to Stephen and Maria as well, for some really exciting research findings. We’re going to move towards a panel session now, so we’ll take questions from the audience, both in person and online. If you’d like to think about some questions, feel free to ask them; if you’re online, you can ask the online moderator, Lisa, who will ask those questions for you. I’ve got a few myself, though, and we’re actually waiting for Zahra, our young representative, to join. I think there have been some technical difficulties there, so hopefully she’ll be joining us soon so we can hear directly from her. But to start things off: we heard a lot in the research that children are already using AI across multiple different contexts and for multiple different purposes. I want to take a step back and just ask, are children ready for AI, or is AI ready for children? Just as an open question to all the panellists here.


Dr. Mhairi Aitken: I’ll give that one a go. I mean, I think one of the big challenges we’re finding so far is that we know children of all ages are already interacting with AI on a daily basis. That starts with infants and preschool kids playing with smart toys and smart devices in the home, through to generative AI technologies and the ways that AI is used online on social media. And a lot of the problem here is that these tools are being used by children and young people of all ages, but they’re not designed for children and young people. We know that the ways children interact with AI systems are often very different from how adults engage with those tools, or with digital technologies more generally, and often very different from how the designers or developers of those systems anticipate that those tools might be used. And I think there’s a risk that we then put the burden or the expectation on children and young people themselves to police those online interactions and to take steps to be safe online, whereas actually the burden has to be on the developers, the policymakers and the regulators to make sure that those systems are safe and that there are age-appropriate tools and systems available for children to access and benefit from.


Stephen Balkam: Yeah, this feels like déjà vu all over again. I was very much involved in Web 1.0 back in the mid-90s, and it became very clear that the World Wide Web was not designed with kids in mind. We had to retrofit websites and create parental controls for the first time, but we never really caught up. Then Web 2.0 came along around 2005-2006, and sites like MySpace and then Facebook took off, first in colleges, then in high schools, then all the way down to elementary grade school level. Once again, not with kids in mind. And we’re just repeating that one more time with this AI revolution. There’s a great deal of concern, particularly around how much kids will trust chatbots; for instance, we’re seeing a lot of emotional attachment, with quite young kids talking to chatbots, thinking that they are real, and unloading their own personal thoughts to them. And for older teens and college-age kids, the fact that they’re using Gen AI to do their work, their homework, their projects and essays means that they’re not developing critical thinking skills but going straight to Gen AI for results. That is probably of even greater concern.


Adam Ingle: Thank you, Stephen. Maria, do you have any contributions to that question?


Maria Eira: Yeah, I definitely agree with everything that was said. Just adding that it’s not only that the AI systems are not ready, or that kids are not ready for AI, but that the whole environment isn’t ready. In terms of AI literacy, most people don’t really understand what AI is or how it works, whether it’s a kind of magic, when at the end of the day it’s actually just computations and statistical models. So it’s not just the technology that isn’t ready, it’s the whole environment, in terms of AI literacy in schools and so on.


Adam Ingle: Thank you. I’ve definitely got some more questions, but I can see we have someone in the audience that would like to ask a question. So please introduce yourself and ask the panel.


Mariana Rozo‑Paz: Thank you. Hi, everyone. I’m Mariana from the DataSphere Initiative. I hope you can hear me well. We have a youth project that has been engaging young people for a couple of years, and I wanted to thank you all for the amazing presentations and the amazing work that you’re doing. I think it’s actually very important that we have all of these stats, numbers, stories and experiences, and thank you also for starting with a video from children and closing with quotes. This introduction is just to say that we’re starting a new phase in our project, focused on influencers: not just kids that are becoming influencers, but also children that are sometimes turned into influencers by their parents, which comes with mind-blowing stats; adults that are becoming influencers and directly influencing children, not only to consume and buy their products or other products; and we’re also looking into AI agents as influencers in this digital space. As I think one of the girls sharing her story was saying, it’s not just that they’re influencing children’s digital lives in a general way; it’s actually affecting their very concrete lives and the relationships that they have with each other. So I just wanted to ask, and I think Stephen was already mentioning a bit around the influence of other children and social media: have you done any research on how influencers are shaping this space and how children and youth are experiencing social media in general? And have you started to ask about AI agents, and how that is influencing, particularly, the relationships they have in real life? I think that was a lot of questions, but thanks again so much.


Stephen Balkam: Yeah, I’ll try to respond to part of what you were saying. I mean, the technology is moving so fast that it’s incredibly hard for the research to keep up; that’s number one. No, we haven’t yet asked about AI bots being an influencing factor, although anecdotally we are seeing kids, teens, young adults and adults using AI for therapy. I mean, literally talking through deep emotional issues for hours at a time and getting responses from ChatGPT and others in a way that is very positive and self-reinforcing, but also potentially extremely dangerous, in the sense that an artificial intelligence bot is not human, will not be able to pick up on body cues and all the rest of it, and may not actually be able to challenge you in the way that a real human therapist will. One other point I’ll get to quickly, on the whole influencing world: there’s new legislation popping up, in the United States at least, that will compensate kids who’ve been part of a family vlog all their childhood, a bit like kid movie stars were back in the 30s. So now at least they’re getting compensation and, when they turn 18, a right to delete the videos they had no true consent to be a part of. But there’s a broader societal question about monetizing our kids. We are not in favor of that, particularly because there’s no way that a 7, 8, 9-year-old can give consent: yes, please film me every day and post this online so that I can go through college and you don’t have to pay, mom and dad. So anyway, maybe we’ll talk later, because you had a lot of different points in there.


Dr. Mhairi Aitken: Maybe I could just pick up on how this relates to the growth of AI companions, and the gender divide, in this context. Influence isn’t necessarily something that we’ve looked at so much in our research, which is mostly focused on 8 to 12-year-olds; that’s not to say that they’re not already being influenced, and many of them, certainly the 12-year-olds, are beginning to be on social media. But AI companions, I think, is an area that we really need to urgently get to grips with. There are more and more of these AI companions and AI personas that are clearly being marketed towards children, including young children, and we don’t really yet know what the impacts of that might be. There’s growing research, but we need more, and we need more action to be taken on this, including on AI companions that are marketed as addressing challenges of loneliness but that potentially create a dependence on, or a connection to, something that is very much outside of society and community, potentially exacerbating those challenges. That brings a particular set of risks to address. At the Children’s AI Summit, which again involved children between the ages of 8 and 18, there was a lot of interest among teenagers in potentially using AI companions to support children’s mental health, and a lot of interest in how that could be done. But the question is what it would mean to design and develop these tools in ways that are age-appropriate and safe, and that have children’s well-being and children’s mental health as a key element of the design process. At the moment the risk is that these tools are being developed and promoted without children’s well-being and children’s interests in mind in the development process, yet they are increasingly being relied on and used for those purposes. So I think it’s an area where we’re seeing a lot of interest from children and young people, but with a recognition that this needs to be done responsibly, safely and cautiously. Thanks.


Leanda Barrington‑Leach: Leanda, I see you’ve got a question. Please. Hello everyone, I’m Leanda Barrington-Leach, Executive Director of the 5Rights Foundation. Thank you so much for the presentations and for the research you’re doing, which is absolutely fabulous. I could ask lots of things and I could comment on lots of things, but given what you’re saying about the importance of designing AI with children’s rights in mind from the start, I just wanted to take the opportunity to raise awareness that there are regulatory and technical tools out there to do this, in particular the Children and AI Design Code, which the Alan Turing Institute also contributed to. This was work that brought AI experts, children’s rights experts and many others together over a very long period of time to develop a technical protocol for innovation that puts children’s rights at the center. So I just wanted to draw awareness to this, to say that we all agree that it’s so important, but also to know that there are actually tools out there to make it happen. Thank you.


Co-Moderator: Thanks, Leanda. Lisa, I think we’ve got an online question. We do indeed. Katarina, who is studying law in the UK, AI law specifically, is asking: should AI ethics for children be separated from general AI ethics? That’s the first question. Second question: do you think there should be state-level legislation or policies for AI systems targeting children specifically? Thank you.


Adam Ingle: Maria, I’ll pass to you first if you want to answer either of those questions.


Maria Eira: Yes, sure. Thank you for your question, it’s very relevant indeed. And definitely, yes, there should be separate legislation targeting children, because children don’t have the same capacity for consent or, for example, the same awareness of what consent means. There are several principles that cannot simply be carried over from adults to children, so we definitely need to have children’s rights in mind when developing this legislation.


Adam Ingle: Thanks, Maria. Stephen or Mhairi, do you want to comment? Just one of you, because we’ve got a few questions and I do want to get to everyone.


Dr. Mhairi Aitken: Yeah, I mean, I would agree that children have particular rights, they have particular needs, unique needs and experiences that should be addressed. I guess one other part of it is that if we design this well for children and if we get the regulatory requirements, policy requirements right for children, this benefits well beyond children as well. An AI system that’s designed well with children in mind is also going to have benefits in terms of other vulnerable users and wider user groups. So I think yes, there are unique perspectives, unique considerations that should be addressed, but the benefits go beyond that.


Adam Ingle: So before I go to other questions in the room, I just want really quick responses from the panel. Leanda mentioned the Children and AI Design Code, which is a tool to help companies think about how to build AI in a way that supports children’s rights and well-being. What do you think are the research gaps? We’ve got tools like this; what is, to your mind, the one outstanding research gap that needs to be addressed before we can really be confident that there is a child-centric approach to AI development? Just a quick question. Maybe reflect on that as we take some other questions, and then I’ll come back later, because I do want to think about the research gaps and a path forward to really understanding how to do this responsibly. So let’s take a question from this gentleman here.


Joon Baek: Hello, my name is Joon Baek, from Youth for Privacy. We are a youth NGO focused on digital privacy. I want to ask about children’s rights in AI. At least in the context of privacy, there has been legislation where, under the aim of protecting children’s data or safeguarding children online, concerns have been raised that those kinds of laws create privacy issues of their own. I was wondering whether, under the aim of protecting children when it comes to AI, there could be other kinds of rights that are put in question or violated. Is there anything we should be aware of?


Adam Ingle: So you’re talking about the trade-off between protecting children’s rights and some other issues that might be developing. Yeah. Stephen? Maria?


Stephen Balkam: Pretty much. You know, I went back to 1995; I mean, we’ve been struggling with the dichotomy between safety and privacy since the beginning of the web. In other words, the more safe you are, perhaps the more you’re giving up in terms of private information; or the more private you are, maybe you’re not as safe as you could be. So trying to find a way that balances both has been at the core, certainly, of the work of my organization, but of many others too, and it is extremely hard for lawmakers to get that balance right. And then if you come from the U.S., you have this other axis, which is called free expression, which adds another layer of complexity, because you want people to be private, you want people and kids to be safe, but you also want free expression; one of the five rights, by the way, is the right to say what you want to say. So it’s something which I don’t think we’ll ever completely get right, and we’re going to constantly have to compromise. But I don’t think it’s beyond our ability to reach those compromises.


Adam Ingle: Just noting time, I might move on to this gentleman here.


Participant: Hi, my name is Ryan, I’m 17 years old, and I’m a youth ambassador of the OnePAL Foundation in Hong Kong, where we advocate for digital sustainability and access. Thank you for the wonderful presentations. My question is: AI for people with learning disabilities was raised as a significant prospect of AI by children aged 8 to 12, so how can generative AI be further leveraged for the support and inclusion of people with disabilities? Thank you.


Adam Ingle: Thank you. And I’m just wondering, drawing from your research, Mhairi, if you want to elaborate.


Dr. Mhairi Aitken: Yeah, it’s come out really strongly from all the work we’ve done engaging children and young people that this is an area where they’re really excited about the potential, and they want to see AI developed in ways that will support children with additional learning needs, additional support needs and disabilities. And I guess what’s important, particularly in the education context of supporting children with additional learning needs, is that there’s huge promise here, and teachers and children in our study recognise that. But I think one of the challenges, or current limitations, is that there are a lot of edtech tools being pushed and promoted that are not necessarily beginning with a sound understanding of the challenges they’re seeking to address or of the needs of children with additional learning needs. I think we need to start developing these technologies from that place: if we want to develop something to support children with additional learning needs, it has to be grounded in a sound understanding of what those needs are and what the challenges are. And then maybe generative AI provides a solution, but not always, not necessarily. We have to start with identifying the problems and challenges and develop those tools responsibly to effectively address those challenges. That requires having expertise from teachers, from children, and from specialists in these areas to guide the development of those tools and technologies. But it’s definitely an area where there’s huge promise and where AI could be used really effectively and really valuably.


Adam Ingle: Thank you. Great to have a youth representative at the IGF. I mean, my gosh, I was probably playing unsafe video games when I was 17, rather than going to international forums, so incredibly impressive. Lisa, you’ve got a question from online.


Co-Moderator: Thompson from CAIDP, who’s asking, how is UNICRI, thank you, and the other entities represented in the panels working with national government officials on capacity building to school principals, counseling teams, and the entire ecosystem to prepare adults in protecting our children and adolescents?


Adam Ingle: Maria, I think that’s one for you.


Maria Eira: Yeah, sure. Thank you for your question, Grace. As I was showing before, we are developing AI literacy resources for parents, which we will try to disseminate as much as possible; these are basically recommendations for parents to guide their children on the use of this technology. So this is one thing. Then we are also trying to work with governments, and particularly with judges and law enforcement, to promote AI literacy. We do a lot of capacity building for law enforcement officers worldwide to explain what AI is and how to use it in a responsible way, and we have guidelines developed with Interpol; this is more on the law enforcement side. We would also love to extend this to other government representatives and try to implement AI literacy workshops and programs in schools. We have started with a workshop in a school in the Netherlands, which was also used to collect adolescents’ perspectives, but it had a component explaining what AI is, what the risks and benefits are, and some best practices for using it well. We would love to scale this up, and we are right now in conversations with the Netherlands and with other countries to understand whether we can really develop a full program that can be implemented in schools. But everything is still being developed; the technology is really recent, everyone is trying to be prepared for this, and, yeah, we are still working on that.


Adam Ingle: Thanks, Maria. We’ll take one final question from the room, and then I will do a quick lightning round among the panellists to answer two things: what’s one research area we still need to explore to move towards child-centric AI, and what’s one thing companies can do right now to make AI more appropriate for children? Quick answers to those two questions. But please, the lady here.


Participant: Hello, my name is Elisa. I’m also from the OnePAL Foundation, just like Ryan. I see a big issue in children communicating with AI about their personal issues, as children are in a much more vulnerable situation and position, and AI is the bigger person in that conversation. So my question is, how can we design AI so that it doesn’t increase that power imbalance between the child and the all-knowing AI? I didn’t quite get the end of that question. Sorry, just repeat your question. My question is, how can we design AI so that the independence of the child is increased and there is no power imbalance between the child and the AI? You want to try that?


Dr. Mhairi Aitken: Yeah, I think in all these interactions, one thing that’s absolutely crucial is transparency around the nature of the AI system, and also around how data might be collected through those interactions, potentially used to train future models, or collected by the organization or company developing and owning those models. And if I can tie this into your question about what’s needed, because I think it is actually related, it’s that kind of critical AI literacy. We hear a lot about the importance of AI literacy and increasing understanding of AI, but what I think is really important is that critical literacy: improving understanding not just of how these systems work technically, but of the business models behind them, how they affect children’s rights, and the impact that those systems have. So I think that’s where we need more research, but it’s also what’s needed to enable children to make informed choices about how they use those systems.


Adam Ingle: Love that you tied your answer into both questions; that’s already saved us a lot of time. Stephen, 15 seconds. What she said. That’s easy. Maria, one thing we can do in research or one thing companies can do right now?


Maria Eira: Yeah, so in research we are still trying to understand the long-term impact of this technology. We still don’t know, and the literature also reflects this: we have very contradictory results, with some papers saying that AI can improve critical thinking and others saying that AI can actually decrease it. I think we are still in a period where we are trying to understand exactly what the long-term impact of this technology will be. And as for what companies should do, I think the girl in the video at the beginning said it all: the goal cannot be the profits, it must be the people. So if companies really focus on children when developing these tools, targeting them and having children in mind, we can actually develop good tools for everyone.


Adam Ingle: Thanks, Maria. The goal should not be the profit, it should be the people; I think that is a great lesson coming out of this session. That’s all we have time for. Thank you so much for joining us in the room and online. Please, if you’ve got any more questions, feel free to approach Stephen and Mhairi or get in contact with Maria. Thank you to all the young people who engaged with this session, and thank you from the LEGO Group as well. So we’ll end it there and we’ll see you soon. Bye. Thank you.



Online Participants

Speech speed

149 words per minute

Speech length

412 words

Speech time

165 seconds

Young people view AI as advantageous when used correctly but potentially devastating when misused

Explanation

Young people recognize AI as a powerful tool that can provide significant benefits when properly utilized, but they also acknowledge its potential for causing serious harm when misapplied. They emphasize the importance of viewing AI as a tool to aid humans rather than replace them.


Evidence

Students stated ‘AI is extremely advantageous when used correctly. But when misused, it can have devastating effects on humans. That’s why we must view AI as a tool to aid us, not to replace us.’


Major discussion point

Children’s Current Use and Understanding of AI


Topics

Human rights | Sociocultural


AI should be taught in schools rather than banned, with focus on critical thinking and fact-checking skills

Explanation

Young people argue that instead of prohibiting AI use in educational settings, schools should integrate AI education that emphasizes essential skills like critical thinking, fact-checking, and responsible usage. They believe students need preparation for an AI-integrated world while learning not to become overly dependent on the technology.


Evidence

Students noted ‘Right now, students are memorising facts by adaptation, while AI is outpacing that system entirely. Rather than banning AI in schools, we should teach students how to use it efficiently. Skills like fact-checking, critical thinking, and quantum engineering aren’t optional anymore, they’re essential.’


Major discussion point

Educational Impact and Equity Issues


Topics

Sociocultural | Human rights


Disagreed with

– Dr. Mhairi Aitken

Disagreed on

Approach to AI literacy and education


Privacy is a basic right, not a luxury, and AI data collection must be protected

Explanation

Young people emphasize that privacy should be considered a fundamental right rather than an optional benefit. They express concern about the valuable data that AI systems collect and the potential for this data to be misused to harm the very people it’s supposed to help.


Evidence

Students stated ‘Privacy is not a luxury, it’s a basic right. The data that AI collects is valuable, and if it’s not protected, it can be used to hurt the very people it’s supposed to help.’


Major discussion point

Privacy, Data Protection and Transparency


Topics

Human rights | Legal and regulatory


Agreed with

– Stephen Balkam
– Dr. Mhairi Aitken

Agreed on

Data transparency and privacy protection are fundamental concerns for AI systems used by children


AI training requires massive resources including thousands of liters of water and extensive GPU usage

Explanation

Young people demonstrate awareness of the significant environmental costs associated with training AI models. They highlight the substantial resource consumption required for AI development, including water usage and computational power.


Evidence

Students noted ‘LLMs include thousands of litres of water during training, and GPT-3 require over 10,000 GPUs over 15 days. Hundreds of LLMs are being developed, and their environmental impact is immense.’


Major discussion point

Environmental and Ethical Concerns


Topics

Development | Sociocultural


Agreed with

– Dr. Mhairi Aitken

Agreed on

Environmental impacts of AI are significant concerns that influence children’s usage decisions


Young people must be part of AI conversations as they are affected now, not just in the future

Explanation

Young people assert their right to participate in current AI discussions and decision-making processes. They reject the notion that they are only stakeholders for the future, emphasizing that AI impacts their lives today and their voices should matter in shaping the technology.


Evidence

Students stated ‘Young people like me must be part of this conversation. We aren’t just the future, we’re here now. Our voices, our experiences, and our hopes must matter in shaping this technology.’


Major discussion point

Youth Participation and Rights


Topics

Human rights | Sociocultural


Agreed with

– Dr. Mhairi Aitken
– Adam Ingle

Agreed on

Children must be meaningfully included in AI decision-making processes


Adults should listen to children more because they have valuable ideas about AI development

Explanation

Young people advocate for greater inclusion of children’s perspectives in AI development discussions. They believe that children possess valuable insights and ideas that should be considered alongside adult viewpoints when making decisions about AI technology.


Evidence

Students said ‘I think adults should listen to children more because children have lots of good ideas, as well as adults, with AI.’


Major discussion point

Youth Participation and Rights


Topics

Human rights | Sociocultural



Dr. Mhairi Aitken

Speech speed

196 words per minute

Speech length

2780 words

Speech time

847 seconds

Around 22% of children aged 8-12 report using generative AI, with three out of five teachers using it in their work

Explanation

Research findings show that a significant portion of young children are already engaging with generative AI technologies, while the majority of teachers are incorporating these tools into their professional practice. This indicates widespread adoption across educational settings.


Evidence

National survey of around 800 children between ages 8-12, their parents and carers, and 1000 teachers across the UK


Major discussion point

Children’s Current Use and Understanding of AI


Topics

Sociocultural | Human rights


Children are the group most impacted by AI advances but least represented in decision-making about AI development

Explanation

There is a fundamental disconnect between who is most affected by AI technology and who has input into its development. Children, despite being the demographic that will experience the greatest long-term impact from AI advances, have minimal representation in the decision-making processes that shape these technologies.


Evidence

Four years of research projects at the Alan Turing Institute’s children and AI team, including collaborations with UNICEF, the Council of Europe, the Scottish AI Alliance and Children’s Parliament


Major discussion point

Youth Participation and Rights


Topics

Human rights | Sociocultural


Agreed with

– Online Participants
– Adam Ingle

Agreed on

Children must be meaningfully included in AI decision-making processes


Stark differences exist between AI use in private schools versus state-funded schools, pointing to equity issues

Explanation

Research reveals significant disparities in AI access and education between different types of schools. Children in private schools are much more likely to use generative AI and have better understanding of these technologies, creating potential inequalities in access to AI benefits.


Evidence

UK-based research showing children in private schools much more likely to both use generative AI and report having information and understanding about generative AI


Major discussion point

Educational Impact and Equity Issues


Topics

Development | Human rights | Sociocultural


The burden should be on developers and policymakers to make systems safe rather than expecting children to police their interactions

Explanation

Rather than placing responsibility on children to navigate AI systems safely, the primary obligation should rest with those who create and regulate these technologies. Children interact with AI systems differently than adults and often in ways not anticipated by developers.


Evidence

Recognition that children interact with AI systems differently from adults and often differently from how designers or developers anticipate those tools might be used


Major discussion point

AI Design and Child Safety Concerns


Topics

Human rights | Legal and regulatory


Agreed with

– Stephen Balkam
– Adam Ingle

Agreed on

AI systems are not designed with children in mind and require child-centric development from the start


Children with additional learning needs show particular interest in using AI for communication and support

Explanation

Research indicates that children with additional support needs or learning disabilities are more likely to utilize generative AI for communication purposes and connection. There is significant interest from both children and teachers in leveraging AI to support children with additional learning needs.


Evidence

Survey findings showing children with additional learning needs more likely to report using generative AI for communication and connection, plus teacher interest in using AI to support these children


Major discussion point

Educational Impact and Equity Issues


Topics

Human rights | Sociocultural


Agreed with

– Online Participants

Agreed on

AI has significant potential to support children with additional learning needs and disabilities


AI models consistently produce biased outputs, predominantly showing white and male figures

Explanation

When children used generative AI tools to create images of people, the systems defaulted to producing images of white, predominantly male individuals. This consistent bias in AI outputs was identified and caused concern among the children using these tools.


Evidence

Six full-day workshops in Scottish schools using OpenAI’s ChatGPT and DALL-E, where each time children wanted an image of a person, it would by default create an image of a person that was white and predominantly male


Major discussion point

Bias and Representation Issues


Topics

Human rights | Sociocultural


Children of color become upset and choose not to use AI when they don’t feel represented in outputs

Explanation

When AI systems fail to represent children of color in their outputs, these children experience emotional distress and subsequently choose to avoid using the technology. This lack of representation not only impacts individual children but also affects broader adoption patterns of AI tools.


Evidence

Observations from workshops showing children of colour becoming very upset when not represented, and in many cases choosing not to use generative AI in the future


Major discussion point

Bias and Representation Issues


Topics

Human rights | Sociocultural


Children who learn about environmental impacts of AI models often choose not to use them

Explanation

When children gain awareness of the environmental costs associated with generative AI models, including water consumption and carbon footprint, they frequently make the conscious decision to avoid using these technologies. This pattern has been consistent across multiple research engagements with children and young people.


Evidence

Consistent findings across all work engaging children and young people, where children with awareness of environmental impacts, particularly water consumption and carbon footprint of generative AI models, chose not to use those models


Major discussion point

Environmental and Ethical Concerns


Topics

Development | Human rights


Agreed with

– Online Participants

Agreed on

Environmental impacts of AI are significant concerns that influence children’s usage decisions


AI companions marketed to children raise concerns about dependence and isolation from real community

Explanation

The growing market of AI companions specifically targeted at children presents risks of creating unhealthy dependencies and potentially exacerbating social isolation. While these tools are often marketed as solutions to loneliness, they may actually increase disconnection from real human relationships and community engagement.


Evidence

Growing research on AI companions marketed as addressing challenges of loneliness but potentially creating dependence or connection outside of society and community


Major discussion point

AI Companions and Emotional Attachment


Topics

Human rights | Sociocultural


Transparency about AI system nature and data collection is crucial for child interactions

Explanation

For children to safely interact with AI systems, it is essential that they understand what they are interacting with and how their data might be collected or used. This transparency should include information about the AI system’s capabilities, limitations, and data practices.


Major discussion point

Privacy, Data Protection and Transparency


Topics

Human rights | Legal and regulatory


Agreed with

– Online Participants
– Stephen Balkam

Agreed on

Data transparency and privacy protection are fundamental concerns for AI systems used by children


Critical AI literacy focusing on business models and rights impacts is needed beyond technical understanding

Explanation

While technical AI literacy is important, children need a deeper understanding that includes the business models behind AI systems and how these technologies affect their rights. This critical approach goes beyond just understanding how AI works to understanding why it works the way it does and who benefits.


Major discussion point

Research Gaps and Future Needs


Topics

Human rights | Sociocultural


Disagreed with

– Online Participants

Disagreed on

Approach to AI literacy and education


More research is needed on AI’s role in supporting children with disabilities while ensuring proper understanding of their needs

Explanation

While there is significant promise for AI to support children with additional learning needs and disabilities, current development often lacks proper understanding of the specific challenges and needs these technologies should address. Research and development must be grounded in expertise from teachers, children, and specialists in these areas.


Evidence

Recognition that many edtech tools are being pushed without sound understanding of challenges they seek to address or needs of children with additional learning needs


Major discussion point

Research Gaps and Future Needs


Topics

Human rights | Development | Sociocultural


Agreed with

– Online Participants

Agreed on

AI has significant potential to support children with additional learning needs and disabilities


Designing AI well for children benefits other vulnerable users and wider user groups

Explanation

When AI systems are properly designed with children’s needs and rights in mind, the benefits extend beyond just children to other vulnerable populations and the general user base. Child-centric design principles create better, more inclusive AI systems overall.


Major discussion point

Regulatory and Policy Approaches


Topics

Human rights | Legal and regulatory



Maria Eira

Speech speed

134 words per minute

Speech length

1688 words

Speech time

750 seconds

Parents who regularly use generative AI feel more positive about its impact on their children’s development

Explanation

Research shows a clear correlation between parents’ familiarity with generative AI technology and their attitudes toward its impact on their children. Parents who use AI regularly view it more positively across multiple areas including critical thinking, career development, and social work, while unfamiliar parents tend to be negative about AI’s impact.


Evidence

Worldwide survey from 19 countries showing regular users (yellow bars) feel much more positive about AI’s impact on critical thinking, career, social work, and general child development compared to unfamiliar parents (blue bars) who were negative in all fields


Major discussion point

Children’s Current Use and Understanding of AI


Topics

Human rights | Sociocultural


There is a lack of awareness from parents and low communication between parents and children about AI use

Explanation

Research reveals significant gaps in parental understanding of how their adolescent children use generative AI, particularly for personal purposes. While parents are aware of academic uses, they often don’t know or disagree that their children use AI for more personal matters like companionship or health advice.


Evidence

Survey targeting parents of adolescents aged 13-17 showing over 80% of parents aware of AI use for information search and school assignments, but for personal uses like AI companions or health advice, most popular responses were ‘I disagree’ or ‘I don’t know’


Major discussion point

AI Design and Child Safety Concerns


Topics

Human rights | Sociocultural


Company goals should focus on people rather than profits when developing AI tools for children

Explanation

When developing AI technologies for children, companies should prioritize human welfare and child wellbeing over financial gains. This principle emphasizes the need for ethical development practices that put children’s needs and safety first.


Evidence

Reference to student comment from opening video: ‘the goal cannot be the profits, it must be the people’


Major discussion point

Environmental and Ethical Concerns


Topics

Human rights | Economic


Long-term impacts of AI technology on children remain unclear with contradictory research results

Explanation

Current research on AI’s effects on children shows conflicting findings, making it difficult to draw definitive conclusions about long-term impacts. Some studies suggest AI can improve critical thinking while others indicate it may decrease these skills, highlighting the need for more comprehensive research.


Evidence

Literature review showing contradictory results with some papers saying AI can improve critical thinking while others say AI can decrease critical thinking


Major discussion point

Research Gaps and Future Needs


Topics

Human rights | Sociocultural


Children should have separate AI legislation because they cannot give the same consent as adults

Explanation

Children require distinct legal protections regarding AI because they lack the same capacity for informed consent as adults. Several principles applicable to adults cannot be directly applied to children, necessitating specialized legislation that considers children’s unique vulnerabilities and developmental needs.


Evidence

Recognition that children don’t have the same awareness of consent and several principles cannot be fully applicable from adults to children


Major discussion point

Regulatory and Policy Approaches


Topics

Human rights | Legal and regulatory



Stephen Balkam

Speech speed

139 words per minute

Speech length

2192 words

Speech time

941 seconds

Teens thought their parents knew more about generative AI than they did, contrary to previous technology trends

Explanation

Unlike previous technological developments where children typically led adoption, research found that teenagers believed their parents had better understanding of generative AI. This reversal occurred because many parents were learning AI tools for work purposes or to stay relevant in their careers.


Evidence

2023 three-country study (US, Germany, Japan) with parents and teens, showing a sizable share of teens in all three countries reported that their parents had a better understanding, with parents struggling to use gen AI at work


Major discussion point

Children’s Current Use and Understanding of AI


Topics

Sociocultural | Human rights


AI systems are not designed with children in mind, requiring retrofitting for safety like previous web technologies

Explanation

The development of AI technology is repeating the same pattern as previous internet technologies, where systems are created without considering children’s needs and safety, then require after-the-fact modifications. This pattern occurred with Web 1.0 in the mid-90s and Web 2.0 around 2005-2006, and is now happening again with AI.


Evidence

Historical examples of World Wide Web not designed with kids in mind requiring retrofitted parental controls, and social media sites like Myspace and Facebook expanding from colleges to elementary schools without child-focused design


Major discussion point

AI Design and Child Safety Concerns


Topics

Human rights | Cybersecurity


Agreed with

– Dr. Mhairi Aitken
– Adam Ingle

Agreed on

AI systems are not designed with children in mind and require child-centric development from the start


Students are increasingly using Gen AI to do their work rather than just proofread it, potentially impacting critical thinking development

Explanation

There has been a concerning shift in how students use generative AI, moving from using it as a tool for proofreading and summarizing to having it complete entire assignments. This trend raises concerns about students not developing essential critical thinking skills.


Evidence

Comparison between initial study findings where teens used AI for ‘proofreading and summarizing long texts’ versus current observations of ‘teens and young people increasingly using Gen AI to do their work for them, their essays, their homework’


Major discussion point

Educational Impact and Equity Issues


Topics

Sociocultural | Human rights


Data transparency is top priority for parents and teens regarding AI companies

Explanation

Research shows that both parents and teenagers prioritize understanding how AI companies collect, use, and source their data. They want companies to be more forthcoming about data practices and to provide clear explanations about how AI systems work and whether the information can be trusted.


Evidence

Survey results showing ‘transparency of data practices’ as top of list for what parents and teens want to learn, and ‘steps to reveal what’s behind Gen AI and how data is sourced and whether it can be trusted’ as key element


Major discussion point

Privacy, Data Protection and Transparency


Topics

Human rights | Legal and regulatory


Agreed with

– Online Participants
– Dr. Mhairi Aitken

Agreed on

Data transparency and privacy protection are fundamental concerns for AI systems used by children


There’s an ongoing struggle to balance safety and privacy, with more safety potentially requiring less privacy

Explanation

The relationship between online safety and privacy creates a persistent dilemma where increasing one often means decreasing the other. This challenge has existed since the beginning of the web and becomes more complex when adding considerations like free expression rights.


Evidence

Reference to struggling with ‘the dichotomy between safety and privacy since the beginning of the web’ since 1995, with additional complexity from free expression rights in the US context


Major discussion point

Privacy, Data Protection and Transparency


Topics

Human rights | Legal and regulatory


Disagreed with

– Joon Baek

Disagreed on

Balance between safety and privacy in AI regulation


Young children are forming emotional attachments to chatbots and using AI for therapy-like conversations

Explanation

There is growing concern about children, teens, and young adults developing emotional dependencies on AI chatbots, using them for extended therapeutic conversations. While these interactions can feel positive and self-reinforcing, they lack the human elements essential for proper mental health support.


Evidence

Anecdotal observations of ‘kids, teens, young adults and adults using AI for therapy, literally talking through deep emotional issues for hours at a time’ with responses from ChatGPT and others


Major discussion point

AI Companions and Emotional Attachment


Topics

Human rights | Sociocultural



Leanda Barrington‑Leach

Speech speed

173 words per minute

Speech length

181 words

Speech time

62 seconds

There are existing regulatory and technical tools like the Children and AI Design Code to implement child-centric AI development

Explanation

Regulatory and technical solutions already exist to address the need for child-focused AI development. The Children and AI Design Code represents a collaborative effort between AI experts, children’s rights experts, and other stakeholders to create practical protocols for innovation that prioritizes children’s rights.


Evidence

Reference to the Children and AI Design Code as work that ‘brought AI experts and children’s rights experts and many others together over a very long period of time to develop a technical protocol for innovation that puts children’s rights at the center’


Major discussion point

Regulatory and Policy Approaches


Topics

Human rights | Legal and regulatory



Adam Ingle

Speech speed

169 words per minute

Speech length

1180 words

Speech time

418 seconds

The workshop aims to elevate children’s voices in AI design without being patronizing to their views

Explanation

The session is specifically designed to ensure children are part of decision-making processes regarding AI development. The approach emphasizes treating young people’s perspectives with respect and incorporating their real ideas about the future of AI rather than dismissing them as less valuable than adult opinions.


Evidence

Workshop called ‘Elevating Children’s Voices in AI Design’ with participation from young people sharing experiences and hopes, including video messages and panel participation


Major discussion point

Youth Participation and Rights


Topics

Human rights | Sociocultural


Agreed with

– Online Participants
– Dr. Mhairi Aitken

Agreed on

Children must be meaningfully included in AI decision-making processes


Children are already using AI and the question is whether children are ready for AI or AI is ready for children

Explanation

This fundamental question addresses the current reality that children are actively engaging with AI technologies across multiple contexts and purposes. The framing suggests examining whether the responsibility lies with preparing children for AI or ensuring AI systems are appropriately designed for children.


Evidence

Research findings showing kids are already using AI across multiple different contexts for multiple different purposes


Major discussion point

AI Design and Child Safety Concerns


Topics

Human rights | Sociocultural


Agreed with

– Dr. Mhairi Aitken
– Stephen Balkam

Agreed on

AI systems are not designed with children in mind and require child-centric development from the start



Mariana Rozo‑Paz

Speech speed

161 words per minute

Speech length

323 words

Speech time

119 seconds

AI agents as influencers are directly affecting children’s real-life relationships and experiences

Explanation

The emergence of AI agents functioning as influencers presents new challenges beyond traditional human influencers or children becoming influencers themselves. These AI agents are not just affecting children’s digital lives but are having concrete impacts on their real-world relationships and social interactions.


Evidence

DataSphere Initiative youth project research focusing on influencers, including AI agents as influencers in digital spaces affecting children’s concrete lives and relationships


Major discussion point

AI Companions and Emotional Attachment


Topics

Human rights | Sociocultural


There are concerning trends in children being turned into influencers by their parents with mind-blowing statistics

Explanation

Research reveals troubling patterns where parents are converting their children into influencers, raising ethical concerns about consent, exploitation, and the commercialization of childhood. The scale of this phenomenon appears to be significant based on emerging data.


Evidence

DataSphere Initiative research on children being turned into influencers by parents with ‘mind-blowing stats’


Major discussion point

Youth Participation and Rights


Topics

Human rights | Economic



Joon Baek

Speech speed

173 words per minute

Speech length

124 words

Speech time

42 seconds

Privacy protection laws aimed at safeguarding children may inadvertently violate other rights

Explanation

There is concern that legislation designed to protect children’s data and ensure their online safety might create unintended consequences that compromise other fundamental rights. This highlights the complex balance required when creating protective measures for children in the AI context.


Evidence

Experience from Youth for Privacy NGO observing privacy issues in legislation aimed at protecting children’s data and safeguarding children online


Major discussion point

Privacy, Data Protection and Transparency


Topics

Human rights | Legal and regulatory


Disagreed with

– Stephen Balkam

Disagreed on

Balance between safety and privacy in AI regulation



Participant

Speech speed

150 words per minute

Speech length

203 words

Speech time

80 seconds

AI creates a power imbalance between children and AI systems that needs to be addressed through design

Explanation

Children are in a vulnerable position when communicating with AI about personal issues, as the AI appears to be the ‘bigger person’ or authority in the conversation. Design approaches should focus on increasing children’s independence and reducing this inherent power imbalance rather than reinforcing it.


Evidence

Recognition that children are in a more vulnerable situation and position when AI is the bigger person in conversations about personal issues


Major discussion point

AI Design and Child Safety Concerns


Topics

Human rights | Sociocultural



Co-Moderator

Speech speed

127 words per minute

Speech length

107 words

Speech time

50 seconds

There should be separate AI ethics and legislation specifically targeting children rather than applying general frameworks

Explanation

The question of whether AI ethics for children should be distinct from general AI ethics reflects recognition that children have unique needs, vulnerabilities, and rights that may not be adequately addressed by general AI governance frameworks. This suggests the need for specialized approaches to AI regulation and policy for children.


Evidence

Question from law student studying AI law specifically about separating children’s AI ethics from general AI ethics and state-level legislation for AI systems targeting children


Major discussion point

Regulatory and Policy Approaches


Topics

Human rights | Legal and regulatory


Agreements

Agreement points

AI systems are not designed with children in mind and require child-centric development from the start

Speakers

– Dr. Mhairi Aitken
– Stephen Balkam
– Adam Ingle

Arguments

The burden should be on developers and policymakers to make systems safe rather than expecting children to police their interactions


AI systems are not designed with children in mind, requiring retrofitting for safety like previous web technologies


Children are already using AI and the question is whether children are ready for AI or AI is ready for children


Summary

All speakers agree that current AI systems are developed without considering children’s needs and safety, repeating historical patterns from previous web technologies. They emphasize that responsibility should lie with developers and policymakers rather than children themselves.


Topics

Human rights | Legal and regulatory


Children must be meaningfully included in AI decision-making processes

Speakers

– Online Participants
– Dr. Mhairi Aitken
– Adam Ingle

Arguments

Young people must be part of AI conversations as they are affected now, not just in the future


Children are the group most impacted by AI advances but least represented in decision-making about AI development


The workshop aims to elevate children’s voices in AI design without being patronizing to their views


Summary

There is strong consensus that children should have meaningful participation in AI governance and development decisions, as they are currently affected by these technologies and have valuable perspectives to contribute.


Topics

Human rights | Sociocultural


Data transparency and privacy protection are fundamental concerns for AI systems used by children

Speakers

– Online Participants
– Stephen Balkam
– Dr. Mhairi Aitken

Arguments

Privacy is a basic right, not a luxury, and AI data collection must be protected


Data transparency is top priority for parents and teens regarding AI companies


Transparency about AI system nature and data collection is crucial for child interactions


Summary

All speakers emphasize that transparency about data practices and privacy protection are essential requirements for AI systems that children use, viewing privacy as a fundamental right rather than optional feature.


Topics

Human rights | Legal and regulatory


AI has significant potential to support children with additional learning needs and disabilities

Speakers

– Dr. Mhairi Aitken
– Online Participants

Arguments

Children with additional learning needs show particular interest in using AI for communication and support


More research is needed on AI’s role in supporting children with disabilities while ensuring proper understanding of their needs


Summary

There is agreement that AI shows promise for supporting children with additional learning needs, though this must be developed with proper understanding of their specific requirements and challenges.


Topics

Human rights | Sociocultural


Environmental impacts of AI are significant concerns that influence children’s usage decisions

Speakers

– Online Participants
– Dr. Mhairi Aitken

Arguments

AI training requires massive resources including thousands of liters of water and extensive GPU usage


Children who learn about environmental impacts of AI models often choose not to use them


Summary

Both young people and researchers recognize the substantial environmental costs of AI development and note that awareness of these impacts influences children’s decisions about using AI technologies.


Topics

Development | Human rights


Similar viewpoints

Both speakers advocate for distinct legal and ethical frameworks for children’s AI use, recognizing that children have unique vulnerabilities and cannot provide the same informed consent as adults.

Speakers

– Maria Eira
– Co-Moderator

Arguments

Children should have separate AI legislation because they cannot give the same consent as adults


There should be separate AI ethics and legislation specifically targeting children rather than applying general frameworks


Topics

Human rights | Legal and regulatory


Both experts express concern about children developing unhealthy emotional dependencies on AI systems, particularly AI companions and chatbots used for personal or therapeutic purposes.

Speakers

– Stephen Balkam
– Dr. Mhairi Aitken

Arguments

Young children are forming emotional attachments to chatbots and using AI for therapy-like conversations


AI companions marketed to children raise concerns about dependence and isolation from real community


Topics

Human rights | Sociocultural


Both researchers emphasize the need for deeper understanding of AI’s impacts on children, going beyond technical literacy to include critical analysis of business models and rights implications.

Speakers

– Dr. Mhairi Aitken
– Maria Eira

Arguments

Critical AI literacy focusing on business models and rights impacts is needed beyond technical understanding


Long-term impacts of AI technology on children remain unclear with contradictory research results


Topics

Human rights | Sociocultural


Unexpected consensus

Parents’ superior knowledge of AI compared to children

Speakers

– Stephen Balkam
– Maria Eira

Arguments

Teens thought their parents knew more about generative AI than they did, contrary to previous technology trends


Parents who regularly use generative AI feel more positive about its impact on their children’s development


Explanation

This finding is unexpected because historically children have led technology adoption. The reversal occurred because parents were learning AI for work purposes, creating an unusual dynamic where parents had more AI knowledge than their children for the first time in digital technology evolution.


Topics

Sociocultural | Human rights


Children’s preference for traditional materials over AI tools in creative activities

Speakers

– Dr. Mhairi Aitken

Arguments

Children chose traditional, tactile, hands-on art materials over generative AI tools, feeling that their own ‘art is actually real’, while they could not say the same of AI art ‘because the computer did it, not them’


Explanation

Despite children’s general interest in AI, when given the choice between AI and traditional creative tools, they overwhelmingly chose traditional methods. This unexpected preference reveals important insights about children’s values regarding authenticity and personal agency in creative expression.


Topics

Human rights | Sociocultural


Equity concerns creating barriers to AI adoption in education

Speakers

– Dr. Mhairi Aitken

Arguments

Stark differences exist between AI use in private schools versus state-funded schools, pointing to equity issues


Explanation

The emergence of AI creating new forms of educational inequality was unexpected, as it suggests that AI could exacerbate existing disparities rather than democratize access to educational tools. This finding highlights how technological advancement can inadvertently increase rather than reduce educational inequities.


Topics

Development | Human rights | Sociocultural


Overall assessment

Summary

There is strong consensus among speakers on fundamental principles: AI systems need child-centric design from the start, children must be included in AI governance decisions, privacy and transparency are essential rights, and AI shows promise for supporting children with additional needs while requiring careful attention to environmental impacts and bias issues.


Consensus level

High level of consensus on core principles with implications for urgent need for coordinated action across policy, industry, and research domains. The agreement suggests a clear path forward requiring collaboration between technologists, policymakers, educators, and children themselves to ensure AI development serves children’s best interests and rights.


Differences

Different viewpoints

Balance between safety and privacy in AI regulation

Speakers

– Stephen Balkam
– Joon Baek

Arguments

There’s an ongoing struggle to balance safety and privacy, with more safety potentially requiring less privacy


Privacy protection laws aimed at safeguarding children may inadvertently violate other rights


Summary

Stephen Balkam presents this as an inevitable trade-off that requires compromise, while Joon Baek raises concerns about unintended rights violations from protective measures, suggesting a more cautious approach to safety-focused legislation


Topics

Human rights | Legal and regulatory


Approach to AI literacy and education

Speakers

– Online Participants
– Dr. Mhairi Aitken

Arguments

AI should be taught in schools rather than banned, with focus on critical thinking and fact-checking skills


Critical AI literacy focusing on business models and rights impacts is needed beyond technical understanding


Summary

Young people emphasize practical skills like fact-checking and efficient AI use in schools, while Dr. Aitken advocates for deeper critical literacy that includes understanding business models and rights impacts


Topics

Human rights | Sociocultural


Unexpected differences

Children’s preference for traditional materials over AI tools

Speakers

– Online Participants
– Dr. Mhairi Aitken

Arguments

AI should be taught in schools rather than banned, with focus on critical thinking and fact-checking skills


Children who learn about environmental impacts of AI models often choose not to use them


Explanation

While young people in the video advocated for AI integration in education, research findings showed children often chose traditional tactile materials over AI tools and avoided AI after learning about its environmental impacts. This reveals a gap between advocacy for AI education and actual usage preferences


Topics

Human rights | Sociocultural | Development


Overall assessment

Summary

The discussion showed remarkable consensus on core principles – that children need protection, representation, and age-appropriate AI design – but revealed nuanced differences in implementation approaches and priorities


Disagreement level

Low to moderate disagreement level with high consensus on fundamental goals. The main tensions were methodological rather than philosophical, focusing on how to achieve shared objectives rather than disagreeing on the objectives themselves. This suggests a mature field where stakeholders agree on problems but are still developing optimal solutions


Takeaways

Key takeaways

Children are already using AI extensively (22% of 8-12 year olds) but AI systems are not designed with children in mind, requiring urgent action to prioritize child-centric development


There is a significant communication gap between parents and children about AI use, particularly for personal applications, with parents who use AI themselves being more positive about its impact


AI literacy education focusing on critical thinking, fact-checking, and understanding business models behind AI systems is essential and should be integrated into schools rather than banning AI


Significant equity issues exist in AI access and education, with stark differences between private and state-funded schools creating potential digital divides


Children show strong concerns about bias and representation in AI outputs, environmental impacts, and inappropriate content, often choosing not to use AI when these issues are present


AI shows particular promise for supporting children with additional learning needs, but development must be grounded in understanding actual needs rather than pushing technology solutions


The burden of ensuring AI safety should be on developers, policymakers, and regulators rather than expecting children to police their own interactions


Children have a fundamental right to participate in AI decision-making processes that affect their lives, as they are the most impacted group but least represented in development decisions


Resolutions and action items

UNICRI and Disney are launching AI literacy resources (3D animation movie for adolescents and parent guide) at the AI for Good Summit in two weeks


Technology companies should provide transparent explanations of AI decision-making, algorithm recommendations, and system limitations


Industry should fund research and programs to help children develop AI literacy and content discernment skills


AI tools should be designed with children in mind from the start, not as an afterthought, learning from past mistakes with web technologies


Companies should focus on people rather than profits when developing AI tools for children


Separate legislation specifically targeting children’s AI rights and protections should be developed, recognizing children’s unique consent and awareness limitations


Unresolved issues

Long-term impacts of AI technology on children remain unclear with contradictory research results on effects like critical thinking development


How to effectively balance safety and privacy rights in AI systems for children without compromising either


Addressing the environmental impact of AI models and providing transparent information about resource consumption to users


Developing age-appropriate AI companions that support mental health without creating dependency or isolation from real communities


Scaling AI literacy programs globally and implementing them effectively in school systems across different countries


Addressing the power imbalance between children and AI systems in personal conversations and interactions


How to ensure AI systems designed for children with disabilities are grounded in actual needs rather than technology-first approaches


Preventing the monetization and exploitation of children through AI-powered influencer marketing and family vlogging


Suggested compromises

Accepting that perfect balance between safety, privacy, and free expression may never be achieved, requiring constant compromise and adjustment


Designing AI systems well for children will benefit other vulnerable users and wider user groups, creating broader positive impact


Starting with problem identification and user needs assessment before applying AI solutions, rather than technology-first approaches


Combining transparency about AI system nature and data collection with critical AI literacy education to enable informed choices


Developing AI literacy resources that target both children and parents simultaneously to improve communication and understanding


Thought provoking comments

AI is extremely advantageous when used correctly. But when misused, it can have devastating effects on humans… Young people like me must be part of this conversation. We aren’t just the future, we’re here now.

Speaker

Online Participants (Young people from across the UK)


Reason

This opening statement immediately established the central tension of the discussion – AI as both opportunity and threat – while assertively claiming young people’s right to participate in decision-making. The phrase ‘we aren’t just the future, we’re here now’ powerfully challenges the common dismissal of children’s voices as merely preparatory for future relevance.


Impact

This comment set the entire tone for the workshop, establishing children as active stakeholders rather than passive subjects of protection. It influenced all subsequent speakers to frame their research and recommendations around meaningful youth participation rather than paternalistic approaches.


teens thought that their parents knew more about generative AI than they did. With previous trends, particularly in the early days of the web, and then web 2.0, and social media, kids were always way ahead of their parents in terms of the technology. But in this case, a large, sizable share of teens in all three countries recorded that their parents had a better understanding than they did.

Speaker

Stephen Balkam


Reason

This finding fundamentally challenges the conventional wisdom about digital natives and technology adoption patterns. It suggests a significant shift in how AI technologies are being introduced and adopted, with workplace necessity driving adult adoption ahead of youth exploration.


Impact

This observation reframed the entire discussion about digital literacy and family dynamics around AI. It led to deeper exploration of how AI literacy should be approached differently from previous technology rollouts and influenced subsequent speakers to consider intergenerational learning approaches.


art is actually real… children felt that they couldn’t say that about AI art because the computer did it, not them.

Speaker

Dr. Mhairi Aitken (quoting children from her research)


Reason

This insight reveals children’s sophisticated understanding of authenticity, creativity, and personal agency in relation to AI. It challenges assumptions that children will automatically embrace AI tools and shows their nuanced thinking about what constitutes genuine creative expression.


Impact

This comment shifted the discussion from focusing on AI capabilities to considering children’s values and choices. It introduced the important concept that technology adoption isn’t just about functionality but about meaning and identity, influencing how other panelists discussed the importance of providing alternatives and respecting children’s preferences.


parents who use generative AI tools feel more positive about the impact that this technology can have on their children’s development… when parents are familiar with the technology, when they use the technology, they see it differently.

Speaker

Maria Eira


Reason

This finding reveals a crucial insight about how personal experience with technology shapes attitudes toward children’s use of that technology. It suggests that fear and resistance may stem from unfamiliarity rather than inherent dangers, pointing toward education as a key intervention.


Impact

This observation led to discussion about the importance of adult AI literacy as a prerequisite for supporting children’s safe AI use. It influenced the conversation toward considering family-based approaches to AI education rather than child-focused interventions alone.


children are likely to be the group who will be most impacted by advances in AI technologies, but they’re simultaneously the group that are least represented in decision-making about the ways that those technologies are designed, developed, and deployed

Speaker

Dr. Mhairi Aitken


Reason

This statement crystallizes the fundamental injustice at the heart of current AI development – those most affected have the least voice. It frames the entire discussion in terms of rights and representation rather than just safety or education.


Impact

This comment elevated the discussion from technical considerations to fundamental questions of democracy and rights. It influenced subsequent speakers to consider not just how to protect children from AI, but how to include them in shaping AI’s development.


the goal cannot be the profits, it must be the people

Speaker

Maria Eira (quoting from the children’s video)


Reason

This simple but profound statement cuts to the heart of the tension between commercial AI development and human welfare. Coming from children themselves, it carries particular moral weight and clarity about priorities.


Impact

This comment served as a powerful conclusion that tied together many threads of the discussion. It reinforced the moral imperative for child-centered AI development and provided a clear principle for evaluating AI initiatives.


Should AI ethics for children be separated from general AI ethics?

Speaker

Katarina (online participant studying AI law)


Reason

This question forced the panel to articulate whether children’s needs are fundamentally different from adults’ or simply a subset of universal human needs. It challenged the assumption that child-specific approaches are necessary while opening space to consider the broader implications of child-centered design.


Impact

This question prompted important clarification from panelists about why children need specific consideration while also acknowledging that good design for children benefits everyone. It helped crystallize the argument for child-specific approaches while avoiding segregation of children’s interests from broader human rights.


Overall assessment

These key comments fundamentally shaped the discussion by establishing children as active stakeholders rather than passive subjects, challenging conventional assumptions about technology adoption and digital literacy, and elevating the conversation from technical considerations to questions of rights, representation, and values. The opening statement from young people set a tone of empowerment that influenced all subsequent speakers to frame their research in terms of meaningful participation rather than protection. The research findings about reversed technology adoption patterns and children’s sophisticated value judgments about authenticity added nuance and complexity to common assumptions. The discussion evolved from a focus on safety and education to encompass broader questions of democracy, representation, and the fundamental purposes of AI development. The interplay between research findings and direct youth voices created a rich dialogue that moved beyond typical adult-centric approaches to technology policy.


Follow-up questions

How are influencers (including AI agents as influencers) shaping children’s experiences with AI and social media, and how does this affect their real-life relationships?

Speaker

Mariana Rozo-Paz from DataSphere Initiative


Explanation

This addresses a gap in current research about the influence of AI agents and human influencers on children’s digital experiences and their concrete impact on real-world relationships


What are the long-term impacts of generative AI use on children’s development and well-being?

Speaker

Maria Eira


Explanation

Current research shows contradictory results about whether AI improves or decreases critical thinking skills, indicating need for longitudinal studies


How can AI companions be designed responsibly to support children’s mental health without creating dependency or exacerbating loneliness?

Speaker

Dr. Mhairi Aitken


Explanation

There’s growing interest from children in using AI companions for mental health support, but current tools aren’t designed with children’s well-being in mind


How can generative AI be further leveraged for the support and inclusion of people with disabilities?

Speaker

Ryan (17-year-old youth ambassador)


Explanation

Children showed strong interest in AI supporting those with additional learning needs, but development needs to be grounded in understanding actual needs and challenges


How can AI be designed to reduce power imbalances between children and AI systems, particularly in personal conversations?

Speaker

Elisa from OnePile Foundation


Explanation

Children are in vulnerable positions when communicating with AI about personal issues, requiring design approaches that maintain child agency and independence


How can we develop critical AI literacy that goes beyond technical understanding to include business models and rights impacts?

Speaker

Dr. Mhairi Aitken


Explanation

Current AI literacy efforts focus on technical aspects, but children need to understand the broader implications including data collection, business models, and rights impacts to make informed choices


What are the impacts of using AI bots for therapy, particularly regarding emotional attachments and potential risks?

Speaker

Stephen Balkam


Explanation

Anecdotal evidence shows children and adults using AI for therapeutic conversations, but research is needed on the safety and effectiveness compared to human therapy


How can we address equity gaps in AI access and education between private and state-funded schools?

Speaker

Dr. Mhairi Aitken


Explanation

Research revealed stark differences in AI access and understanding between private and state schools, pointing to important equity issues that need addressing


How can we better understand and address parental awareness gaps regarding children’s personal use of generative AI?

Speaker

Maria Eira


Explanation

Research showed parents are aware of academic AI use but lack knowledge about personal uses like AI companions or seeking help for personal problems


What regulatory approaches can protect children’s rights in AI without violating other rights like privacy?

Speaker

Joon Baek from Youth for Privacy


Explanation

There are concerns that legislation aimed at protecting children in AI contexts might inadvertently compromise other rights, requiring careful balance


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Parliamentary Session 5 Parliamentary Exchange Enhancing Digital Policy Practices

Parliamentary Session 5 Parliamentary Exchange Enhancing Digital Policy Practices

Session at a glance

Summary

This discussion focused on parliamentary approaches to regulating harmful online content while balancing digital safety with freedom of expression and human rights. Parliamentarians from Pakistan, Argentina, Nepal, Bulgaria, and South Africa shared their experiences with digital legislation and the challenges of creating effective regulatory frameworks.


Anusha Rahman Ahmad Khan from Pakistan highlighted the urgent need for social media platforms to respond more quickly to content removal requests, particularly regarding AI-generated harassment content targeting women and girls. She emphasized that delayed responses can have devastating consequences, including suicide, and called for platforms to be more culturally sensitive. Franco Metaza from Argentina’s Mercosur Parliament discussed harmful content including “fatphobia” that promotes eating disorders among young girls, and shared how disinformation led to an assassination attempt on a political leader, demonstrating the real-world dangers of fake news.


Yogesh Bhattarai from Nepal stressed the importance of regulating rather than controlling digital platforms while strengthening democratic institutions and maintaining constitutional protections for freedom of expression. Tsvetelina Penkova from Bulgaria outlined the European Union’s comprehensive legislative approach, including the Digital Services Act, Digital Markets Act, and GDPR, emphasizing human-centric digital transformation and the challenges of enforcement across 27 member states.


Ashley Sauls from South Africa discussed how disinformation about his country affected international relations and highlighted the need for balanced approaches that don’t infringe on privacy and human rights. The discussion also addressed youth engagement in policymaking, cybercrime legislation challenges, and the role of private sector companies in content moderation and capacity building. Participants emphasized the need for multi-stakeholder approaches, international cooperation, and the recognition of digital rights as a potential fourth generation of human rights.


Keypoints

## Major Discussion Points:


– **Platform accountability and content moderation challenges**: Multiple speakers highlighted the struggle with social media platforms’ responsiveness to government requests for harmful content removal, particularly regarding gender-based violence, harassment, and culturally sensitive content. Pakistan’s experience showed platforms treating regulatory requests as optional rather than legally binding.


– **Balancing freedom of expression with protection from harm**: Parliamentarians emphasized the need to protect vulnerable groups (especially women, children, and minorities) from online harassment, disinformation, and harmful content while preserving democratic freedoms and human rights. This tension between regulation and liberty was a central theme across different regions.


– **Legislative frameworks and enforcement challenges**: Speakers shared experiences with cybercrime laws, digital services acts, and content regulation, noting that having laws is insufficient without proper enforcement mechanisms and capacity. The EU’s comprehensive approach (DSA, DMA, GDPR, AI Act) was contrasted with implementation challenges in smaller countries.


– **Youth engagement and digital literacy**: The discussion emphasized involving young people in policymaking processes and the critical need for digital literacy programs to help users identify misinformation, develop critical thinking skills, and navigate online spaces safely.


– **Multi-stakeholder cooperation and capacity building**: Speakers called for enhanced collaboration between governments, civil society, private sector, and international organizations, with particular emphasis on the need for capacity building for parliamentarians and public officials to understand emerging technologies like AI.


## Overall Purpose:


The discussion aimed to facilitate knowledge sharing among parliamentarians from different regions about their experiences with digital governance, content regulation, and creating safer online environments while maintaining democratic principles and human rights protections.


## Overall Tone:


The discussion maintained a collaborative and constructive tone throughout, with speakers sharing both challenges and solutions from their respective contexts. While there were moments of criticism directed at tech platforms and concerns about enforcement gaps, the overall atmosphere remained professional and solution-oriented. The tone became slightly more technical and urgent when discussing specific harms (suicide, harassment, disinformation affecting democracy) but concluded on a forward-looking note emphasizing cooperation and shared responsibility.


Speakers

**Speakers from the provided list:**


– **Anusha Rahman Ahmad Khan** – Former Minister for Technology (Pakistan), worked on cybercrime legislation including the Prevention of Electronic Crimes Act 2016


– **Sorina Teleanu** – Session moderator/chair


– **Franco Metaza** – Parliamentarian from Mercosur (regional parliament of South America covering Brazil, Argentina, Uruguay, Paraguay, and Bolivia)


– **Yogesh Bhattarai** – Member of Parliament representing the Federal Democratic Republic of Nepal


– **Tsvetelina Penkova** – Member of the European Parliament representing Bulgaria


– **Ashley Sauls** – South African parliamentarian


– **Raoul Danniel Abellar Manuel** – Member of the Philippine House of Representatives


– **Bibek Silwal** – Advocate for youth in policy from Nepal


– **Olga Reis** – Private sector representative from Google, covers AI opportunity agenda for emerging markets


– **Anne McCormick** – Private sector representative from Ernst & Young (EY)


– **Amy Mitchell** – Representative from Center for News Technology and Innovation (United States)


– **Audience** – Honorable representative from the Democratic Republic of Congo, Kinshasa


**Additional speakers:**


None identified beyond the provided speaker names list.


Full session report

# Parliamentary Approaches to Digital Governance: Balancing Online Safety with Democratic Freedoms


## Executive Summary


This comprehensive discussion brought together parliamentarians from across the globe to examine the complex challenges of regulating harmful online content whilst preserving fundamental democratic principles. The session, moderated by Sorina Teleanu, featured representatives from Pakistan, Argentina, Nepal, Bulgaria, South Africa, and the Philippines, alongside private sector voices and civil society advocates. The dialogue revealed both shared concerns and divergent approaches to digital governance, with particular emphasis on protecting vulnerable populations, enhancing platform accountability, and fostering international cooperation.


## Key Themes and Regional Perspectives


### Platform Accountability and Cultural Sensitivity


The discussion opened with a powerful intervention from Anusha Rahman Ahmad Khan, Pakistan’s former Minister for Technology, who articulated a fundamental challenge facing governments worldwide. She emphasised that the core issue is not a geopolitical struggle between East and West, but rather “a fight between revenue generation entities versus a revenue curbing request.” This economic framing of platform behaviour resonated throughout the session, highlighting how social media companies prioritise profit over cultural sensitivity and public safety.


Khan explained that “every single post on the social media platform is a revenue generating mechanism,” which creates inherent conflicts with content moderation requests. She shared disturbing examples of platforms’ inadequate responses to government requests for content removal, particularly regarding AI-generated harassment targeting women and girls. She noted that delayed responses can have devastating consequences, including suicide, and called for platforms to demonstrate greater cultural awareness in their content moderation decisions.


Khan also highlighted Pakistan’s innovative approach to AI governance, including an AI-powered Senate chatbot project. She referenced Pakistan’s Prevention of Electronic Crimes Act, which was developed over two years starting in 2014 and enacted in 2016. Her frustration was palpable as she declared: “We are now tired of waiting and I would urge and request all the other parliamentarians to come together to make a joint strategy where we can collectively speak to the social media platforms.”


Franco Metaza from Argentina’s Mercosur Parliament—the regional parliament of South America comprising Brazil, Argentina, Uruguay, Paraguay, and Bolivia with 100 parliamentarians—reinforced these concerns with specific examples of harmful content. Speaking in Spanish, as he had announced, he detailed how platforms promote “racism, xenophobia, homophobia, explicit violence, banalization of the use of drugs, and fatphobia.” He provided particularly disturbing examples of “fatphobia” affecting young girls, noting that 13-14-year-old girls in Brazil are seeking aesthetic surgeries due to harmful content promoting unrealistic body standards.


Metaza also shared how disinformation led to an assassination attempt on a political leader, demonstrating the real-world dangers of inadequately moderated content. He offered a compelling metaphor, comparing unregulated social media consumption to “going at full speed with a vehicle without knowing what is in front of us or without having a traffic light,” which helped frame regulation as a safety necessity rather than freedom restriction.


### Legislative Frameworks and Implementation Challenges


The discussion revealed significant variation in legislative approaches across different regions. Tsvetelina Penkova from Bulgaria outlined the European Union’s comprehensive strategy, including the Digital Services Act (DSA), Digital Markets Act (DMA), GDPR, the European Democracy Action Plan, Media Freedom Act, and the emerging AI Act. Despite criticism of the AI Act, she defended it as “probably the best one which protects people” while ensuring “innovation and growth” alongside “protecting citizens’ rights.” She emphasised the EU’s commitment to human-centric digital transformation whilst acknowledging the substantial challenges of enforcement across 27 member states with different legal traditions and capacities.


In contrast, Yogesh Bhattarai from Nepal advocated for a more collaborative approach, arguing that “digital platforms should be regulated, not controlled.” He stressed the importance of cooperation and collaboration rather than strict governmental control, whilst ensuring constitutional compliance with freedom of speech guarantees. Nepal’s approach involves engaging youth through national and internet governance forums in legislative processes. Bhattarai noted Nepal’s linguistic diversity, with “125 languages and 125 castes,” which adds complexity to content moderation challenges.


Ashley Sauls from South Africa provided multilingual greetings and highlighted his country’s multi-faceted legislative response, including the Protection of Personal Information Act, the Cybercrimes Act, and the Films and Publications Act. He emphasised the importance of multi-stakeholder approaches and warned against policies that might infringe on privacy and human rights. Sauls also introduced the concerning concept of “digital apartheid,” highlighting how AI training can perpetuate historical biases and discrimination.


Sauls shared a powerful example of how disinformation about “white minority genocide in South Africa” affected US government decisions and led to the cancellation of a rugby match between Atlanta Secondary School and Liff Burra Grammar School due to safety concerns. He quoted the Minister of Sport’s philosophy that “a child in sport is a child out of court,” emphasising sport’s role in social cohesion. He also noted South Africa’s recent transition from majority government to a “government of national unity.”


The Philippines’ experience, as shared by Raoul Danniel Abellar Manuel from the House of Representatives, provided a cautionary tale about legislative overreach. He criticised the country’s Cybercrime Prevention Act of 2012, particularly its cyber libel provisions that have been misused against journalists and teachers, demonstrating how well-intentioned legislation can be weaponised against legitimate expression.


### International Cooperation and Human Rights Framework


A particularly thought-provoking intervention came from the representative from the Democratic Republic of Congo, who proposed that the United Nations should recognise digital rights as a fourth generation of human rights. Speaking in French, he argued that such recognition would provide a common framework for national legislation, similar to existing human rights generations, and could lead to constitutional incorporation in many countries.


This proposal received immediate support from Tsvetelina Penkova, who acknowledged it could resolve many enforcement challenges currently faced by individual nations. The suggestion elevated the discussion from practical policy implementation to fundamental questions about the nature of rights in the digital age.


### Youth Engagement and Digital Literacy


A significant portion of the discussion focused on the critical role of young people in digital policymaking. Franco Metaza argued that “youth participation should be transversal across all policy-making rather than segregated into youth-only discussions,” advocating for integrated rather than separate consultation processes.


Bibek Silwal, an advocate for youth in policy from Nepal, emphasised that young people serve as “positive catalysts in policy implementation” and should be involved from initial policymaking through public outreach. He highlighted the importance of digital literacy programmes and critical thinking skills development to help users identify misinformation.


Tsvetelina Penkova noted that young people understand the economic implications of poor regulation, recognising that “the digital economy will shrink without proper regulation.” This insight challenged assumptions about youth attitudes towards digital governance, suggesting greater sophistication in their policy preferences than often assumed.


### Private Sector Responsibility and AI Governance


The session included notable contributions from private sector representatives, creating moments of both tension and unexpected consensus. Olga Reis from Google presented current content moderation efforts, citing statistics about video removal from YouTube, with “55% removed before being watched, 27% removed with less than 10 views.” She also mentioned Google’s AI Campus programme, which has already trained “500,000 officials” in AI literacy.


Anne McCormick from Ernst & Young highlighted the private sector’s need for clarity on AI liability frameworks as adoption spreads beyond large technology companies to smaller economic actors. She emphasised the importance of independent oversight and transparency mechanisms throughout the AI lifecycle.


The discussion revealed interesting tensions in platform relationships. Franco Metaza both praised YouTube Kids as a successful model of controlled digital ecosystems for children whilst simultaneously criticising Google for allowing defamatory content in search results. He specifically cited how searching for Cristina Fernández de Kirchner showed “ladrona de la nación argentina” (thief of the Argentine nation) in Google’s knowledge panel, demonstrating inconsistent standards across different Google services.


## Areas of Consensus and Disagreement


### Strong Agreements


The discussion revealed remarkable consensus on several fundamental principles. All speakers agreed that social media platforms need to take greater responsibility for content moderation and harm prevention. There was universal acknowledgement that protecting vulnerable populations, particularly children and women, must be a priority in digital governance frameworks.


Youth engagement emerged as another area of strong agreement, with all speakers supporting meaningful integration of young people into policymaking processes. Similarly, there was consensus on the necessity of multi-stakeholder approaches involving government, civil society, private sector, and international organisations.


### Key Disagreements


Despite broad agreement on principles, significant disagreements emerged regarding implementation approaches. The most notable tension concerned decision-making authority for content removal, with Anusha Rahman Ahmad Khan advocating for stronger government authority whilst Tsvetelina Penkova insisted on judicial oversight to prevent government overreach.


Speakers also disagreed on the appropriate level of regulation, with Yogesh Bhattarai emphasising light-touch regulation focusing on cooperation, whilst Franco Metaza supported stronger parliamentary regulation comparable to traffic laws. These disagreements reflect deeper tensions between national sovereignty and international coordination.


## Unresolved Challenges and Future Directions


### Implementation and Enforcement


The discussion highlighted persistent challenges in translating legislative frameworks into effective enforcement. Multiple speakers noted that having laws is insufficient without proper implementation mechanisms and capacity. Cultural sensitivity in content moderation remains particularly challenging, with platforms making uniform decisions without considering local contexts that could have severe consequences for users.


### Capacity Building and Education


Several speakers emphasised the critical need for capacity building amongst parliamentarians and public officials to understand emerging technologies like AI. Digital literacy emerged as equally important for general populations, with speakers calling for educational campaigns to help users identify misinformation and develop critical thinking skills.


### Economic and Social Justice Considerations


Ashley Sauls’s introduction of the “digital apartheid” concept highlighted how AI systems can perpetuate historical injustices and create new forms of discrimination. This concern extends beyond technical bias to fundamental questions about who benefits from digital transformation and who bears its costs.


## Recommendations and Action Items


### Immediate Actions


Parliamentarians agreed on the need to create joint strategies for collectively addressing social media platforms, recognising that individual national approaches lack sufficient leverage against global technology companies. Educational initiatives emerged as a priority, with speakers calling for campaigns to teach young people to identify fake news and develop critical thinking skills.


### Medium-term Developments


The proposal for UN recognition of digital rights as a fourth generation of human rights represents a significant medium-term objective that could provide clearer frameworks for national legislation. Platform accountability mechanisms need strengthening, with the YouTube Kids model suggested as a template for broader child protection measures.


### Long-term Structural Changes


The discussion pointed towards the need for coordinated international frameworks rather than individual national approaches. The integration of digital rights into constitutional frameworks could provide more robust protection against governmental overreach whilst ensuring consistent protection standards.


## Conclusion


This parliamentary discussion revealed both the urgency and complexity of digital governance challenges facing democracies worldwide. The session’s most valuable contribution was its reframing of digital governance from technical issues to fundamental questions about power, economics, and human rights in the digital age. Anusha Rahman Ahmad Khan’s characterisation of the struggle as one between “revenue generation entities versus revenue curbing requests” identified core tensions that must be addressed.


The proposal for digital rights as a fourth generation of human rights offers a potential framework for achieving balance between competing interests, but implementation will require unprecedented levels of international coordination. As parliamentarians continue to grapple with these challenges, the experiences shared provide valuable insights into both successful approaches and cautionary tales about legislative overreach.


The path forward requires sustained commitment to multi-stakeholder dialogue, international cooperation, and innovative approaches that can balance platform accountability with democratic freedoms whilst protecting the most vulnerable members of society.


Session transcript

Anusha Rahman Ahmad Khan: But there is this responsibility that goes with governments: that digital progress is upholding human dignity. It is upholding democratic freedom and giving access to everybody. So we experienced in Pakistan, for example, that even when we made the law, the social media platforms continued to treat our requests as if they were two kilometers above the ground, not impacted by the law. And they decided to choose what content they were going to remove and what content they were going to keep on the social media platforms. So there has been and is a continuous issue in our country when the regulator sends out the request to remove content. I will give you an example: this content relates to a girl, a university student, who is being harassed by somebody, and they have created content with AI or other means which looks real. By the time that content is removed, the life of that girl is gone. So when I was legislating, I had dozens of examples of girls actually jumping off the wall, killing themselves, committing suicide and so on. And in my country, even an aspersion on a girl is good enough to kill her. So even if they do not die physically, they are dead emotionally. So we need to be very careful of the fact that, in the culture in which we are living, the social media platforms have to be sensitive to that culture. And this is the real challenge, how to make the social media platforms sensitive to the cultures. For them, it’s revenue. Every single post on the social media platform is a revenue generating mechanism. It’s not a fight between the East and the West. It’s a fight between revenue generation entities versus a revenue curbing request. So we need and expect that with this platform today, where the UN engages through the IGF, we can together use the technology platforms without fearing that the content on these technology platforms is going to be harmful for our children, for our girls, for our women, for the vulnerable. And believing in technology, and being a former minister for technology, we all believe that we have to explore, and we promise and want to ensure that we are going to use the technology for shaping the future of legislation, transparency, and bringing more efficiency and effectiveness to the way parliamentarians work. So in the Senate of Pakistan, for example, advancing the vision of technology adoption, the chairman of the Senate has for the first time taken a concrete step towards developing an AI-powered Senate chatbot. It’s a virtual assistant designed to support lawmakers, secretariat staff and citizens with real-time access to legislative data, procedural guidance and multilingual services. This project proposes full-scale design, development and institutional deployment of the Senate chatbot, transforming it from a promising prototype into a high-impact digital parliamentary assistance tool. So we are working with technology at the same time. My concern, and your question, still take us back to this discussion table: for how long are we going to wait for the technology to be used and absorbed positively while it continues to be abused online by vested stakeholders, and how long will social media platforms take to listen to governments and their requests to remove objectionable content and to secure the vulnerable groups and the non-vulnerable groups equally online?
And we are now tired of waiting and I would urge and request all the other parliamentarians to come together to make a joint strategy where we can collectively speak to the social media platforms and help our vulnerable citizens in our respective countries to ensure that their offline rights are as secure online. This is what my humble request is. Thank you.


Sorina Teleanu: Thank you so much for bringing to the table so many issues. If I may ask a follow-up question, you mentioned you work on a cybercrime law. Was that law passed already in the parliament? Sorry, Ms. Rahman? If I may ask a follow-up question. No, I have to, your voice is actually echoing, yes. Yeah, you have to listen. Yeah. Let me also remove this. So, if I may ask a follow-up question. You mentioned you were working on a cybercrime law. Was it passed in the parliament? Is it approved?


Anusha Rahman Ahmad Khan: Yes, it was made in 2016. I started working on it in 2014. It took me two years of collaborative effort, bringing all the parliamentarians on board, listening to the civil society, listening to the NGOs, listening to the independent groups, listening to the media, because, as I said, anything that happens regarding the Internet is perceived as an activity that is going to curb the freedom of expression, which is not the case. So, it is not about curbing the freedom of expression. We believe in it and we uphold it. It’s about protecting children, women, girls, and all vulnerable segments, and in this case, men and boys are equally vulnerable. Because the dignity of a natural person is extremely important to protect. So, this is what we made the media understand, that we are not targeting electronic media. We are not targeting print media. We are talking about social media, which is full of misinformation, propaganda, and fake news. And we need to protect our citizens from this, because it leads to harassment and it leads to other kinds of vulnerable situations, which need to be looked at and criminalized in our law. So, in 2016, we made the cybercrime law. It’s called the Prevention of Electronic Crimes Act, and it’s a consensus document representing the entire 240 million people through their MPs in both the National Assembly and the Senate.


Sorina Teleanu: Thank you. I’ll get back later to you with one more question, but let me give the floor to Mr. Metaza to share your experiences.


Franco Metaza: Hello. Good afternoon to everybody. I’m going to speak in Spanish, which is one of the official languages of my regional parliament. I want to start by thanking the Department of Economic and Social Affairs of the United Nations, the Parliament of Norway, and the Inter-Parliamentary Union, the UIP, for organizing this important parliamentary track in the framework of the Forum for Global Internet Governance. My parliament is the Parliament of Mercosur, the regional parliament of South America. It is made up of Brazil, Argentina, Uruguay, Paraguay, and Bolivia; the latter is finishing its internal legislation to be able to become a full member. We are 100 parliamentarians, and we are having a very, very heated debate at the moment in our region regarding these topics. To answer one of the questions that Sorina asked at the beginning of this panel: the harmful content that we view with great concern in our region, we are starting to list it and, in some way, to codify what it is about. We are talking about racism, we are talking about xenophobia, we are talking about homophobia, we are talking about explicit violence, banalization of the use of drugs, and one in particular, as you asked for examples, Sorina, I’m going to give an example of a project that I presented in my parliament regarding something that perhaps does not have an exact translation in the language of the speaker. We are talking about fatphobia. What we are pointing out is that there is content so harmful on social media, specifically on Instagram, specifically on TikTok, that it leads young girls, the general population, but we are very concerned about young girls, to behaviors that lead to anorexia and bulimia, that lead to what we call eating disorders. In this sense, we are very concerned about what the images generate. We have done tests: registering on social media, saying that you are a 13-year-old girl, the content they bombard you with is images, in some cases real, in other cases fake, made with artificial intelligence, of extremely skinny bodies, impossible to achieve in a natural way, and then advice to follow extreme diets, and then advice or advertisements about surgeries. In Brazil, for example, 13-14-year-old girls have started going to the doctor on their own to consult about the possibility of having aesthetic surgeries. We are entering a very, very complex situation in terms of harmful content. And as for this dichotomy that exists between, well, regulation or freedom of expression, I want to tell you that it seems to me that regulations, when they are made in parliaments where all the social sectors are represented, all the political expressions, will never go against freedom, because the regulations express the will of the majority. I give you an example. There began to be cars, motor vehicles, in our societies, in the real world. We had to put a speed limit. We had to prohibit children from driving. And that was not against anyone’s freedom. Well, I think that today the permanent scrolling that we are all subjected to is as harmful as, or more harmful than, going at full speed with a vehicle without knowing what is in front of us or without having a traffic light. And finally, I want to pick up something that I heard a lot yesterday, that I heard this morning, and it seems to me that it is a concern that we all have.
At least, well, here there are many stakeholders in the auditorium, but perhaps it is something that I heard especially from parliamentarians, and it is how the other issue, disinformation and fake news, affects our democracy. Democracy that we, as parliamentarians, have the obligation to protect and take care of. Look, I’m going to give you an example. In my country, in Argentina, fake news began to circulate systematically about one of the leaders of my country, who was president twice and who is the main leader of the opposition, whose name is Cristina Fernández de Kirchner, about corruption, with many hate messages, and this went viral all over the networks. What ended up happening? Well, a person who had consumed so many hate messages appeared at the door of her house and shot her in the head. Fortunately, the bullet did not fire, but look how far fake news and disinformation can go. Today, she has been arrested under the current president, in the framework of a great confusion of fake news and disinformation. So, we have the obligation, as parliamentarians, to put a stop to it, or at least to temper what social networks are, what an Internet without governance is, as the senator said here, to protect our democracies. Our democracies are the last thing we have left; in this complex moment, where at any time a nuclear bomb could explode, the only thing we have left to safeguard humanity are our democracies. Let’s please take care of them. Thank you.


Sorina Teleanu: Thank you also, including for bringing up some of the metaphors you raised. I like the one about having rules for the use of social media, similar to having rules for driving a car on the road. We can unpack that a little later. One curiosity: within the parliament of Mercosur, are you having these kinds of debates, and are you looking into doing something collaboratively across the countries in dealing with harmful online content and creating a safer online space? Again, not necessarily only in terms of legislation, but also looking at how to build more awareness, more capacity among users themselves, so they can be better prepared to deal with harmful content. Because, I guess, not all the answers are in passing a law and then expecting it to be applied, but also in seeing how you can prepare people to deal with these kinds of things. Any reflections?


Franco Metaza: Yes, Sorina, what you ask is important. Today, in the parliament of Mercosur, if there is something on which we have consensus, it is that companies can do more than what they are doing. That they have the budget, that they have the money, and that they are not making enough efforts to be able to put a stop to harmful content and fake news. On that we are absolutely in agreement. We believe that we have to go there. Thank you.


Sorina Teleanu: I think I’m already seeing a common thread about more responsibility for the private sector. And I know we have some private sector representatives in the room, and I think we will want to hear from them as well, but more on that later. All right, let us continue, and let’s hear from Mr. Bhattarai, please.


Yogesh Bhattarai: Thank you very much, Sorina. Excellencies, distinguished delegates, fellow parliamentarians, ladies and gentlemen, friends from the media. Good afternoon. It is a profound honor to be here today. Representing the Federal Democratic Republic of Nepal, I extend my sincere gratitude to the organizers, especially the UN and the government of Norway, for creating this space where we can share our experiences and seek collective wisdom. The topic before us, building a healthy information ecosystem, is one of the defining challenges of our era. This is not only a technical issue; it is a milestone in our ongoing democratic journey. Like many of you, we have witnessed the transformative power of the digital age. The Internet and social media have opened up avenues for expression, connected our diverse communities, and given citizens a powerful platform to engage in the civic life of our nation. It has been a remarkable force for democratization. However, this progress is accompanied by complex challenges. We grapple with the very real harms of disinformation that can tear our social fabric, and the rise of online harassment that seeks to silence vulnerable voices. These are legitimate concerns that every responsible government must address. In Nepal, we are currently in the midst of a profound national conversation about how best to strike the balance between upholding freedom of expression and protecting our citizens from harm. This debate is reflected in the legislative proposals currently under discussion, including the proposed Social Media Bill and the Information Technology Bill. These proposals stem from a genuine desire to create a safe digital environment. As a parliamentarian committed to the universal values of human rights and people-based democracy, I believe we must proceed with utmost care. The Constitution of Nepal guarantees freedom of speech and expression, and Article 19 establishes the right to information and communication as fundamental rights. Parliament will not accept any law that contradicts the provisions of the Constitution. In Nepal, we have the National Information Commission and the Press Council Nepal as independent oversight agencies. Myself and other MPs have been participating in programs organized by civil society organizations, where there are discussions on the right to information and communication. We are concerned about the negative impact of misinformation and disinformation on society. Everyone should be aware of the possibility that it can divide society by spreading confusion about caste, religion, race, gender, and professions. Misinformation and disinformation are also having an impact on the tensions and wars taking place in different parts of the world today. This has also become a challenge for national security. I am convinced that only civil liberties, human rights, open societies, democratic competition, equal access, and citizen resilience can make the state accountable to its citizens. The challenges brought about by the revolution in the digital sector should strengthen the sovereignty of the nation and the people. It should support world peace and humanity. For this, digital platforms should be regulated, not controlled. Cooperation, collaboration, and solidarity should be strengthened. I am firmly convinced that the most effective and sustainable path forward lies in the empowerment and strengthening of democratic institutions.
We believe that only a healthy information ecosystem can make healthy democratic practices strong and accountable. I believe that the Internet and digital platforms will connect people’s hearts, make life easier, and bring marginalized communities into the mainstream. Let us reaffirm our collective commitment to this principle. Let us share not just our challenges, but our highest aspirations. I am confident that through collaboration and a shared dedication to human rights, we can build a digital future that is not only safe and orderly, but also open, vibrant, and fundamentally free.


Sorina Teleanu: Thank you so much. I took quite a lot of notes while you were speaking, and I would like to get back to some points, maybe also later. But right now, I like how you said that we need responsible governance, and that states can and should be accountable to their own citizens. I think those are important points to keep in mind when you work on legislation. And because you said that social media has to be regulated but not controlled, my question would be: how are you interacting with technology platforms as you work on this legislation in the country? Do you have any discussions with them? How is that relationship going?


Yogesh Bhattarai: Yes. Recently, the government submitted the bill on social media and digital platforms, and we discussed it with many stakeholders from different sectors and different organizations, and the government requested suggestions from the different stakeholders. The process is ongoing, but not yet concluded. So I hope we will make a more effective law on social media and internet access, and especially on cybersecurity as well.


Sorina Teleanu: Thank you. Moving on to Ms. Penkova. Thank you.


Tsvetelina Penkova: Thank you, Sorina. I will touch upon the question about specific legislation, because we have heard a lot of examples, but we have the strong belief that Europe is still playing a leading role when we are speaking about digital legislation. Of course, we are still at the implementation stage, but let’s keep in mind that when we speak about EU legislation, we are representing 27 member states. So, if you allow me, I will start by emphasizing some of the key and most important EU laws, and it will not surprise many people in the room, as it was mentioned many times: I’m starting with the Digital Services Act, which is the flagship EU legislation when we’re speaking about the regulation of the digital space. The DSA is meant to tackle a lot of the issues and problems that have already been mentioned throughout today’s discussions: protecting minors and vulnerable groups, tackling cyber violence, tackling harmful content and disinformation. So everything, more or less, is part of the content of this very key and important legislative framework of the EU. But, of course, it cannot act on its own, so we need supporting legislation, and that’s where we come to the Digital Markets Act, for instance, which complements the DSA by ensuring fair competition in the digital economy. We believe this is key, as the DMA promotes greater choice for consumers and the interoperability of digital service providers. Here I also have to mention data governance and the GDPR. Those are key pieces of legislation that reinforce individuals’ control over their data, while at the same time promoting trustworthy data sharing. So we are speaking about protecting human rights in the digital space. When I mention the GDPR, I’m sure this is probably one of the most popular and most controversial EU laws, but it also needs improvements, updates, and additions. Only last week, for instance, in the EU institutions, we finished the negotiations on the procedural rules for handling cross-border cases. So, when we’re speaking about digital legislation, you have to be very pragmatic: the problems we are resolving today will be very different in tomorrow’s reality. We have to be very flexible, and that has been extremely challenging for regulators across the world, I would say, because you cannot always foresee the challenges in such a fast-growing and developing segment of the economy as the digital field. I’ve spoken a lot about human protection and the main legislative framework that the EU provides, but allow me to mention two other key initiatives we are working on, one of them still a work in progress, both focused on media freedom. The European Democracy Action Plan, which is actually a plan, not necessarily a piece of legislation, aims to strengthen media freedom while at the same time promoting pluralism. Basically, it combats disinformation in the very specific cases that we have been observing quite a lot, especially in the context of elections, for instance, and foreign interference. Those are quite significant global challenges that we see at the moment. And of course the Media Freedom Act, which is at the moment under negotiation, tries to protect the independence of the media and ensure transparency in media ownership and advertising as well.
So, there are many, many specific examples that try to resolve the common problems we are facing across the globe, but if you ask me to summarize the key priorities we have as Members of the European Parliament when we work on this legislation, I would focus on four main ones for today’s session, although there are many more. The first: we really try to keep to a human-centric digital transformation, a digital transition that protects citizens’ rights. The second: combating online hate and disinformation. As I mentioned, the DSA’s main goal is probably to ensure that there are stronger enforcement mechanisms against cyber violence, and we can go into more detail if needed. Third, and it was mentioned many times: digital literacy and resilience. Without tackling this issue, none of these laws or enforcement efforts would be understood, accepted, and successful. So the EU strategy at the moment focuses quite significantly on digital education. And last, but not least, children’s online safety. There are many sessions dedicated to this at the IGF; tomorrow we are having one with the Youth IGF, and the younger generation really has a very significant role to play in ensuring that this protection actually targets the most vulnerable and the minors. And if you allow me the last 30 seconds: when I’m speaking from the EU perspective, as I said, we have to take into account that we have 27 member states, and each one of them has a very different experience in enforcing this legislation. I am from a small member state, Bulgaria, and we are actually still struggling to enforce and implement all the legislation I have listed. But we have a very active civil society sector that, at the moment, is launching a lot of campaigns teaching the younger generation to identify fake news and to apply a bit more critical thinking. What I mean by that is basically trying to locate and analyze the sources of the information, where the information is coming from, because we have observed that the younger generation often does not ask that question: they just see the information and accept it. I wanted to mention that example because, yes, we do face a lot of challenges, and some of this legislation is very, very complicated, but once you have state support, regional governance, and an active civil society, nothing is impossible.


Sorina Teleanu: Thank you so much for raising, I think, two important points. The first one is on enforcement: it’s one thing to have laws in place, but are authorities at the national level, as you’re saying, empowered to actually put those laws into practice? I’m from your neighboring country, Romania, and I’m seeing the same challenges; it’s not easy to put all of that in place. Excellent points on literacy, capacity building, education, and building critical thinking in young users, but also in all of us. I think we would all benefit from a bit more critical thinking when we interact with digital technologies. If I may add one more thing that you might want to reflect on: how is the AI Act also connecting to all of this when it comes to safer online environments and more transparency from private actors, for instance?


Tsvetelina Penkova: I’m sure some people would not agree with me, but I think the AI Act, as it stands in European legislation, is probably the best example of legislation that, again, protects people. Of course, we have seen a lot of criticism, but what we want is to ensure that there is innovation and growth while, first of all, protecting citizens’ rights and ensuring that there is enough time for consumers to understand all the risks and challenges before a technology is put to very wide use. I think that was the protective mindset of the European Parliament when we were working on that legislation, and that’s why it faced a lot of criticism.


Sorina Teleanu: Thank you. I think we can also come back to that later. All right, let’s hear from our final, but not least, speaker, Mr. Ashley Sauls, please.


Ashley Sauls: Thank you very much, Sorina. Esteemed parliamentarians, distinguished guests and fellow stakeholders in the realm of digital governance. Being from a country where our constitution protects religious freedom, in line with my faith, I want to greet you in the name of our Lord and Saviour, Jesus Christ, and in my First Nation Indigenous Bushman mother tongue, at the brink of extinction, Mwenke Awunneki. It is with great honour that I address you today at this pivotal forum where we converge to deliberate on the future of digital policy practices. As we navigate the complexities of our increasingly interconnected world, South Africa stands as a testament to the transformative potential of digital technologies while simultaneously confronting unique challenges that demand thoughtful and inclusive policy frameworks. A practical example is the recent disinformation about a white minority genocide in South Africa, where an executive decision by the US government was largely made based on online information. The ripple effect of that also fuelled a narrative about my race, classified as coloured, as a violent and gangster-ridden group. As a result, a local school, Atlanta Secondary School, was to host Liff Burra Grammar School from the UK in South Africa for a rugby match this July, but it was cancelled, and it was cancelled because parents feared what was said in the White House about our country. Our current Minister of Sport says a child in sport is a child out of court. My fellow parliamentarians from the UK, maybe you can help us convince management and parents to contribute to a different narrative through sport by ensuring that that match actually takes place. I hope somebody is listening to me about that. In South Africa, we recognise the profound impact of the digital landscape on our socio-economic fabric. With over 60% of our population accessing the internet, we see an unparalleled opportunity to enhance educational outcomes, stimulate economic growth and promote social inclusion. However, these opportunities are accompanied by significant obstacles, including the digital divide, cyber security threats and the need for robust regulatory frameworks that protect the rights of all citizens. Thus far, we have enacted the Protection of Personal Information Act, the Cyber Crimes Act and the Films and Publications Act, which regulates these platforms. To effectively enhance our digital policy practices, we have adopted a multi-stakeholder approach that engages government, civil society, the private sector and the public. This collaborative model is essential for fostering an inclusive digital economy where benefits are equitably distributed and innovation is harnessed to address local challenges. Moreover, as we advocate for robust cyber security measures, we must ensure that they do not infringe upon the rights to privacy and freedom of expression. South Africa is committed to aligning its digital policies with international standards, promoting a balanced approach that safeguards both security and fundamental human rights, as we would say in my mother tongue, goutes moet balance. By leveraging our position as a member of the African Union, we aim to encourage a continental dialogue that addresses common challenges and fosters regional cooperation in the realm of digital policy. In conclusion, the South African experience underscores the need for a proactive rather than the current reactive approach to digital governance.
As we gather here today, let us reaffirm our commitment to a digital future that is inclusive, secure and respects the rights of all citizens. Together, we can craft policies that not only uplift our respective nations but also contribute to a more equitable global digital landscape that expresses the heart of politics, that prioritizes people above profits. Eyo. Thank you. Siyabonga kakulu.


Sorina Teleanu: Thank you as well, also for highlighting what some of the previous speakers have mentioned: that there can and should be a balance between protecting safety and security and ensuring the protection of human rights, and that we don’t have to give up one to protect the other, or the other way around. So thank you for highlighting that again. You mentioned the African Union, so my follow-up question would be: are there any examples of initiatives, discussions or projects being implemented, or even put in place right now, at the African Union level dealing with these issues that you might want to share with everyone? Sorry, again, I’m doing that. Apologies.


Ashley Sauls: Well, not specific programs. I don’t think it is as aggressive as one would want it to be. But I’d rather touch on the IGF approach. We have the South African Internet Governance Forum, and on that level, regionally and also continentally, there are a lot of programs and initiatives. And, I think for the first time because of that approach, we now have parliamentarians participating; it is the first time that we actually join the forum, and it is because of those engagements at that level.


Sorina Teleanu: Thank you. I’m glad to have you on board. All right, we’re kind of running out of time, and there are interventions to be made from the room. So let’s see. If you could introduce yourself, please.


Bibek Silwal: Thank you, Sorina. My name is Bibek Silwal and I’m from Nepal. I’m an advocate for youth in policy. So thank you very much to all of the members of parliament and senators; it was very enlightening to hear about the work that has been going on, not just across the region but across the continents, with very different focus areas and different reasons. My question is regarding the involvement of youth in digital policymaking, or, let’s say, policymaking in general. In every process where youth are involved, I think their involvement amplifies the impact of the policymaking, whether in terms of implementation or in the initial policymaking process. Youth are always a positive catalyst for reaching the last mile. So my question is to all of the parliamentarians, in your regions: how have you been involving youth in policymaking, or, let’s say, in public outreach and awareness programs, and how do you plan to engage the youth in your specific events or policies in the days to come? Thank you.


Sorina Teleanu: Thank you. Let’s take the second question as well.


Raoul Danniel Abellar Manuel: Hello to fellow parliamentarians. I’m Raoul from the Philippines, a member of the Philippine House of Representatives. I’d like to ask for your ideas, or maybe concrete experiences, about combating cybercrime, because in the case of the Philippines, we’ve had the Cybercrime Prevention Act since 2012. Unfortunately, it contained a cyber libel provision that we already flagged years ago. We had foreseen that it could be used by abusive or repressive leaders, and it turned out that it has actually been used in recent times to go after journalists, and even teachers who simply held opinions that contradicted those of the government of the day. So right now, we are discussing a possible review of and amendments to the cybercrime law that we have, and we really are very guarded with that process. So there might be thoughts coming from fellow parliamentarians. Thank you.


Sorina Teleanu: Thank you as well. Shall we take these two questions and then continue? Anything on youth engagement or experiences with cybercrime laws? Would anyone like to share?


Franco Metaza: Well, regarding young people, in the case of my country, we strongly encourage the participation of young people. You can vote from the age of 16, and we are proud to have a large number of young parliamentarians in both of our chambers. So we understand the participation of young people from a transversal point of view: every time we make a law, when we listen to all parties, we obviously listen to young people. What we don’t like is to segregate young people, I mean, where young people talk among young people and solve the problems of young people, and the rest of the world follows separate paths. For us, youth must be transversal to political construction in society.


Tsvetelina Penkova: The young people we have consulted had many key messages, but the ones I remember were, for instance, that the digital economy will shrink if we don’t regulate it. So the young generation understands that, and they want to be involved, they want to be asked, and they want to be part of the conversation. Children do require protection, but not in a way that limits their rights. An interesting point they actually brought up was that bloggers are not restricted in terms of the content they publish, so this is something regulators should pay attention to; they signalled that. So the young generation is very much ahead of many of the legislators when it comes to new trends, and they are an active stakeholder, so it’s not a matter of whether they want to be involved, it’s a matter of us going there, reaching out, and asking them. And on the comment on cybercrime, I’m just going to give an example of how it is dealt with in the DSA, for instance, when we’re speaking about cyber violence: illegal content has to be taken down without delay from the platforms once it is detected, and this decision has to be taken by a judge, not by the government. That is one way to tackle the specific example that was given.


Yogesh Bhattarai: In Nepal, you know, Nepal is a very specific country, because we have so many languages, around 125 languages, and 125 different castes, and Nepal is between China and India, two very big countries, so that issue is very special for us as well. And youth engagement is very important. We have a national Internet Governance Forum, and also a Youth Internet Governance Forum in Nepal, so our parliamentarians are engaged with them in making laws and in other legislative processes. And the second thing, on the cybercrime issue: we have a cybercrime law, and there is a branch within the police, a cybercrime branch, which investigates cybercrime issues and submits cases to the court, and the court then judges the cybercrime case.


Ashley Sauls: Thank you, Sorina. I think maybe I should start off by saying, let me be honest and transparent, that in South Africa we haven’t really had an emphasis on youth involvement as much, but with the shift in our governmental structure nationally, that’s beginning to change. Many of you would know that we had a one-party majority government since 1994, and now there is what is called the Government of National Unity, a coalition government, so there are different lines, different approaches coming together to form one government, and this has helped make youth involvement practical. The leaders of the South African IGF, both the chairperson and the deputy chairperson, are young people, and they are here at the conference. And, like I said, this is the first time that we as parliamentarians are forming part of this, and that is because of this different approach to government, which now includes the voices of young people. And because we listen, I’m here today. So I think that’s a good step in the right direction for South Africa.


Sorina Teleanu: Thank you, everyone, for sharing the experiences. I think there were a few more points, if you would like to get back to the mic before we wrap up the session. There’s a mic next to you.


Audience: Thank you very much, Madam. I am an honourable member from the Democratic Republic of the Congo, Kinshasa. I would like to raise a question, I don’t know, I want to speak in French, please. May I? I would like to raise a somewhat more technical and really substantive question, because every time we talk about legislation and human rights, we all know that, yes, we are legislators, but not just anyone votes or writes a law. And it has been said here that most parliamentarians are not well equipped. But I would like to recall the role that the United Nations has played in the recognition of human rights. When the United Nations understood the realities of the world, around 1945-1948, they spoke of civil and political rights. They recognized them as the first generation of human rights. Then came the rights linked to the economy and to work; they created the second generation of human rights, economic and social rights, if we can put it that way. Then, speaking of ecology, the United Nations recognized the challenges of the moment and created the third generation of human rights, the collective rights linked to the environment. But here it is clear that, concretely, the United Nations has not yet recognized digital rights as part of a fourth generation of human rights. Why do I say this? Because when we read the constitutions of our countries, we see that many countries have taken up the work of the United Nations as fundamental rights. It is clear that if we speak of civil and political rights, we know what we are talking about. If we speak of economic rights, we know what we are talking about. But when it comes to digital rights, we speak of human rights, we speak of legislating, but in reality we have not yet sat around a table to say that digital rights are indeed part of a fourth generation of human rights. If the United Nations takes that step, you will notice that many countries will follow, and perhaps it will even be written into our constitutions and our laws. Because, as I said earlier, legislating is good, it is a wish, but you need the technique to legislate. And it is clear that our contribution as parliamentarians is that we need a common element, a common instrument, as we have with the three generations of human rights, and with a generation of digital rights. Thank you.


Sorina Teleanu: Thank you. We’ll get back to that. And the second point, please.


Olga Reis: Thank you so much. My name is Olga Reis and I represent the private sector here. I work at Google and I cover the AI opportunity agenda for the region of emerging markets. First of all, thank you so much for this insightful conversation. I wanted to highlight a couple of points and then maybe react to some of the points that were raised during the panel discussion. We at Google look at technologies such as AI as a truly transformative, once-in-a-generation opportunity, especially for emerging markets. But we also recognize that such technology should be developed and deployed boldly, responsibly, and together with the international community, the public sector, civil society and our users. And one of the ways I believe this technology should be used in the context of content regulation, because there was a great deal of discussion around content regulation during this session, is that we, as the company that manages the YouTube platform, utilize AI to tackle bad content on our platform. I just pulled out some statistics as we were talking about content moderation: in the first quarter of 2025, so January, February, March of this year, we took down 8.6 million videos on YouTube that did not comply either with our own policies or with the respective policies of the markets where we operate. And 55% of those 8.6 million videos were removed before they were watched at all, meaning that they were uploaded by content creators but detected automatically before being seen. A further 27% of this bad content was removed with fewer than 10 views. That shows the scale at which we can actually use a technology like AI to tackle bad content on our platforms. However, I wanted to use this opportunity not to talk about content, although I felt compelled to bring up those statistics, but about the need for companies like Google to work on capacity building for public officials and parliamentarians, including around the use of AI in their work, but also around how AI can contribute to driving economic growth, especially in emerging markets. Two programs to highlight that we as a company are running. Next week we are gathering a number of regulators from the MENA region at our London office to talk for three days about regulation and the challenges around regulation, including on AI, cloud and content issues. This is something we have been doing for many years already, and we have similar programs around the world, especially in emerging markets. The second program I wanted to highlight is available for everyone and built specifically for public officials. It is called AI Campus, and we built it in cooperation with Apolitical, an NGO based in the UK. It is designed to upskill public officials on AI issues; we already have seven courses available on different aspects of AI, and 500,000 officials have already gone through this online training. I would encourage and invite all public officials present in this room to make use of this content, and we will make sure we keep updating and developing it, because the technology moves and evolves very quickly. Thank you so much once again.


Sorina Teleanu: Thank you. And we have one more point. Okay, a few more points, if we can be quick; otherwise I will be taken down here.


Anne McCormick: Hello, I’m Anne McCormick from Ernst & Young, EY. Also private sector, but a different perspective. It is very important not to oversimplify the private sector: small and big enterprises, innovators, but also the organizations we work with as clients across very different sectors and across almost all the countries in this room. We see there is a need for clarity on what reliable AI is. Liability gets transferred to more and more economic actors, small and big, as we adopt, embed and deploy AI. So I would urge policymakers and legislators to look at the health and dynamism of their economy and consider the different parts of the private sector, not just the large tech companies that are in the headlines, but the similar, and in some cases very different, needs and interests of the economic players and companies that are adopting and embedding AI and who are increasingly concerned about AI governance. We see company leaders and board members, but also investors and insurance companies, asking: how do you know what the AI you are buying, whatever its brand, can do? How do you know its limits and its risks? Are you potentially liable? How are you going to deploy it with confidence? How are you going to make sure that your employees, your own clients and your reputation as a company are protected? So it is very important not to over-regulate or regulate badly, but it is important that there are the right mechanisms to encourage disclosure, encourage transparency rather than a black box, and encourage accountability through the life cycle of an AI system, with independent oversight, possibly independent assurance or assessments, so that everybody can use this extraordinary technology with confidence and we get the best out of it. There are multiple economic voices; the private sector has many different facets, and I really want to emphasize this as adoption grows. Thank you very much.


Sorina Teleanu: Thank you. And there was one more point, if we can be very fast, please.


Amy Mitchell: Thank you so much, I’ll be fast. Amy Mitchell, with the Center for News, Technology and Innovation in the States. I had a quick follow-up thought and question, if you have a second to respond, on the digital information space. It was great to hear the topics of freedom of expression, of thinking about the balance of safeguarding as well as maintaining freedom of expression, and then media freedom specifically in the new EU initiative that is being passed. I am curious about the thinking on how one puts definitional language around those things. As we know, public access to information is vast today, and the range of those producing journalism is moving from the very traditional space to all kinds of different producers, including in some cases citizens themselves, who can be in need of that safeguard and protection of freedom. So I would be curious, as you are developing these acts, about the thinking around the definitional language, and also about safeguarding on the enforcement side, so that, as governments change over time inside a country, these provisions do not end up being used in a way that can cause harm. Thank you.


Sorina Teleanu: Thank you. I’m looking at the hosts and asking if we can take one more minute per speaker to reflect on these points. Okay. Thank you so much. Let’s try to reflect on some of these issues.


Franco Metaza: Well, two remarks for the Google executive. What’s your name? Olga. One good and one bad. I’m going to speak in Spanish, sorry. One good and one bad. I think the YouTube Kids experience is a very virtuous one, because it generates a different digital ecosystem. Social networks do not have that. It is as if, in real life, we allowed children to enter a casino: when they enter Instagram, it is like a child entering a casino or a nightclub. The YouTube Kids experience of generating a distinct ecosystem, with a fully controlled algorithm, seems very successful to me, and it is something we can demand from the rest of the companies. And then a criticism. I don’t know if you remember, Olga, but in August 2020, in Google Argentina’s knowledge panel, when you searched the name of the then vice president, Cristina Fernández de Kirchner, it described her as a thief of the Argentine nation. A thief. That appeared in the knowledge panel, and there was a lawsuit about it. Well, if this kind of thing happens to the vice president of the country, and with a company as big as Google, imagine how unprotected everyone further down is. That is what has shaped the common sense in Argentine society, such that today President Milei has dared to put her, the main opposition leader, in prison. Thank you.


Ashley Sauls: I think, just on the AI element, from an African perspective, there should also be a balance between the importance of the well-being of people and profits. My country is known for its apartheid history, and we have realized that in AI training there is still the risk of what I would call digital apartheid. This is quite a unique danger for us, and there is no attention given, especially by the private sector, to the importance of this, even in terms of profiling based on historic racial segregation. To make it practical: I look around the room and I am probably the darkest person in the room. If AI picks that up and differentiates even on that level, applying a darker profiling to someone who looks like me, and if that is now repeated, then we cannot rejoice about a digital future. We should be concerned that that digital future is repeating an ugly history. Thank you.


Sorina Teleanu: Thank you as well


Tsvetelina Penkova: Just very briefly, on the remark about a fourth generation of rights and the need for a common approach: I absolutely agree, because I actually believe that this would probably resolve the enforcement issues as well. You made a very valid point that it is very difficult to enforce something which is not defined or not well understood, so point taken.


Sorina Teleanu: Just to add quickly on the point on digital rights: it’s true that we don’t really have a UN instrument dealing with them, but there are quite a few Human Rights Council resolutions, for instance, which say clearly that the same rights people have offline must also be protected online. So at least we can use that as a starting point. We’ve taken 15 more minutes of everyone’s time. Thank you so much to all of you for contributing and for still being in the room. We hope this has been useful as the last session of the parliamentary track. I know something will still happen in this room, so please do not leave, and my colleagues will tell you a bit more. Thank you so much, and good luck with the rest of the IGF.



Anusha Rahman Ahmad Khan

Speech speed

141 words per minute

Speech length

902 words

Speech time

382 seconds

Social media platforms prioritize revenue over cultural sensitivity and fail to respond adequately to government requests for content removal, leading to serious harm including suicide cases

Explanation

Social media platforms treat government requests for content removal as revenue-curbing measures rather than legitimate regulatory concerns. They decide independently what content to remove or keep, showing insensitivity to local cultural contexts where even minor aspersions can have devastating consequences.


Evidence

Example of a university student being harassed with AI-generated fake content that looked real – by the time content was removed, her life was destroyed. Dozens of examples of girls committing suicide due to online harassment. In Pakistan, even an aspersion on a girl can be enough to kill her emotionally if not physically.


Major discussion point

Regulation of Social Media Platforms and Content Moderation


Topics

Human rights | Sociocultural | Legal and regulatory


Agreed with

– Franco Metaza
– Olga Reis

Agreed on

Social media platforms need to take greater responsibility for content moderation and harm prevention


Disagreed with

– Tsvetelina Penkova

Disagreed on

Decision-making authority for content removal


Pakistan’s Prevention of Electronic Crimes Act (2016) was developed through collaborative effort to protect vulnerable segments while upholding freedom of expression

Explanation

The law took two years to develop through extensive consultation with parliamentarians, civil society, NGOs, independent groups, and media. It specifically targets social media misinformation, propaganda, and fake news rather than traditional media, aiming to protect children, women, girls, and all vulnerable segments including men and boys.


Evidence

Law passed in 2016 after starting work in 2014. Described as a consensus document representing 240 million people through their MPs in both National Assembly and Senate. Focus on protecting natural person dignity and criminalizing harassment leading to vulnerable situations.


Major discussion point

Legislative Frameworks and Cybercrime Laws


Topics

Legal and regulatory | Human rights | Cybersecurity


Parliamentarians should create joint strategies to collectively address social media platforms and protect vulnerable citizens globally

Explanation

Individual government requests to social media platforms are often ignored or inadequately addressed. A collective approach by parliamentarians across countries would have more leverage to ensure that offline rights are equally protected online for both vulnerable and non-vulnerable groups.


Evidence

Personal experience as former technology minister showing that social media platforms continue to ignore government requests and treat them as if they were ‘two kilometers above the ground, not impacted by the law.’


Major discussion point

International Cooperation and Multi-stakeholder Approaches


Topics

Legal and regulatory | Human rights | Sociocultural


Agreed with

– Ashley Sauls
– Olga Reis
– Sorina Teleanu

Agreed on

Multi-stakeholder approaches are necessary for effective digital governance



Franco Metaza

Speech speed

155 words per minute

Speech length

1301 words

Speech time

501 seconds

Companies like Google can do more than they are currently doing to tackle harmful content, as they have the budget and resources but are not making sufficient efforts

Explanation

The Mercosur parliament has reached consensus that technology companies possess sufficient financial resources and technical capabilities to address harmful content and fake news more effectively than their current efforts demonstrate. There is agreement that these companies should increase their commitment to content moderation.


Evidence

Consensus reached within Mercosur parliament (representing Brazil, Argentina, Uruguay, Paraguay, and Bolivia with 100 parliamentarians) that companies have the budget and money but are not making enough efforts.


Major discussion point

Regulation of Social Media Platforms and Content Moderation


Topics

Legal and regulatory | Sociocultural | Economic


Agreed with

– Anusha Rahman Ahmad Khan
– Olga Reis

Agreed on

Social media platforms need to take greater responsibility for content moderation and harm prevention


Content targeting young girls promotes extreme dieting, impossible body standards, and leads to eating disorders, with 13-14 year olds seeking aesthetic surgeries

Explanation

Social media platforms, particularly Instagram and TikTok, bombard young users with harmful content promoting unrealistic body images through AI-generated or real images of extremely thin bodies. This content includes advice for extreme diets and advertisements for surgeries, leading to serious eating disorders among young girls.


Evidence

Tests conducted by registering as a 13-year-old girl on social media showed bombardment with images of extremely skinny bodies (some AI-generated), extreme diet advice, and surgery advertisements. In Brazil, 13-14 year old girls have started consulting doctors independently about aesthetic surgeries.


Major discussion point

Harmful Content and Its Impact on Vulnerable Groups


Topics

Human rights | Sociocultural | Cybersecurity


Disinformation can lead to real-world violence, as seen with assassination attempts on political leaders fueled by fake news and hate messages

Explanation

Systematic circulation of fake news and hate messages on social networks can escalate to physical violence. The spread of disinformation creates an environment where individuals consume so much hateful content that they may act violently against targeted individuals.


Evidence

Example from Argentina where fake news about corruption and hate messages against Cristina Fernández de Kirchner went viral on networks. A person who consumed these hate messages appeared at her house and shot at her head – fortunately the bullet did not fire. She has since been arrested by the current president amid confusion of fake news and disinformation.


Major discussion point

Harmful Content and Its Impact on Vulnerable Groups


Topics

Cybersecurity | Human rights | Sociocultural


Regulation through democratic parliaments representing all social and political expressions will never go against freedom, similar to traffic laws for vehicles

Explanation

When regulations are created in parliaments where all social statements and political expressions are represented, they express the will of the majority and therefore cannot be against freedom. Just as society created speed limits and age restrictions for driving when cars were introduced, similar reasonable regulations are needed for social media.


Evidence

Analogy provided: when motor vehicles appeared in society, speed limits were established and children were prohibited from driving – this was not against anyone’s freedom. Comparison made that permanent scrolling on social media is as harmful or more harmful than driving at full speed without knowing what lies ahead.


Major discussion point

Balancing Freedom of Expression with Safety and Protection


Topics

Legal and regulatory | Human rights | Sociocultural


Disagreed with

– Yogesh Bhattarai

Disagreed on

Level of regulation needed for digital platforms


Youth participation should be transversal across all policy-making rather than segregated into youth-only discussions

Explanation

Rather than creating separate spaces where young people only discuss among themselves and solve youth-specific problems, young people should be integrated across all political construction in society. This transversal approach ensures youth perspectives are included in all policy areas rather than being isolated.


Evidence

Argentina allows voting from age 16 and has a large number of young parliamentarians in both chambers. Every time they make a law and listen to all parties, they include young people in the consultation process.


Major discussion point

Youth Engagement in Digital Policymaking


Topics

Human rights | Legal and regulatory | Sociocultural


Agreed with

– Tsvetelina Penkova
– Yogesh Bhattarai
– Ashley Sauls
– Bibek Silwal

Agreed on

Youth engagement in digital policymaking should be meaningful and integrated


Disagreed with

– Bibek Silwal

Disagreed on

Approach to youth engagement in policymaking


YouTube Kids provides a successful model of controlled digital ecosystem for children that other platforms should emulate

Explanation

YouTube Kids creates a separate digital ecosystem with a completely controlled algorithm specifically designed for children, unlike other social media platforms that allow children into adult-oriented spaces. This approach should be demanded from other companies as it provides appropriate protection for minors.


Evidence

Comparison made that allowing children on regular Instagram is like allowing a child to enter a casino or nightclub, while YouTube Kids provides a virtuous experience with a controlled algorithm creating a distinct ecosystem for children.


Major discussion point

Private Sector Responsibility and AI Governance


Topics

Human rights | Sociocultural | Cybersecurity



Olga Reis

Speech speed

141 words per minute

Speech length

592 words

Speech time

250 seconds

AI technology is being used effectively for content moderation, with 8.6 million videos removed from YouTube in Q1 2025, 55% before being viewed

Explanation

Google utilizes AI technology to automatically detect and remove content that violates policies before it can cause harm. The majority of problematic content is identified and removed before users can view it, demonstrating the effectiveness of AI-powered content moderation systems.


Evidence

In Q1 2025 (January-March), 8.6 million videos were removed from YouTube for policy violations. 55% were removed before being watched at all, and an additional 27% were removed with less than 10 views, showing early detection capabilities.


Major discussion point

Regulation of Social Media Platforms and Content Moderation


Topics

Cybersecurity | Legal and regulatory | Sociocultural


Agreed with

– Anusha Rahman Ahmad Khan
– Franco Metaza

Agreed on

Social media platforms need to take greater responsibility for content moderation and harm prevention


AI development should be bold, responsible, and collaborative with international community, public sector, and civil society

Explanation

AI represents a transformative once-in-a-generation opportunity, especially for emerging markets, but must be developed and deployed through collaboration with multiple stakeholders. This approach ensures that the technology benefits society while addressing potential risks and concerns.


Evidence

Google’s approach to AI development emphasizes working together with international community, public sector, civil society, and users. Specific mention of AI as transformative technology with particular opportunities for emerging markets.


Major discussion point

Private Sector Responsibility and AI Governance


Topics

Development | Legal and regulatory | Economic


Agreed with

– Anusha Rahman Ahmad Khan
– Ashley Sauls
– Sorina Teleanu

Agreed on

Multi-stakeholder approaches are necessary for effective digital governance



Yogesh Bhattarai

Speech speed

108 words per minute

Speech length

831 words

Speech time

461 seconds

Digital platforms should be regulated but not controlled, requiring cooperation and collaboration rather than strict control

Explanation

Nepal’s approach emphasizes that digital platforms need regulatory frameworks that provide guidance and boundaries without imposing excessive control that could stifle innovation or freedom. The focus should be on collaborative governance that strengthens democratic institutions while ensuring platforms serve public interests.


Evidence

Nepal’s Constitution guarantees freedom of speech and expression, with Article 19 establishing right to information and communication as fundamental rights. Parliament will not accept any law that contradicts constitutional provisions. Nepal has National Information Commission and Press Council Nepal as independent oversight agencies.


Major discussion point

Regulation of Social Media Platforms and Content Moderation


Topics

Legal and regulatory | Human rights | Infrastructure


Agreed with

– Tsvetelina Penkova
– Ashley Sauls
– Sorina Teleanu

Agreed on

Balance between freedom of expression and safety protection is essential and achievable


Disagreed with

– Franco Metaza

Disagreed on

Level of regulation needed for digital platforms


Nepal is currently discussing Social Media Bill and Information Technology Bill while ensuring constitutional compliance with freedom of speech

Explanation

Nepal is developing legislation to address digital challenges while maintaining strict adherence to constitutional protections for freedom of expression and communication rights. The legislative process involves extensive stakeholder consultation to ensure balanced approaches that protect both safety and rights.


Evidence

Social Media Bill and Information Technology Bill currently under discussion. Government has requested suggestions from different stakeholders. Nepal has National Information Commission and Press Council Nepal as independent oversight agencies. MPs participate in programs organized by Civil Society Organizations discussing right to information and communication.


Major discussion point

Legislative Frameworks and Cybercrime Laws


Topics

Legal and regulatory | Human rights | Sociocultural


Constitutional guarantees of freedom of speech must be upheld while addressing legitimate concerns about harmful content

Explanation

Nepal’s legislative approach prioritizes constitutional protections for freedom of speech and expression while acknowledging the need to address misinformation, disinformation, and content that can divide society along caste, religious, racial, and gender lines. The challenge is creating effective governance without compromising fundamental rights.


Evidence

Constitution of Nepal guarantees freedom of speech and expression with Article 19 establishing right to information and communication as fundamental rights. Concerns identified about misinformation and disinformation spreading confusion about caste, religious, racism, gender, and professions, potentially dividing society and impacting national security.


Major discussion point

Balancing Freedom of Expression with Safety and Protection


Topics

Human rights | Legal and regulatory | Sociocultural


Nepal engages youth through national and youth internet governance forums in legislative processes

Explanation

Nepal has established both national and youth-specific internet governance forums that provide platforms for young people to participate in policy discussions and legislative processes. Parliamentarians actively engage with these forums to ensure youth perspectives are incorporated into law-making.


Evidence

Nepal has a national internet governance forum and a youth internet governance forum. Parliamentarians are engaged with these forums to make laws and other legislative processes. Nepal is described as having 125 languages and 125 different castes, making youth engagement particularly important for inclusive policy-making.


Major discussion point

Youth Engagement in Digital Policymaking


Topics

Human rights | Legal and regulatory | Development


Agreed with

– Franco Metaza
– Tsvetelina Penkova
– Ashley Sauls
– Bibek Silwal

Agreed on

Youth engagement in digital policymaking should be meaningful and integrated



Tsvetelina Penkova

Speech speed

137 words per minute

Speech length

1437 words

Speech time

628 seconds

The Digital Services Act serves as flagship EU legislation tackling harmful content, disinformation, and protecting minors and vulnerable groups

Explanation

The DSA represents the EU’s comprehensive approach to regulating digital spaces, addressing multiple challenges including protection of minors and vulnerable groups, cyber violence, harmful content, and disinformation. It serves as the cornerstone of EU digital regulation covering 27 member states.


Evidence

DSA described as flagship EU legislation representing 27 member states. Specifically tackles protecting minors and vulnerable groups, cyber violence, harmful content and disinformation – issues mentioned throughout the day’s discussions.


Major discussion point

Regulation of Social Media Platforms and Content Moderation


Topics

Legal and regulatory | Human rights | Sociocultural


EU has comprehensive digital legislation including DSA, DMA, GDPR, and AI Act working together to create protective frameworks

Explanation

The EU has developed an interconnected system of digital laws where each piece of legislation complements others to create comprehensive protection. The Digital Markets Act ensures fair competition, GDPR protects data rights, and the AI Act provides safeguards for AI development, all working alongside the DSA.


Evidence

Digital Markets Act promotes greater choice for consumers and interoperability. GDPR reinforces individuals’ control over their data while promoting trustworthy data sharing. AI Act described as protecting people and ensuring enough time for consumers to understand risks before wide technology deployment. European Democracy Action Plan and Media Freedom Act address media freedom and pluralism.


Major discussion point

Legislative Frameworks and Cybercrime Laws


Topics

Legal and regulatory | Human rights | Economic


Digital transition must be human-centric while protecting citizens’ rights, requiring balance between innovation and protection

Explanation

The EU’s approach prioritizes human-centric digital transformation that protects citizens’ rights while allowing for innovation and growth. This involves ensuring that technological advancement serves human needs rather than compromising fundamental rights and protections.


Evidence

Four key priorities identified: human-centric digital transformation, combating online hate and disinformation, digital literacy and resilience, and children online safety. EU strategy focuses significantly on digital education as essential for successful legislation and enforcement.


Major discussion point

Balancing Freedom of Expression with Safety and Protection


Topics

Human rights | Development | Legal and regulatory


Agreed with

– Yogesh Bhattarai
– Ashley Sauls
– Sorina Teleanu

Agreed on

Balance between freedom of expression and safety protection is essential and achievable


Young people understand that the digital economy will shrink without proper regulation and want to be actively involved in policy conversations

Explanation

Youth recognize that lack of appropriate regulation will harm the digital economy and actively seek participation in policy-making processes. They bring valuable insights about new trends and are ahead of many legislators in understanding digital developments, making their involvement essential rather than optional.


Evidence

Key messages from youth consultations included that the digital economy will shrink without regulation, children require protection without limiting rights, and bloggers are not restricted in content publishing. Young generation is described as being ahead of legislators on new trends and wanting to be part of conversations.


Major discussion point

Youth Engagement in Digital Policymaking


Topics

Economic | Human rights | Development


Agreed with

– Franco Metaza
– Yogesh Bhattarai
– Ashley Sauls
– Bibek Silwal

Agreed on

Youth engagement in digital policymaking should be meaningful and integrated


Cyber violence decisions should be made by judges rather than governments to prevent abuse of regulatory power

Explanation

To prevent government overreach and protect against potential abuse of cybercrime laws, the EU framework requires that decisions about illegal content removal be made by judicial authorities rather than government officials. This provides an important check on executive power and protects democratic freedoms.


Evidence

In the DSA framework, illegal content must be taken down without delay once detected, but this decision has to be taken by a judge, not by the government. This addresses concerns about cybercrime laws being misused against journalists and opposition voices.


Major discussion point

Balancing Freedom of Expression with Safety and Protection


Topics

Legal and regulatory | Human rights | Cybersecurity


Disagreed with

– Anusha Rahman Ahmad Khan

Disagreed on

Decision-making authority for content removal


A

Ashley Sauls

Speech speed

151 words per minute

Speech length

1066 words

Speech time

423 seconds

South Africa has enacted multiple acts including the Protection of Personal Information Act, the Cybercrimes Act, and the Films and Publications Act

Explanation

South Africa has developed a comprehensive legal framework to address digital governance challenges through multiple pieces of legislation that regulate different aspects of digital activity. These laws work together to provide protection for personal information, address cybercrime, and regulate digital publications.


Evidence

Specific mention of the Protection of Personal Information Act, the Cybercrimes Act, and the Films and Publications Act as enacted legislation regulating digital platforms and activities in South Africa.


Major discussion point

Legislative Frameworks and Cybercrime Laws


Topics

Legal and regulatory | Human rights | Cybersecurity


Disinformation about South Africa led to international consequences, including cancelled educational exchanges and reinforced negative stereotypes

Explanation

False information spread online about South Africa, including claims about white minority genocide and stereotypes about racial groups, influenced international decisions and relationships. This demonstrates how disinformation can have real-world diplomatic and social consequences beyond national borders.


Evidence

US executive decision was largely based on online disinformation about white minority genocide in South Africa. This fueled narratives about the ‘coloured’ racial group as violent and gangster-ridden. A rugby match between Atlanta Secondary School and Liff Burra Grammar School from the UK was cancelled because parents feared for safety based on what was said in the White House about South Africa.


Major discussion point

Harmful Content and Its Impact on Vulnerable Groups


Topics

Sociocultural | Human rights | Cybersecurity


Multi-stakeholder approach engaging government, civil society, private sector, and public is essential for inclusive digital economy

Explanation

South Africa recognizes that effective digital governance requires collaboration between all sectors of society rather than top-down government regulation alone. This collaborative model ensures that benefits of digital transformation are equitably distributed and innovation addresses local challenges.


Evidence

South Africa has adopted a multi-stakeholder approach engaging government, civil society, private sector, and public. This collaborative model is described as essential for fostering inclusive digital economy where benefits are equitably distributed and innovation addresses local challenges.


Major discussion point

International Cooperation and Multi-stakeholder Approaches


Topics

Development | Economic | Legal and regulatory


Agreed with

– Anusha Rahman Ahmad Khan
– Olga Reis
– Sorina Teleanu

Agreed on

Multi-stakeholder approaches are necessary for effective digital governance


Policies must safeguard both security and fundamental human rights without infringing on privacy and freedom of expression

Explanation

South Africa’s approach to digital governance emphasizes that security measures and human rights protection are not mutually exclusive. The country is committed to creating policies that enhance cybersecurity while maintaining strong protections for privacy and freedom of expression.


Evidence

South Africa is committed to aligning digital policies with international standards, promoting a balanced approach that safeguards both security and fundamental human rights. Emphasis on a proactive rather than reactive approach to digital governance that is inclusive, secure, and respects the rights of all citizens.


Major discussion point

Balancing Freedom of Expression with Safety and Protection


Topics

Human rights | Legal and regulatory | Cybersecurity


Agreed with

– Yogesh Bhattarai
– Tsvetelina Penkova
– Sorina Teleanu

Agreed on

Balance between freedom of expression and safety protection is essential and achievable


South Africa is beginning to emphasize youth involvement more with the new government of national unity structure

Explanation

While South Africa previously had limited youth involvement in digital policy, the shift from single-party majority government to a coalition government of national unity has created opportunities for different approaches that include youth voices. This change is already showing practical results in internet governance leadership.


Evidence

South Africa had a one-party majority government from 1994 and now has a government of national unity (a coalition government) in which different approaches come together. Both the chairperson and deputy chairperson of the South African IGF are young people present at the conference. This is the first time parliamentarians are participating in the IGF, a result of this different approach that includes youth voices.


Major discussion point

Youth Engagement in Digital Policymaking


Topics

Human rights | Development | Legal and regulatory


Agreed with

– Franco Metaza
– Tsvetelina Penkova
– Yogesh Bhattarai
– Bibek Silwal

Agreed on

Youth engagement in digital policymaking should be meaningful and integrated


There should be balance between people’s well-being and profits, with attention to preventing digital apartheid and racial profiling

Explanation

From an African perspective, AI development must consider the risk of perpetuating historical discrimination through digital means. South Africa’s apartheid history makes it particularly sensitive to the possibility that AI training could embed racial biases that create new forms of digital discrimination.


Evidence

South Africa’s apartheid history creates unique concerns about AI training containing risks of digital apartheid. Example given of AI potentially profiling based on historic racial separation, with concern that darker-skinned individuals might face discriminatory profiling. Warning that if AI repeats this ugly history, we cannot rejoice in a digital future.


Major discussion point

Private Sector Responsibility and AI Governance


Topics

Human rights | Development | Sociocultural


R

Raoul Danniel Abellar Manuel

Speech speed

163 words per minute

Speech length

139 words

Speech time

51 seconds

The Philippines’ Cybercrime Prevention Act (2012) contains problematic cyber libel provisions that have been misused against journalists and teachers

Explanation

The Philippines’ cybercrime law includes cyber libel provisions that were anticipated to be problematic and have indeed been abused by repressive leaders to target journalists and teachers who express opinions contradicting the government. This demonstrates how cybercrime laws can be weaponized against legitimate expression.


Evidence

Cybercrime Prevention Act passed in 2012 with cyber libel provision that was flagged years ago as potentially problematic. It has been used by abusive or repressive leaders to go after journalists and even teachers who had opinions contradicting the government. Philippines is now discussing possible review and amendments to the law.


Major discussion point

Legislative Frameworks and Cybercrime Laws


Topics

Legal and regulatory | Human rights | Cybersecurity


B

Bibek Silwal

Speech speed

222 words per minute

Speech length

185 words

Speech time

50 seconds

Youth serve as positive catalysts in policy implementation and should be involved from initial policymaking through public outreach

Explanation

Youth involvement amplifies the impact of policymaking processes, whether in initial policy development or implementation phases. Young people serve as effective bridges to reach end-mile communities and enhance the overall effectiveness of policy initiatives through their engagement and outreach capabilities.


Evidence

Every process where youth are involved amplifies the impact of policymaking, whether in implementation or initial policymaking process. Youth are described as positive catalysts for reaching out to the end mile and enhancing policy effectiveness.


Major discussion point

Youth Engagement in Digital Policymaking


Topics

Development | Human rights | Legal and regulatory


Agreed with

– Franco Metaza
– Tsvetelina Penkova
– Yogesh Bhattarai
– Ashley Sauls

Agreed on

Youth engagement in digital policymaking should be meaningful and integrated


Disagreed with

– Franco Metaza

Disagreed on

Approach to youth engagement in policymaking


A

Audience

Speech speed

158 words per minute

Speech length

442 words

Speech time

166 seconds

UN should recognize digital rights as the fourth generation of human rights to provide common framework for legislation

Explanation

The UN has historically recognized three generations of human rights (civil and political; economic and social; collective environmental rights) which have been incorporated into national constitutions. Digital rights should be formally recognized as a fourth generation to provide legislators with common technical frameworks and instruments for creating effective digital legislation.


Evidence

UN recognized civil and political rights as first generation (1945-1948), economic and social rights as second generation, and collective environmental rights as third generation. Many countries have incorporated these into their constitutions based on UN frameworks. Digital rights lack this formal recognition, making it difficult for legislators to have common technical instruments for lawmaking.


Major discussion point

International Cooperation and Multi-stakeholder Approaches


Topics

Human rights | Legal and regulatory | Development


A

Anne McCormick

Speech speed

141 words per minute

Speech length

318 words

Speech time

135 seconds

Private sector needs clarity on reliable AI and liability frameworks as AI adoption spreads across different economic actors

Explanation

As AI technology becomes embedded across various sectors and company sizes, there is growing concern among business leaders, board members, investors, and insurance companies about AI governance, liability, and risk management. Small and large enterprises alike need clear frameworks to deploy AI with confidence while managing potential risks to their operations and reputation.


Evidence

Ernst & Young works with clients across different sectors and countries, observing need for clarity on reliable AI. Company leaders, board members, investors and insurance companies are asking about AI limits, risks, and potential liability. Concerns about deploying AI while maintaining employee safety, client relationships, and company reputation.


Major discussion point

Private Sector Responsibility and AI Governance


Topics

Economic | Legal and regulatory | Development


Independent oversight and transparency mechanisms are needed to ensure accountability throughout AI lifecycle

Explanation

Rather than over-regulation or under-regulation, there should be appropriate mechanisms that encourage disclosure, transparency, and accountability for AI systems throughout their development and deployment lifecycle. This includes independent oversight and assessment to ensure all economic actors can use AI technology with confidence.


Evidence

Emphasis on not over-regulating or under-regulating, but having right mechanisms to encourage disclosure and transparency rather than black box approaches. Need for independent oversight, possibly independent assurance or assessments, so everyone can use AI technology with confidence and get the best outcomes.


Major discussion point

Private Sector Responsibility and AI Governance


Topics

Legal and regulatory | Economic | Development


A

Amy Mitchell

Speech speed

173 words per minute

Speech length

223 words

Speech time

77 seconds

S

Sorina Teleanu

Speech speed

224 words per minute

Speech length

1102 words

Speech time

294 seconds

Enforcement of digital laws is challenging and requires empowered national authorities to put legislation into practice

Explanation

Having laws in place is only the first step – the real challenge lies in ensuring that national authorities have the capacity and resources to actually implement and enforce these digital regulations effectively. This is a common challenge across countries, including in the EU region.


Evidence

Reference to challenges in Romania and Bulgaria where it’s not easy to put digital legislation into practice, despite having comprehensive EU frameworks like DSA, DMA, and GDPR in place.


Major discussion point

Legislative Frameworks and Cybercrime Laws


Topics

Legal and regulatory | Development


Critical thinking and digital literacy education should benefit all users, not just young people

Explanation

While there is significant focus on educating young users about digital technologies, all users across age groups would benefit from improved critical thinking skills when interacting with digital platforms and content. Digital literacy should be a universal priority.


Evidence

Acknowledgment of excellent points on literacy, capacity building, and education for building critical thinking in young users, with extension that ‘we would all benefit from a bit more critical thinking when we interact with digital technologies.’


Major discussion point

Youth Engagement in Digital Policymaking


Topics

Development | Human rights | Sociocultural


Safety and security protection can coexist with human rights protection without requiring trade-offs

Explanation

There is a false dichotomy in thinking that protecting safety and security requires giving up human rights protections, or vice versa. Effective digital governance should achieve both objectives simultaneously through balanced approaches.


Evidence

Highlighting speaker comments about finding balance between protecting safety and security while ensuring protection of human rights, emphasizing ‘we don’t have to give one up to protect the other and the other way around.’


Major discussion point

Balancing Freedom of Expression with Safety and Protection


Topics

Human rights | Legal and regulatory | Cybersecurity


Agreed with

– Yogesh Bhattarai
– Tsvetelina Penkova
– Ashley Sauls

Agreed on

Balance between freedom of expression and safety protection is essential and achievable


Technology platforms should engage in discussions with legislators during the law-making process

Explanation

As countries develop digital legislation, it’s important to have dialogue and engagement with technology platforms to ensure that regulations are practical and effective. This interaction helps create more informed and implementable laws.


Evidence

Question posed to speakers about ‘how are you interacting with technology platforms as you’re working on this legislation in the country? Do you have any discussions with them, how is the relation being?’


Major discussion point

International Cooperation and Multi-stakeholder Approaches


Topics

Legal and regulatory | Economic


Agreed with

– Anusha Rahman Ahmad Khan
– Ashley Sauls
– Olga Reis

Agreed on

Multi-stakeholder approaches are necessary for effective digital governance


Agreements

Agreement points

Social media platforms need to take greater responsibility for content moderation and harm prevention

Speakers

– Anusha Rahman Ahmad Khan
– Franco Metaza
– Olga Reis

Arguments

Social media platforms prioritize revenue over cultural sensitivity and fail to respond adequately to government requests for content removal, leading to serious harm including suicide cases


Companies like Google can do more than they are currently doing to tackle harmful content, as they have the budget and resources but are not making sufficient efforts


AI technology is being used effectively for content moderation, with 8.6 million videos removed from YouTube in Q1 2025, 55% before being viewed


Summary

All speakers agree that social media platforms have both the capability and responsibility to do more in addressing harmful content, though they approach it from different perspectives – regulatory demands, parliamentary consensus, and industry acknowledgment of current efforts.


Topics

Legal and regulatory | Cybersecurity | Sociocultural


Balance between freedom of expression and safety protection is essential and achievable

Speakers

– Yogesh Bhattarai
– Tsvetelina Penkova
– Ashley Sauls
– Sorina Teleanu

Arguments

Digital platforms should be regulated but not controlled, requiring cooperation and collaboration rather than strict control


Digital transition must be human-centric while protecting citizens’ rights, requiring balance between innovation and protection


Policies must safeguard both security and fundamental human rights without infringing on privacy and freedom of expression


Safety and security protection can coexist with human rights protection without requiring trade-offs


Summary

Speakers consistently emphasize that protecting safety and security does not require sacrificing freedom of expression or human rights, and that balanced regulatory approaches can achieve both objectives simultaneously.


Topics

Human rights | Legal and regulatory | Cybersecurity


Youth engagement in digital policymaking should be meaningful and integrated

Speakers

– Franco Metaza
– Tsvetelina Penkova
– Yogesh Bhattarai
– Ashley Sauls
– Bibek Silwal

Arguments

Youth participation should be transversal across all policy-making rather than segregated into youth-only discussions


Young people understand that the digital economy will shrink without proper regulation and want to be actively involved in policy conversations


Nepal engages youth through national and youth internet governance forums in legislative processes


South Africa is beginning to emphasize youth involvement more with the new government of national unity structure


Youth serve as positive catalysts in policy implementation and should be involved from initial policymaking through public outreach


Summary

All speakers agree that youth should be meaningfully integrated into digital policymaking processes rather than marginalized, recognizing their unique insights and catalytic role in policy implementation.


Topics

Human rights | Development | Legal and regulatory


Multi-stakeholder approaches are necessary for effective digital governance

Speakers

– Anusha Rahman Ahmad Khan
– Ashley Sauls
– Olga Reis
– Sorina Teleanu

Arguments

Parliamentarians should create joint strategies to collectively address social media platforms and protect vulnerable citizens globally


Multi-stakeholder approach engaging government, civil society, private sector, and public is essential for inclusive digital economy


AI development should be bold, responsible, and collaborative with international community, public sector, and civil society


Technology platforms should engage in discussions with legislators during the law-making process


Summary

Speakers consistently advocate for collaborative approaches involving multiple stakeholders rather than unilateral action by any single actor, recognizing the complexity of digital governance challenges.


Topics

Legal and regulatory | Development | Economic


Similar viewpoints

Both speakers from developing countries (Pakistan and Argentina) share concerns about social media platforms’ inadequate response to harmful content that leads to real-world violence and harm, particularly affecting vulnerable populations.

Speakers

– Anusha Rahman Ahmad Khan
– Franco Metaza

Arguments

Social media platforms prioritize revenue over cultural sensitivity and fail to respond adequately to government requests for content removal, leading to serious harm including suicide cases


Disinformation can lead to real-world violence, as seen with assassination attempts on political leaders fueled by fake news and hate messages


Topics

Cybersecurity | Human rights | Sociocultural


Both speakers recognize the risk of cybercrime laws being abused by governments and emphasize the need for judicial oversight to prevent misuse against legitimate expression and press freedom.

Speakers

– Tsvetelina Penkova
– Raoul Danniel Abellar Manuel

Arguments

Cyber violence decisions should be made by judges rather than governments to prevent abuse of regulatory power


The Philippines’ Cybercrime Prevention Act (2012) contains problematic cyber libel provisions that have been misused against journalists and teachers


Topics

Legal and regulatory | Human rights | Cybersecurity


Both speakers emphasize the need for responsible AI development that considers broader societal impacts beyond profit motives, with attention to equity and clear governance frameworks.

Speakers

– Ashley Sauls
– Anne McCormick

Arguments

There should be balance between people’s well-being and profits, with attention to preventing digital apartheid and racial profiling


Private sector needs clarity on reliable AI and liability frameworks as AI adoption spreads across different economic actors


Topics

Human rights | Economic | Development


Unexpected consensus

Private sector acknowledgment of need for greater responsibility

Speakers

– Franco Metaza
– Olga Reis

Arguments

Companies like Google can do more than they are currently doing to tackle harmful content, as they have the budget and resources but are not making sufficient efforts


AI technology is being used effectively for content moderation, with 8.6 million videos removed from YouTube in Q1 2025, 55% before being viewed


Explanation

It’s unexpected to see a Google representative (Olga Reis) essentially agreeing with parliamentary criticism by acknowledging current efforts while implicitly accepting that more can be done, rather than defending current practices as sufficient.


Topics

Legal and regulatory | Cybersecurity | Economic


Recognition of digital rights as fundamental human rights requiring formal framework

Speakers

– Audience
– Tsvetelina Penkova

Arguments

UN should recognize digital rights as the fourth generation of human rights to provide common framework for legislation


Digital transition must be human-centric while protecting citizens’ rights, requiring balance between innovation and protection


Explanation

The consensus between a civil society representative calling for formal UN recognition of digital rights and an EU parliamentarian’s human-centric approach suggests growing recognition that digital rights need formal international frameworks similar to other human rights generations.


Topics

Human rights | Legal and regulatory | Development


Overall assessment

Summary

The discussion revealed strong consensus on several key areas: the need for greater platform responsibility, balanced approaches to regulation that protect both safety and freedom, meaningful youth engagement, and multi-stakeholder governance. Speakers consistently emphasized human-centric approaches to digital governance.


Consensus level

High level of consensus on fundamental principles, with differences mainly in implementation approaches rather than core objectives. This suggests potential for collaborative international action on digital governance frameworks, particularly around platform accountability, youth engagement, and balanced regulatory approaches that protect both safety and human rights.


Differences

Different viewpoints

Approach to youth engagement in policymaking

Speakers

– Franco Metaza
– Bibek Silwal

Arguments

Youth participation should be transversal across all policy-making rather than segregated into youth-only discussions


Youth serve as positive catalysts in policy implementation and should be involved from initial policymaking through public outreach


Summary

Franco Metaza argues against segregating youth into separate discussions, preferring transversal integration across all policy areas. Bibek Silwal advocates for dedicated youth involvement and specialized engagement processes.


Topics

Human rights | Legal and regulatory | Development


Level of regulation needed for digital platforms

Speakers

– Yogesh Bhattarai
– Franco Metaza

Arguments

Digital platforms should be regulated but not controlled, requiring cooperation and collaboration rather than strict control


Regulation through democratic parliaments representing all social and political expressions will never go against freedom, similar to traffic laws for vehicles


Summary

Bhattarai emphasizes light-touch regulation focused on cooperation, while Metaza supports stronger parliamentary regulation, comparing it to necessary traffic laws.


Topics

Legal and regulatory | Human rights


Decision-making authority for content removal

Speakers

– Anusha Rahman Ahmad Khan
– Tsvetelina Penkova

Arguments

Social media platforms prioritize revenue over cultural sensitivity and fail to respond adequately to government requests for content removal, leading to serious harm including suicide cases


Cyber violence decisions should be made by judges rather than governments to prevent abuse of regulatory power


Summary

Khan advocates for stronger government authority over content removal decisions, while Penkova insists judicial oversight is necessary to prevent government overreach.


Topics

Legal and regulatory | Human rights | Cybersecurity


Unexpected differences

Private sector engagement and criticism

Speakers

– Franco Metaza
– Olga Reis

Arguments

YouTube Kids provides a successful model of controlled digital ecosystem for children that other platforms should emulate


AI technology is being used effectively for content moderation, with 8.6 million videos removed from YouTube in Q1 2025, 55% before being viewed


Explanation

Unexpectedly, Metaza praised Google’s YouTube Kids as a virtuous model while simultaneously criticizing Google for allowing defamatory content about Argentine political leaders in search results. This shows the complex relationship between acknowledging good practices and holding companies accountable for failures.


Topics

Legal and regulatory | Sociocultural | Human rights


Constitutional vs. practical approaches to digital rights

Speakers

– Yogesh Bhattarai
– Audience

Arguments

Constitutional guarantees of freedom of speech must be upheld while addressing legitimate concerns about harmful content


UN should recognize digital rights as the fourth generation of human rights to provide common framework for legislation


Explanation

While both support strong digital rights protection, they disagree on whether existing constitutional frameworks are sufficient (Bhattarai) or whether new international frameworks are needed (Audiance). This represents a fundamental disagreement about legal foundations for digital governance.


Topics

Human rights | Legal and regulatory | Development


Overall assessment

Summary

The main areas of disagreement center on regulatory approaches (light-touch cooperation vs. stronger parliamentary control), decision-making authority (government vs. judicial oversight), youth engagement methods (integrated vs. specialized), and legal frameworks (existing constitutional vs. new international instruments).


Disagreement level

Moderate disagreement level with significant implications. While speakers share common goals of protecting vulnerable groups and balancing rights with safety, their different approaches could lead to incompatible policy frameworks. The disagreements reflect deeper tensions between national sovereignty and international coordination, government authority and judicial independence, and regulatory approaches across different legal and cultural contexts.


Takeaways

Key takeaways

Social media platforms prioritize revenue over cultural sensitivity and public safety, often failing to respond adequately to government requests for harmful content removal


Effective content moderation requires a balance between protecting vulnerable groups and preserving freedom of expression, with decisions ideally made by judicial rather than governmental authorities


Legislative frameworks must be developed through multi-stakeholder collaboration including parliamentarians, civil society, NGOs, and media to ensure comprehensive protection while upholding democratic values


Youth engagement in digital policymaking should be transversal across all policy areas rather than segregated, as young people understand digital challenges and want active involvement in solutions


International cooperation and joint parliamentary strategies are essential for addressing global digital challenges that transcend national boundaries


AI technology shows promise for automated content moderation but requires responsible development with attention to preventing digital discrimination and ensuring transparency


Digital rights may need formal recognition as a fourth generation of human rights to provide a common international framework for legislation


Capacity building for parliamentarians and public officials is crucial for effective digital governance and understanding emerging technologies


Resolutions and action items

Parliamentarians should create joint strategies to collectively address social media platforms and protect vulnerable citizens globally


UN should consider recognizing digital rights as the fourth generation of human rights to provide common legislative framework


Private sector should increase efforts and investment in tackling harmful content despite having adequate resources


Governments should engage with technology platforms during legislation development processes


Educational institutions and civil society should launch campaigns teaching youth to identify fake news and develop critical thinking skills


Capacity building programs for public officials should be expanded, including Google’s AI Campus training program


YouTube Kids model of controlled digital ecosystem should be adopted by other social media platforms for child protection


Unresolved issues

How to make social media platforms more culturally sensitive and responsive to local government requests for content removal


Enforcement challenges at national levels for implementing comprehensive digital legislation frameworks


Definitional language around media freedom and journalism in the digital age as content creators diversify beyond traditional media


Prevention of legislative abuse by future governments that might use cybercrime laws to suppress opposition voices


Addressing digital apartheid and racial profiling risks in AI training and deployment


Balancing innovation and economic growth with necessary protective regulations


Establishing liability frameworks for AI adoption across different economic sectors beyond large tech companies


Creating effective mechanisms for cross-border enforcement of digital rights and content moderation


Suggested compromises

Digital platforms should be regulated but not controlled, emphasizing cooperation and collaboration over strict governmental control


Cybercrime legislation should focus on protecting vulnerable groups while ensuring judicial rather than governmental oversight of content decisions


AI development should proceed boldly but responsibly through collaboration between private sector, government, civil society and international community


Content moderation should combine automated AI systems with human oversight to balance efficiency with cultural sensitivity


Legislative frameworks should align with international standards while addressing local cultural and social contexts


Private sector should engage in capacity building and transparency initiatives while maintaining innovation and competitive dynamics


Thought provoking comments

It’s not a fight between East or the West. It’s a fight between revenue generation entities versus a revenue curbing request.

Speaker

Anusha Rahman Ahmad Khan


Reason

This comment reframes the entire debate about content moderation from a geopolitical or cultural clash to an economic one. It cuts through diplomatic language to identify the core tension: platforms prioritize profit over cultural sensitivity and user safety. This insight is particularly powerful because it moves beyond abstract discussions of rights to concrete economic incentives.


Impact

This comment established a recurring theme throughout the discussion about private sector responsibility. Multiple subsequent speakers referenced the need for platforms to do more, and it influenced Franco Metaza’s later assertion that ‘companies can do more than what they are doing’ and his criticism of revenue-driven content decisions.


I think that today the permanent scrolling that we are all subjected to is as much or more harmful as going at full speed with a vehicle without knowing what is in front of us or without having a traffic light.

Speaker

Franco Metaza


Reason

This metaphor brilliantly captures the unregulated nature of social media consumption and its potential dangers. By comparing social media scrolling to reckless driving, it makes the abstract concept of digital harm tangible and relatable, while also providing a framework for understanding why regulation isn’t about restricting freedom but ensuring safety.


Impact

This metaphor was specifically noted by the moderator and became a reference point for discussing the legitimacy of digital regulation. It helped shift the conversation from whether regulation is needed to how it should be implemented, making the case that just as we accept traffic rules for safety, we should accept digital rules.


Digital platforms should be regulated, not controlled. Cooperation, collaboration, and solidarity should be strengthened.

Speaker

Yogesh Bhattarai


Reason

This distinction between regulation and control is crucial in the digital governance debate. It acknowledges the need for oversight while respecting democratic principles and avoiding authoritarian overreach. The comment provides a nuanced middle ground between laissez-faire and heavy-handed government intervention.


Impact

This comment influenced the moderator’s follow-up questions about how countries interact with tech platforms and helped establish a framework for discussing responsible governance approaches. It contributed to the overall theme of finding balance between protection and freedom.


We are now tired of waiting and I would urge and request all the other parliamentarians to come together to make a joint strategy where we can collectively speak to the social media platforms.

Speaker

Anusha Rahman Ahmad Khan


Reason

This call for collective action represents a shift from individual national approaches to coordinated international pressure on tech platforms. It recognizes that platforms operate globally while governments act locally, creating an inherent power imbalance that can only be addressed through cooperation.


Impact

This comment sparked discussion about regional cooperation, with Franco Metaza confirming consensus in Mercosur about platform responsibility, and influenced later questions about African Union initiatives. It helped establish the theme of multilateral approaches to digital governance.


If the United Nations takes this step, you will notice that many countries will follow, and perhaps it will even be enshrined in our constitutions and our laws… we need a common element, a common instrument, as we have with the three generations of human rights, and with a generation of digital rights.

Speaker

Honorable member from the Democratic Republic of Congo


Reason

This intervention fundamentally challenges the current human rights framework by proposing digital rights as a fourth generation of human rights. It’s intellectually rigorous, drawing on the historical evolution of rights recognition, and identifies a systemic gap in how we conceptualize digital governance within established human rights frameworks.


Impact

This comment prompted immediate agreement from Tsvetelina Penkova, who acknowledged it would resolve enforcement issues. It elevated the discussion from practical policy implementation to fundamental questions about the nature of rights in the digital age, representing one of the most conceptually ambitious contributions to the session.


There should be a balance also around the importance of the well-being of people and profits… we’ve realized that in the AI training that there is still the presence of a risk of what I would call digital apartheid.

Speaker

Ashley Sauls


Reason

This comment introduces the concept of ‘digital apartheid’ and connects AI bias to historical injustices, making the discussion more concrete and urgent. It challenges the tech industry’s narrative of progress by highlighting how AI systems can perpetuate and amplify existing inequalities, particularly affecting marginalized communities.


Impact

This was one of the final substantive comments and served as a powerful counterpoint to the earlier private sector presentation about AI benefits. It grounded the abstract discussion of AI governance in lived experience and historical context, emphasizing that technological advancement without equity considerations can reproduce historical injustices.


Overall assessment

These key comments fundamentally shaped the discussion by moving it beyond surface-level policy debates to deeper structural questions. The session evolved from individual country experiences to systemic analysis of power dynamics between governments and platforms, the need for international cooperation, and the fundamental question of how to conceptualize rights in the digital age. The economic framing of platform behavior, the traffic regulation metaphor, and the digital apartheid concept provided concrete ways to understand abstract policy challenges. The call for collective action and the proposal for a fourth generation of human rights elevated the discussion to consider both practical coordination mechanisms and foundational legal frameworks. Together, these comments created a progression from problem identification to systemic analysis to potential solutions, while maintaining focus on protecting vulnerable populations and democratic values.


Follow-up questions

How long will social media platforms continue to take to listen to governments and their requests to remove objectionable content and secure vulnerable groups online?

Speaker

Anusha Rahman Ahmad Khan


Explanation

This addresses the ongoing challenge of platform responsiveness to government content removal requests, particularly for protecting vulnerable populations like women and children from harassment and harmful content.


How can social media platforms be made more sensitive to different cultural contexts when making content moderation decisions?

Speaker

Anusha Rahman Ahmad Khan


Explanation

This highlights the need for culturally-aware content moderation, as platforms currently make uniform decisions without considering local cultural sensitivities that could have severe consequences for users.


How can parliamentarians develop a joint strategy to collectively speak to social media platforms about protecting vulnerable citizens?

Speaker

Anusha Rahman Ahmad Khan


Explanation

This suggests the need for coordinated international parliamentary action to increase leverage when dealing with global technology platforms on content moderation issues.


Are there collaborative efforts across Mercosur countries to deal with harmful online content through awareness and capacity building, not just legislation?

Speaker

Sorina Teleanu


Explanation

This explores whether regional parliamentary bodies are taking comprehensive approaches beyond just legal frameworks to address online harms through user education and preparedness.


How are countries interacting with technology platforms while working on digital legislation?

Speaker

Sorina Teleanu


Explanation

This addresses the important process question of stakeholder engagement and dialogue between governments and platforms during the legislative development process.


How is the AI Act connecting to creating safer online environments and increasing transparency from private actors?

Speaker

Sorina Teleanu


Explanation

This explores the intersection between AI regulation and content safety, particularly regarding platform transparency obligations and automated content moderation systems.


Are there examples of African Union-level initiatives dealing with digital governance and online safety issues?

Speaker

Sorina Teleanu


Explanation

This seeks to understand regional cooperation mechanisms in Africa for addressing digital policy challenges at a continental level.


How can youth be more effectively involved in digital policymaking processes across different regions?

Speaker

Bibek Silwal


Explanation

This addresses the need for meaningful youth participation in policy development, recognizing that young people are both primary users of digital technologies and key stakeholders in implementation.


What are concrete experiences and best practices for combating cybercrime while avoiding abuse of cybercrime laws by repressive governments?

Speaker

Raoul Danniel Abellar Manuel


Explanation

This addresses the critical balance between effective cybercrime legislation and preventing authoritarian misuse of such laws to suppress dissent and free expression.


Should the UN recognize digital rights as a fourth generation of human rights to provide clearer framework for national legislation?

Speaker

Representative from Democratic Republic of Congo


Explanation

This proposes a systematic approach to digital rights recognition that could provide clearer guidance for national constitutional and legal frameworks worldwide.


How should definitional language around journalism and media freedom be developed in digital legislation to account for diverse content producers?

Speaker

Amy Mitchell


Explanation

This addresses the challenge of defining protected journalistic activity in an era where content production has expanded beyond traditional media to include citizen journalists and diverse digital creators.


How can enforcement mechanisms be designed to prevent future governments from misusing digital legislation for authoritarian purposes?

Speaker

Amy Mitchell


Explanation

This addresses the need for robust institutional safeguards that can withstand changes in government and prevent the weaponization of digital laws against civil society.


How can AI training and deployment address risks of digital apartheid and historical bias, particularly affecting marginalized communities?

Speaker

Ashley Sauls


Explanation

This highlights the critical need to address how AI systems may perpetuate or amplify existing social inequalities and discrimination, particularly in post-apartheid contexts.


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Parliamentary Session 3 Click with Care Protecting Vulnerable Groups Online

Parliamentary Session 3 Click with Care Protecting Vulnerable Groups Online

Session at a glance

Summary

This discussion focused on protecting vulnerable groups online, bringing together parliamentarians, regulators, and advocacy experts from various countries to examine legislative and regulatory responses to digital harm. The panel explored how marginalized communities, particularly in the Global South, face unique online safety challenges that existing frameworks often fail to address adequately.


Neema Iyer from Uganda highlighted research showing that one in three African women experience online violence, often leading them to delete their digital presence due to lack of awareness about reporting mechanisms and fear of not being heard by authorities. She emphasized how intersecting inequalities, language barriers, and the normalization of abuse create complex challenges that narrow legislative frameworks cannot fully address. Raoul Manuel from the Philippines discussed recent legislative measures including extraterritorial jurisdiction for child exploitation cases and expanded anti-violence bills, while noting the economic factors that drive children into exploitation.


Malaysian Deputy Minister Teo Nie Ching outlined her country’s holistic approach combining digital inclusion, robust legal frameworks, and multi-stakeholder collaboration, but acknowledged enforcement challenges with major platforms like Meta and Google refusing to comply with licensing requirements. Nighat Dad from Pakistan described the rise of AI-generated deepfake content and highlighted disparities in platform responses between Global North and South users, noting that non-public figures receive delayed or no response to abuse reports.


Arda Gerkens from the Netherlands discussed balancing human rights with content removal powers, revealing concerning trends of hybrid threats where terrorist groups target vulnerable children through mental health channels. Sandra Maximiano from Portugal introduced behavioral economics perspectives, explaining how cognitive biases affect online decision-making and can be leveraged to promote safer behaviors through design interventions and nudges.


The discussion revealed consensus that content takedowns alone are insufficient, with panelists advocating for greater platform transparency, algorithmic accountability, proactive design measures, and coordinated international responses. The session concluded with calls for global cooperation among regulators and recognition that protecting vulnerable groups online requires addressing both technological and human factors through multi-stakeholder collaboration.


Keypoints

## Major Discussion Points:


– **Unique challenges faced by marginalized communities in the Global South**: Discussion of intersecting inequalities, digital literacy gaps, language barriers, normalization of abuse, and how existing laws are often weaponized against the very groups they’re meant to protect, particularly women and marginalized communities.


– **Legislative and regulatory responses across different jurisdictions**: Panelists shared specific examples from the Philippines, Malaysia, Netherlands, Pakistan, and Portugal, highlighting both successful measures (like extraterritorial jurisdiction for child exploitation) and enforcement challenges, particularly with major tech platforms refusing to comply with local regulations.


– **Platform accountability beyond content takedowns**: Extensive discussion on the need for platforms to be more proactive, including algorithm transparency, improved reporting mechanisms, design friction to prevent harmful content sharing, and the importance of addressing root sources rather than just reactive content removal.


– **Behavioral economics and human-centered approaches**: Introduction of how cognitive biases affect online behavior and how regulators can use behavioral insights to nudge safer online practices, along with emphasis on addressing offline social structures and community-based solutions.


– **Need for coordinated global response**: Strong consensus that individual countries lack sufficient negotiating power with tech giants, leading to calls for regional blocs (like ASEAN) and international cooperation through networks like the Global Online Safety Regulators Network (GOSRN).


## Overall Purpose:


The discussion aimed to bring together diverse stakeholders (parliamentarians, regulators, and advocacy experts) to examine how to better protect vulnerable groups online, share experiences across different jurisdictions, and develop more targeted, inclusive, and enforceable policy responses to online harms.


## Overall Tone:


The discussion maintained a collaborative and constructive tone throughout, with panelists building on each other’s insights rather than debating. There was a shared sense of urgency about the challenges, but also cautious optimism about potential solutions. The tone became increasingly focused on practical cooperation and concrete next steps toward the end, culminating in calls for international coordination and the promotion of existing collaborative networks.


Speakers

**Speakers from the provided list:**


– **Alishah Shariff** – Moderator, works at Nominet (the .UK domain name registry)


– **Neema Iyer** – Founder of Pollicy, a feminist organization based in Kampala, Uganda; works on feminist digital rights issues including online violence, gender disinformation, and AI impact on women; member of Meta’s Women’s Safety Board


– **Raoul Danniel Abellar Manuel** – Elected member of Parliament in the Philippines, representing the Youth Party; former student government and union activist


– **Teo Nie Ching** – Deputy Minister of Communication, Malaysia; appointed in December 2022; previously served as Deputy Minister of Education in 2018; mother of three


– **Nighat Dad** – Founder of Digital Rights Foundation, a woman-led organization based in Pakistan; focuses on digital rights, gender justice, online safety, and freedom of expression; serves on the Meta Oversight Board


– **Arda Gerkens** – President of the regulatory body of online content, terrorist content, and child sex abuse material (ATKM) in the Netherlands; former member of Parliament (8 years) and senator (10 years)


– **Sandra Maximiano** – Chairwoman of ANACOM, the Portuguese National Authority for Communications; Digital Services Coordinator; economist specialized in behavioral and experimental economics


– **Anusha Rahman Khan** – Former Minister for Information Technology and Telecommunications; enacted cyber crime law in 2016; currently chairs standing committee on information technology


– **Andrew Campling** – Runs a consultancy; trustee of the Internet Watch Foundation


– **Audience** – General audience member asking questions


**Additional speakers:**


– **John Kiariye** – From Kenya; made comments about human-centered design and community-based approaches


Full session report

# Protecting Vulnerable Groups Online: A Multi-Stakeholder Discussion on Digital Safety and Platform Accountability


## Executive Summary


This comprehensive discussion brought together parliamentarians, regulators, and digital rights advocates from across the globe to examine the complex challenges of protecting vulnerable groups in digital spaces. Moderated by Alishah Shariff from Nominet (.UK domain name registry), the panel featured diverse perspectives from Uganda, the Philippines, Malaysia, Pakistan, the Netherlands, Portugal, and Kenya, highlighting both the universal nature of online harm and the unique contextual challenges faced by different regions.


The conversation revealed that while online platforms have transformed communication and access to information, they have simultaneously created new vectors for harm that disproportionately affect marginalised communities, particularly women and children. Key challenges identified included the inadequacy of reactive content moderation, geographic inequalities in platform responses, the rise of AI-generated harmful content, and the weaponisation of protective legislation against the very groups it aims to protect. The discussion moved beyond traditional approaches to explore innovative solutions rooted in behavioural economics, community-based interventions, and coordinated international responses.


## Opening Context and Participant Introductions


The session, titled “Click With Care, Protecting Vulnerable Groups Online,” was part of the Internet Governance Forum (IGF) and featured interpretation services in English, Spanish, and French. Participants represented a diverse range of expertise and geographic perspectives:


– **Neema Iyer** from Uganda, representing Pollicy and speaking from her experience in digital rights advocacy


– **Raoul Danniel Abellar Manuel**, representing the Youth Party in the Philippine Parliament


– **Deputy Minister Teo Nie Ching** from Malaysia’s Ministry of Communications


– **Anusha Rahman Khan**, former Minister for Information Technology and Telecommunications in Pakistan (served for five years)


– **Arda Gerkens**, President of ATKM (Authority for the Prevention of Online Terrorist Content and Child Sexual Abuse Material) in the Netherlands


– **Sandra Maximiano**, an economist specialized in behavioral and experimental economics from ANACOM (Portuguese National Authority for Communications)


– **John Kiariye** from Kenya, representing community-based perspectives


## Research Findings on Online Violence Against Women and Children


### Stark Statistics from Africa


Neema Iyer opened the substantive discussion with sobering research findings that framed the entire conversation: “One in three women across Africa experience online violence, and many of them end up deleting their digital presence because they don’t have adequate support systems and they feel like they’re not going to be heard by authorities.”


This statistic illuminated a broader pattern of digital exclusion, where those who could benefit most from online participation are driven away by harassment and abuse. Iyer explained how intersecting inequalities create complex barriers to digital safety: “There are large gaps in digital literacy and access, and platforms often don’t prioritise smaller markets or local languages.”


### The Weaponisation of Protective Laws


Perhaps most troubling was Iyer’s observation about how protective legislation can be turned against its intended beneficiaries: “The laws that do exist, especially in our context, have actually been weaponised against women and marginalised groups. So many of these cybercrime laws or data protection laws have been used against women, have been used against dissenting voices, against activists, to actually punish them rather than protect them.”


This paradox challenges fundamental assumptions about the relationship between legislation and protection, suggesting that legal frameworks alone are insufficient without proper implementation and safeguards against misuse.


## Country-Specific Legislative and Regulatory Approaches


### The Philippines: Comprehensive Legal Framework


Manuel outlined several legislative initiatives demonstrating the Philippines’ comprehensive approach to online safety. The country passed Republic Act 11930, addressing online sexual abuse of children, and the House of Representatives approved an expanded anti-violence bill that “defines psychological violence through electronic devices as violence against women.”


Additionally, amendments to the Safe Spaces Act set higher standards for government officials, recognising their particular responsibility in online spaces. However, Manuel highlighted enforcement challenges: “Social media platforms initially refused to attend Philippine parliamentary hearings, claiming no obligation due to lack of physical office presence.”


The scale of internet usage in the Philippines adds urgency to these efforts: “An average Filipino spends around eight hours and 52 minutes, or roughly nine hours per day, on the internet,” Manuel noted, emphasising the significant exposure to potential online harms.


### Malaysia: Multi-Faceted National Strategy


Deputy Minister Nie Ching outlined Malaysia’s comprehensive approach, which combines legislative updates, platform regulation, and extensive public education. After 26 years, Malaysia amended its Communication and Multimedia Act, increasing penalties for child sexual abuse material and grooming.


Malaysia developed a code of conduct for social media platforms with over 8 million users and established “900 national information dissemination centres” alongside a “national internet safety campaign targeting 10,000 schools.” The campaign uses a modular approach for different age groups, recognising that safety education must be age-appropriate.


However, significant enforcement challenges remain. Nie Ching revealed that while “only X, TikTok, and Telegram have applied for licenses” under the new framework, “major platforms like Meta and Google have not applied for licenses.” This resistance led to a crucial insight: “Individual countries lack sufficient negotiation power when engaging with tech giants, requiring coordinated bloc approaches like ASEAN.”


### Pakistan: Balancing Protection and Rights


Anusha Rahman Khan, who served as Pakistan’s Minister for Information Technology and Telecommunications for five years, enacted Pakistan’s cyber crime law in 2016, introducing “28 new penalties criminalising violations of natural person dignity.”


Khan emphasised the ongoing challenges in balancing commercial interests with protection needs: “Commercial interests and revenue generation priorities conflict with civil protection needs, requiring stronger international coordination.”


### The Netherlands: Addressing Hybrid Threats


Arda Gerkens introduced the concept of hybrid threats that combine multiple forms of online harm. Her organisation has unique powers to identify and remove terrorist content and child sexual abuse material, but faces increasingly complex challenges as different forms of harmful content become intertwined.


“We see more and more hybridisation of these types of content mixed together,” Gerkens explained. “We’re finding within the online terrorist environments lots of child sex abuse material. And we find that certainly vulnerable kids at the moment are at large online… these terrorist groups or extremist groups are actually targeting vulnerable kids.”


This hybridisation represents a fundamental challenge to traditional regulatory approaches that treat different forms of harm in isolation. Gerkens noted that terrorist groups are “increasingly targeting vulnerable children through platforms discussing mental health and eating disorders for grooming and extortion.”


## Platform Accountability and Geographic Inequalities


### Inadequate Response Systems


The discussion revealed significant problems with current platform accountability mechanisms. Nie Ching highlighted practical limitations: “Built-in reporting mechanisms are ineffective, requiring even verified public figures to compile links and send to regulators for content removal.”


She provided a specific example involving “Dato Lee Chong Wei,” Malaysia’s famous badminton player, whose image was used in scam posts. Despite his verified status, removing the fraudulent content required regulatory intervention rather than effective platform mechanisms.


### Geographic Disparities in Platform Response


Nighat Dad from Pakistan’s Digital Rights Foundation, which has handled over 20,000 complaints since 2016, highlighted stark inequalities in platform responses: “Platforms respond quickly to cases involving US celebrities but delay response to cases from Global South, highlighting inequality in treatment.”


This disparity is exacerbated by recent changes in platform policies. Dad noted that Meta’s scaling back of proactive enforcement systems “shifts burden of content moderation onto users, particularly problematic in regions where reporting systems are in English only.”


### The Rise of AI-Generated Harm


Dad also highlighted an emerging threat that exemplifies how technological advancement can amplify harm: “We are seeing a rise of AI-generated deepfake content, causing reputational damage, emotional trauma, and social isolation, with some cases leading to suicide.”


This technology democratises the creation of sophisticated abuse material while making it more difficult for victims to prove the falsity of harmful content, representing a qualitative shift in the nature of online harm.


## Behavioural Economics and Human-Centred Approaches


### Understanding Cognitive Vulnerabilities


Sandra Maximiano introduced a novel analytical framework through behavioural economics. “Users are affected by cognitive biases like confirmation bias, overconfidence bias, and optimism bias that influence online behaviour and decision-making,” she explained.


Maximiano emphasised that “vulnerable groups including children and people with disabilities suffer more from these biases, requiring regulators to account for this in policy design.” This insight suggests that effective protection requires understanding not just what harms occur, but why people are susceptible to them.


The potential for both exploitation and protection through behavioural insights became clear: “AI systems can exploit cognitive biases and overlook vulnerabilities, potentially causing significant harm even without intentional exploitation.” However, the same understanding can be used positively through “better user interface design, nudging safe behaviour, and using social norms messaging.”


### Community-Based Solutions


John Kiariye from Kenya introduced a crucial human-centred perspective: “The offenders are human. The victims are humans. If we concentrate on the technology, we are losing a very big part because this young person can be trained to be a bully.”


Kiariye advocated for leveraging existing social structures such as “schools, clubs, and family units to empower victims and prevent online abuse before it occurs.” This approach recognises that online behaviour is shaped by offline social structures and that effective prevention requires community-level interventions.


## Areas of Consensus and Disagreement


### Strong Consensus on Platform Reform


Despite diverse backgrounds, speakers demonstrated remarkable consensus on the inadequacy of current platform accountability mechanisms. All agreed that transparency in content moderation processes, proactive identification of harmful sources, and addressing geographic inequalities in platform responses are essential.


### International Coordination is Essential


Government representatives from Malaysia, the Philippines, and Pakistan all acknowledged that individual nations have limited leverage against major tech platforms, leading to growing support for coordinated international or regional approaches.


### Key Disagreement: Privacy Versus Safety


The most significant disagreement emerged during audience questions about age verification technologies. Andrew Campling from the audience argued that “privacy-preserving age estimation and verification technology should be mandated to prevent children from accessing adult platforms,” citing the statistic that “300 million children annually are victims of technology-facilitated sexual abuse.”


However, Iyer strongly opposed such measures: “I think absolutely not… it’s really giving all your data to these platforms. I think it’s a very slippery slope to a bad place… people will get around all these things anyway. So I think there are better interventions rather than taking away the last shred of our privacy.”


## International Cooperation and Future Directions


### Regional Approaches to Global Challenges


The discussion revealed growing recognition that effective platform regulation requires coordinated international action while respecting cultural differences. Nie Ching advocated for regional approaches: “Different regions need different standards that meet their cultural, historical, and religious backgrounds rather than one-size-fits-all approaches.”


Gerkens mentioned the existence of the Global Online Safety Regulators Network and invited participation, representing an attempt to share best practices across jurisdictions.


### Addressing Root Causes


Manuel introduced crucial economic dimensions: “Economic factors driving child exploitation must be addressed alongside technical measures to effectively combat child sexual abuse material.” This observation highlights how online harms often reflect offline inequalities and vulnerabilities.


## Conclusion


This comprehensive discussion revealed both the complexity of protecting vulnerable groups online and the potential for innovative, collaborative solutions. The conversation demonstrated growing sophistication in understanding online harm, moving from reactive content removal toward proactive prevention and addressing root causes.


Key insights included the recognition that online safety is fundamentally a human challenge requiring understanding of psychology, economics, and social structures alongside technical solutions. The emphasis on international coordination, cultural sensitivity, and multi-stakeholder collaboration suggests a maturing approach to online safety policy.


However, significant challenges remain, from platform resistance to enforcement difficulties to fundamental tensions between privacy and safety. Success will depend on sustained commitment to collaborative solutions that are both effective and respectful of fundamental rights and cultural differences across diverse global contexts.


Session transcript

Alishah Shariff: Good morning, everyone, and welcome to today’s session, Click With Care, Protecting Vulnerable Groups Online. I’m delighted you’re able to join us. I know there were some travel difficulties getting in this morning, so thank you for being here, and thank you also to our esteemed panelists for joining us today. My name is Alicia, and I work at Nominet, the .UK domain name registry, and I’ll be moderating today’s panel. Just a bit of housekeeping before we begin. You’ll have interpretation in your headphones in English, Spanish, and French, and when we open the floor to interventions and questions, you can ask your question by going to the microphone to my left and your right. So it’s a pleasure to chair today’s session, which brings together a diverse panel of parliamentarians, regulators, and advocacy experts to discuss a critical issue, which is how do we protect vulnerable groups online. We live in an increasingly digital world, which offers opportunities for connection, learning, and growth. But the digital world also brings with it risks and downsides, which are often felt more acutely by vulnerable groups, including children, individuals with disabilities, and members of marginalized communities, amongst others. The consequences of harm faced online can have a ripple effect into real lives, causing distress, harm, and isolation. The challenge of online harm has prompted a range of legislative and regulatory responses, as well as proactive and reactive approaches, and today’s session will enable us to better understand some of these across a range of geographies and contexts. I hope that by the end of the session, we’ll get a sense of how we can work towards a more targeted, inclusive, and enforceable policy response to online harms. I’ll now hand over to each member of our esteemed panel to briefly introduce themselves. So I think we’ll start with Nima.


Neema Iyer: Oh, super. Hi everyone. Good morning and thank you so much for joining us here. My name is Nima Iyer and I am the founder of Pollicy. Pollicy is a feminist organization based in Kampala, Uganda, but we work all across the continent and we are very interested in any issues related to feminist digital rights. So this could be about online violence, gender disinformation, the impact of AI on women, and any such topics. And yeah, we do a lot of research on these topics. We also work very closely in local communities and of course we do advocacy work, which is part of why we are here as well. Thank you. Over to you.


Alishah Shariff: Thank you, Nima. Next we’ll hear from Raul.


Raoul Danniel Abellar Manuel: Hello, good morning everyone. I am Raul Manuel. You can call me Raul. I am an elected member of Parliament in the Philippines, representing the Youth Party. And prior to being a part of the Youth Party and of the Philippine Parliament, I was active in the student government and the student union. That’s why I have been paying close attention to this issue of online freedoms and protections. Thank you.


Alishah Shariff: Thank you, Raul. And next we have Your Excellency, Teo Nie Ching.


Teo Nie Ching: Hello, good morning everyone. Thank you, Alicia, for the introduction. My name is Nie Ching. I’m from Malaysia. I’m currently the Deputy Minister of Communication. I was appointed to this office in December 2022. However, in the year 2018, I also had this opportunity to serve in the Ministry of Education as the Deputy Minister as well. I’m currently a mother of three, so protecting children and our minors on internet is a topic that is very, very close to my heart. And under the Ministry of Communications, we have a very important agency that is called MCMC, Malaysia Communication and Multimedia Commission, who acts as a regulator for the content moderation, platform providers, etc. Looking forward to this fruitful discussion.


Alishah Shariff: Thank you. And next we have Nighat.


Nighat Dad: Good morning everyone. My name is Nighat, and I’m a founder of Digital Rights Foundation, an organization, a woman-led organization based in Pakistan, and we are committed to advance digital rights with a particular focus on gender justice, online safety, and freedom of expression. Our work is grounded in both direct support and systemic change. We have a digital security helpline which provides legal advice, digital security assistance, and psychosocial support to victims and survivors of online abuse, and has a survivor-centered approach. And we also conduct in-depth research, build digital literacy and safety tools, and engage in policy advocacy conversations at the national, regional, and international level.


Alishah Shariff: Thank you. And Arda, I’m glad you were able to make it, and thanks for joining. Thank you.


Arda Gerkens: Thank you very much, and excuse me for being late, the train was so much delayed. So, my name is Arda Gerkens, I am the president of the regulatory body of online content, terrorist content, and child sex abuse material, ATKM, it’s the abbreviation. I used to be a member of Parliament for eight years, and a senator for ten years, so I bring some political experience too. My organization is really there to identify harmful content on terrorist content and child sex abuse material, and is able to remove that content or have it removed, and if not, we’ll fine the ones who are not complying with our regulation. We’re kind of unique in the field, I think we’re the first regulator, at least as far as I know, who has that special right to dive into that content. And yeah, looking forward to the discussion today.


Alishah Shariff: Great, thank you. And we have one panelist who’s still on their way here. So when they join, we should also have Sandra Maximiano, who’s the president of ANACOM Portugal. So hopefully she’ll be able to join us shortly. So the way this will work today is we have some questions for our panelists that they will speak to, followed by a quick fire round, and then we’ll open out the floor for your interventions and questions. So without further ado, I think my first question is for Nima. So Nima, what are some of the unique online safety challenges faced by marginalized communities, particularly in the Global South, that may not be adequately addressed in existing legislative frameworks?


Neema Iyer: Thank you so much for that question. So first, I just want to start by framing some of the research that we’ve already done on this topic. So, for example, we did research with 3,000 women across Africa to understand their online experiences, and we found out that about one in three women had experienced some form of online violence, and this basically led them to deleting their online identity, because many of them were not aware of reporting mechanisms, and they also felt that if they went to any authorities that they would not be listened to. A second study that we did is a social media analysis of women politicians during the Ugandan 2021 election. We wanted to see what was the experience like for women politicians, and we found that they are often targets of sexist and sexualized abuse. But more importantly, the fear of the abuse on online spaces meant that many women politicians did not actually have online profiles or chose not to exist and to participate in the online sphere. And in the third research we did is on the impact of AI on women. So we often tend to think of, when we think of care, we think of social media, but more importantly thinking of how does AI impact women in, you know, that may be marginalized in some way, and we found out there are grave issues of under-representation and data bias. There’s algorithmic discrimination. AI makes it very possible for digital surveillance and censorship. There’s labor exploitation, and also there’s a threat to low-wage jobs, which often tend to be occupied by women. So I just wanted to frame that research first and then talk more about the question, which is, what is unique about this group? And the first one is that there are intersecting inequalities, so there are large gaps in digital literacy and digital access, for example. And so when you are trying, both as a platform and a civil society or a government, you have to take into account the fact that there are some women who have absolutely no access, have no digital skills, and you know, this is across the spectrum. So how do you tailor interventions that can meet all these different people who exist in all these different inequalities? Then in our context, for example, in Uganda, there are about 50 languages that are spoken, in Uganda alone, not considering the whole continent. And because these are smaller countries, they don’t have a huge market share, you know, on these online platforms. They’re often not prioritized. And so how do you develop interventions? How do you make safety mechanisms when, you know, you don’t have these languages on your platform? Another one I want to talk about is the normalization of abuse, which is, you can see in real life and in online spaces, that are both cultural and a result of platform inaction. So in regular life, you go on the street, you get harassed, you go to the police, they don’t do anything. That is replicated in online platforms, where you face this harassment, you reach out for recourse on the platforms, and there is platform inaction. So basically, in that way, this kind of online abuse is normalized. And then there’s the invisibility in platform governance processes. Of course, this is an amazing venue where we can talk about these issues, but a lot of women, marginalized groups, are not in these rooms with us right now to talk about their experiences. 
And then lastly, I just want to talk about the fact that the laws that do exist, especially in our context, have actually been weaponized against women and marginalized groups. So many of these, you know, cybercrime laws or data protection laws, have been used against women, have been used against dissenting voices, against activists, to actually punish them rather than protect them. That’s the reality that we live in. So the fact is that legislative frameworks are often too narrow. They, you know, they focus on takedowns or criminalization, or they borrow from Western contexts, but they don’t really meet the lived realities of women. So for example, a law might address intimate image sharing, but it won’t, you know, it’ll ignore coordinated disinformation campaigns, for example, or it will ignore this ideological radicalization that’s happening to minors online. Or, you know, it won’t target specifically the design choices that platforms make, for example, like where, you know, they amplify violence or those kinds of things. So I think we really need to think broader about how we are legislating about online violence, and I’m really glad that this conversation is happening. So back to you.


Alishah Shariff: Thank you. And that was, I think there was so much in there from the kind of, you know, the sphere of abuse in online spaces to how different people feel and experience being marginalized, and then also how some of these kind of legislative measures and also policies can sometimes have an adverse effect, and really thinking about the context. But thinking about how we do kind of good regulation, we’ll turn now to Raul. So as a Member of Parliament, could you share recent legislative measures in the Philippines to address online exploitation of children and pending efforts to protect women, LGBTQI+, and other marginalized communities from online violence and threats?


Raoul Danniel Abellar Manuel: Yeah, thank you, Alicia. And before I proceed, I’d like to thank the IGF Secretariat for this opportunity to share our perspectives from the Parliament of the Philippines. In our case, we have been pushing for vibrant debates and discourse to ensure that protections for marginalized and vulnerable groups do not come at the expense of sacrificing our basic and human rights. The Philippines right now, just for context, ranks as the number three as of February 2025 in terms of the daily time spent by citizens in using the internet. An average Filipino spends around eight hours and 52 minutes, or roughly nine hours per day, on the internet, which is much higher than the global average of six hours and 38 minutes. So while this time can be spent to, you know, connect with friends, family, conduct research, do homework, this also exposes vulnerable groups, including young people, to different forms of violence and threats. For example, the Philippines, unfortunately, has been a hotspot of online sexual abuse and exploitation of children, and also the distribution and production of child sexual abuse and exploitation materials. So this is a problem that we have to acknowledge so that we can take proactive measures in addressing it. Second would be the electronic violence against women and their children, which we call E-Vow-C for short, and third, among the major forms of violence and threats online, would be harassment based on identity and belief. So I will briefly touch upon what we have been doing in Parliament to address these. First, when it comes to online sexual abuse and exploitation of children, we recently had the Republic Act 11930. It lapsed into law on July 30, 2022, so it is kind of fresh, and aside from content takedown, one major component of this is the assertion of extraterritorial jurisdiction, which means the state shall exercise jurisdiction if the offense either commenced in our country, the Philippines, or if it was committed in another country by a Filipino citizen or a permanent resident against a citizen of the Philippines. Recognizing that the problem of online sexual abuse of children can happen not just in a single occasion, but it can be part of a coordinated network involving several hubs or locations. That’s why we really had to put this into law. When it comes to electronic violence against women and children, the House of Representatives, on its part, approved the expanded anti-violence bill. It defines psychological violence, including different forms, including electronic or ICT devices. The use of those devices can be considered, and it was defined to be part of violence against women. We did this in the House of Representatives, but since the Philippines is bicameral, that’s why we’re still waiting for the Senate to also speed up in its deliberations. Now, when it comes to online harassment based on identity and belief, we approved at the committee level so far amendments to the Safe Spaces Act, which sets a higher standard on government officials who may be promoting acts of sexual harassment through digital or social media platforms, like when they have speech that tends to discriminate those in the LGBT community. Finally, we have a pending bill in the House of Representatives, which seeks to criminalize the tagging of different groups, individuals, as state enemies, subversives, or even terrorists without much basis in such labeling.
Recently, the Supreme Court adopted the term red tagging, which has been a source of harm and violence that transcends up into the physical world. That’s all for now, and I hope that this can be a source of discussions also on how we can really work together to address these online problems. Thank you.


Alishah Shariff: Thank you, Raul. I think that was really eye-opening, and there’s definitely lots happening in your legislative space, and I think it’s really nice that we have this mix of where you’ve got kind of slightly newer regulation and legislation, and also to hear from somebody later on who has experience of kind of enforcing this sort of regulation. So, moving from the Philippines to Malaysia, next I will turn to Her Excellency Teo Nie Ching. So, what is Malaysia’s core philosophy and overall strategy for protecting vulnerable groups in today’s complex digital environment, and how does Malaysia balance creating and enforcing laws and regulations with maintaining freedom of expression? Thank you.


Teo Nie Ching: Thank you, Alicia, for the questions. First of all, in Malaysia, we view online protection not just as a single action, but as a holistic ecosystem built on three core strategic thrusts. The first one is empowerment through digital inclusion, and of course literacy. And the second will be protection through a robust and balanced legal framework, and third, support through a whole-of-society, multi-stakeholder collaboration. So, currently in Malaysia, our internet coverage has reached 98.7% of the populated area. So, internet coverage is, I think, pretty impressive. At the same time, we also set up more than 900 national information dissemination centres, which act as a community hub providing on-the-ground digital literacy training, especially to the seniors, to the women, to the youth, who may be more susceptible to online risks. And not only that, we also recently launched a national internet safety campaign, and our target is to actually enter 10,000 schools in Malaysia. That is our primary school, secondary school, and of course, we aim to enter the campus of the university as well, so that we can engage with the user. And this programme is not the usual public awareness campaign. However, we are more specific. We developed a modular approach which depends on the audience. For example, if their age is between seven to nine, then what type of content is more suitable for them, and what type of interactive action we can actually design for them. So, for example, primary school, secondary school, we will be focusing on cyberbullying, and of course, to protect their own personal information, and then for the elder, we will teach them more, or share with them more about online scam, financial scam, etc. So, we believe that this is an approach whereby we need to go to the community, we need to engage them, we need to empower the community, so that we can raise their digital literacy. And of course, I think we also need to have a legal framework to protect our people, and it is very, very important for us to strike a balance between freedom of expression, but at the same time, also make sure this vulnerable group, they are actually protected by law. Last year, we have amended our act that is Communication and Multimedia Act, first time in 26 years, whereby we have actually increased the penalty for dissemination of child sexual abuse material, CSAM, grooming, or sim communication through digital platforms, with heavier penalty when minors are involved. And then, at the same time, the amended law also grants the Commission, the Communication and Multimedia Commission Malaysia, the authority to instruct the service provider to block or remove harmful content, enhancing platform accountability. At the same time, we also develop a code of conduct targeting the major social media platform with more than 8 million users in Malaysia. Malaysia is a country with about 35 million population, so when we use the benchmark of 8 million, that was roughly about 25% of our population. We are hoping that by imposing this licensing regime, we will be able to impose this code of conduct against the service provider, but as I mentioned yesterday, I would not say this is a very successful attempt because the licensing regime is supposed to be implemented since 1st of January this year, but however, two major platforms in Malaysia, i.e. Meta and also Google, until today have yet to come to apply for the license.
So, I think the challenge faced by Malaysia maybe would be similar to many, many other countries as well. Malaysia alone, we don’t have sufficient negotiation power when we engage with tech giants like Meta and Google. So, how we can actually impose our standard over this platform to ensure that the harmful content, according to Malaysia context, can be removed instantly or in a reasonable period of time has been quite challenging in Malaysia. We see that even though sometimes platforms would still cooperate with MCMC to remove certain harmful content, but it is always like the user or the scammer put it out and then, upon the request of MCMC, the content was taken down, but however, there is no permanent solution to stop all this harmful content from being put out on the social media, such as online gambling, such as scammer posts, etc. So, I think that’s it for now and looking forward to more questions.


Alishah Shariff: Thank you. I think that was a really good overview of how you can have both legislation and then a kind of voluntary code of conduct and some of the challenges that come with that in terms of how you are able to enforce it and also maybe towards the end you were getting to actually how do you prevent some of this stuff in the first place because obviously the takedowns are a reactive measure and there’s a bigger challenge here around how we prevent this sort of thing in the first place. So, we’ll now move to more of a focus on digital rights and we’ll turn to Nighat. So, at the Digital Rights Foundation, you lead efforts against online harassment and advocacy for privacy and freedom of expression. You’re also serving on the Meta Oversight Board. So, what gaps in terms of digital rights do you observe between the global south and the global north and what are your perspectives on platform accountability?


Nighat Dad: Yeah, thank you so much. So, at the Digital Rights Foundation, over the years, we have been witnessing the rise of digital surveillance, privacy violations, gender-based disinformations which is very targeted and now the disturbing rise of AI-generated deepfake content. Since 2016, through our digital security helpline, we have dealt with more than 20,000 complaints from hundreds of young women every month, female journalists, now more from women influencers and content creators, women politicians, scholars, and students. And this number is only to a digital security helpline which is being run by a NGO. This number is even higher when it goes to our federal investigation agency, Cybercrime Wing. And the people mostly who complain to us, they are being blackmailed, silenced, or driven offline by intimate images that they never consented to, some of which aren’t even real. In the last one and a half year, I would say we have seen this rise in deepfakes that have blurred the line between fact and fiction, but at the same time, we have seen that the harms are real in the offline space. It’s reputational damage, it’s emotional trauma, and in some cases, complete social isolation. And in worst cases, we have seen some women committing suicide. What’s even more alarming is how platforms respond to it, and as Honorable Minister mentioned that many platforms in our part of the world are really not accountable to the governments, and too often, survivors are forced to become investigators of their own harm, hunting down copies of content, flagging it repeatedly, and navigating opaque reporting systems that offer little support and no urgency. And unfortunately, if they are not public figures, and if they are not politicians, the response is even more delayed, if it comes at all. And in my work at the Meta Oversight Board, the same patterns show up, just on a global scale. Last year, we reviewed two cases of deepfake intimate imagery, one involving a US public figure, a celebrity, and another involving a woman from India. And Meta responded quickly in the US jurisdiction, because media outlets had already reported on it, but in the Indian case, the image wasn’t even flagged or escalated, and it wasn’t added to Meta’s media matching service until the Oversight Board raised it. And what we noticed as a board, that if the system only works within these platforms when the media pays attention, what happens to the millions of women in the Global South who never make headlines? So we pushed Meta, in our recommendations in this case, to change its policy. We recommended that any AI-generated intimate image should be treated as non-consensual by default, that harm should not have to be proven through news coverage, and we advised that these cases be governed under the adult sexual exploitation policy, not buried under bullying and harassment, because what’s at stake is not just tone, it’s bodily autonomy. And I think that one thing which is deeply concerning is that Meta has recently scaled back, like several other platforms, its proactive enforcement systems, now focusing mostly on illegal or high severity cases while shifting the burden of content moderation onto users. That may sound like empowerment, but let me tell you that looks very different on ground. In South Asia, many users don’t know how to report. And even when they do, the systems are in English. They are not even in our regional languages. The processes are opaque, and the fear of backlash is very real.
In India, for example, we have documented cases where women reporting abuse ended up being harassed further. That’s the same case in Pakistan. It’s not just by other users, but by the very mechanisms that are meant to protect them. And I’ll stop here, and we’ll add more to the policy level debate.


Alishah Shariff: Thank you. Thank you. I think there was so much in there. And I think what’s really coming through is that if we have this right to privacy and right to freedom of expression, that should be for all of us everywhere around the world. And the way that then we are treated when something does go wrong should also be equitable, because you can’t put it all on the individual to try and get all these images taken down. I think we’re definitely seeing a lot more on non-consensual intimate imagery abuse in the UK as well. And actually, the regulatory response and the legislative approach catching up with the real harm, there’s a big gap still. So thank you so much. And so next, we’ll turn to Ada. And so, Ada, you’re the president of the Authority for the Prevention of Online Terrorist Content and Child Sexual Abuse material in the Netherlands. And that regulates online content by ordering the removal of terrorist and CSAM content. So how do you strike a balance between online rights, the promotion of a safer online environment, and law enforcement? And what are some of your areas of concern?


Arda Gerkens: Yes, thank you. Thank you very much for inviting me on this panel. To address one of the last points in your question, how do we deal with law enforcement, we basically only target the content. So we’re not looking for perpetrators, ones who is uploading it or downloading it. It’s not of our interest. But of course, certainly when it’s terrorist content, but also with child sexual abuse material, when there’s anything that is worrying for us, then we’ll report it to law enforcement so they can act upon. And also, we have something what’s called deconfliction, just to make sure that we’re not taking down that material in areas where police or other services are already investigating, to make sure that we don’t harm their investigation. So far, that hasn’t happened yet. So I think we’re doing a good job. The other question is about, how do you balance human rights? And of course, with the powers we have, which is a very important power, I think, taking down or at least ordering the take down of material comes great responsibility. And definitely, when you look at the field of terrorism, it can easily be abused and harm freedom of speech, right? So we need to see how we can balance that. Well, first of all, we have legislation. So we have to hold the standard of this legislation when we send out removal orders. But the legislation is quite broad and sometimes vague. For instance, one of the reasons of addressing something as terrorist content is the glorifying of an attack. Well, what’s glorifying? So what we’re doing at the moment is, together with other European countries, as this legislation is European legislation, we are now clarifying that law to see, so what do we think all of us is glorifying? What is a call to action? So that we can refine that and make it quite clear also to the outside world, how do we assess the reports we get? And what threshold does it meet before we send out a removal order? And then again, of course, we can also give that to the platform saying, listen, if it meets these and these criteria, then maybe you should take it down before we send our removal order. That would be much better than us sending removal orders. So this is on terrorist content. And as you can imagine, child sexual abuse material, that’s quite clear. There’s no debate about it. There shouldn’t be a debate about it. And I don’t think there’s any way of freedom of speech or any other human rights except for the right of the child that’s involved. But however, if you look at the removal of this type of content, you’ll see that on terrorist content, the majority of the material we find will be on a platform. But for child sex abuse material, unfortunately, as the Philippines has their downside, we have our downside that the Netherlands is a notorious hosting country for this kind of material. So we’re basically focusing our actions on hosting companies. Now, some of them are really bad actors. So this kind of imagery would not be the only bad things on their platforms. But there are also very many legit websites as well. So we need to make sure that we’re proportionate in our actions. We have really strong powers. We are able to even pull the plug of the internet, almost, let’s say, that way. Or we could even make sure that access providers block the access. But if you do such a thing, you need to make sure that you’re not harming innocent parties or companies involved. So again, we need to be very precise and very well know what we’re doing.
And so basically, for all this work, we engage a lot with industry to know the techniques. I think it was Paul who said here yesterday, for politicians, it’s very important to know the technical aspects of the online world. So it is for us. So we know a lot. We don’t know everything. There are lots of people who are much smarter than we are. So we engage with them. And we have an advisory board who would help us to make the difficult decisions. But we also engage with civil society to make sure that we uphold all these rights which are there to be able to balance it. And in the end, of course, it’s our decision. But we have to be able to explain it to the public, to you, why did we take that position? And did we look at the downside and the effects of it? And yeah, so that’s how we’re doing it. And it’s a very, I think, very interesting job. Now, on the matter of concerns of vulnerable groups, something I would like to address is something that we are currently seeing happening in the space of what used to be, I think, terrorism. I say used to be because terrorist actions used to be quite clear cut. It’s either right wing terrorism. Look at the Christchurch shooting. Or it’s jihadism. Many of the attacks are well known from that. But we see more and more hybridization of these types of content mixed together with other content. So recently, we’re finding within the online terrorist environments lots of child sex abuse material. And we find that certainly vulnerable kids at the moment are at large online. Can I say it that way? Because we find that these terrorist groups or these groups, extremist groups, are actually targeting vulnerable kids. For instance, they create a Telegram channel where kids can talk about their mental health state, eating disorders. They groom information out of them. And with that information, they then extort them. And they let them do horrible things like carving their bodies or making sexual images, which are then again spread. And we can see that this kind of material is radicalizing the kids very swiftly. And recently in Europe, we had some very young kids who were on the verge of committing attacks. And so what we see now is that this is accelerating at a very fast pace. And as our focus is on terrorism and child sex abuse, we cannot speak on eating disorders or mental health problems. But we know here at the table, too, there are lots of organizations who address these problems. But they’re probably not aware of these things happening. It’s all in the dark. And I think, again, if you talk about protection of vulnerable groups online, we need to bring these things to light. Like you basically said, the one case is brought to light by media. The other case is not brought to light by media. I think it’s up to us to bring it to light that these things are happening online. So at least the awareness is out there for parents and other caregivers to take care of the kids. But also for adults, that if somebody finally is able to speak about what’s happening, you are there to help them and support them. But yeah, we need much more to be done here as a coordinated approach to tackle this problem.


Alishah Shariff: Thank you, Ada. I think there was a lot in there in terms of proportionality and having a position that you can defend that is kind of balanced. I think this point on hybrid kind of threats is also really interesting. It’s something I haven’t heard before personally. And yeah, I think how you have a response that works across the whole system when these threats are hybrid and blended is really tricky, but also important to get right because there’s a lot at stake. So thank you. So next we’ll turn to Sandra. And if you want to just do a short introduction, that would be great. And then I’ll get to your question. OK.


Sandra Maximiano: So I’m Sandra Maximiano. I’m a chairwoman of ANACOM, the Portuguese National Authority for Communications. And at the moment, also, ANACOM deals with electronic communications, postal communications. But it’s also the digital service coordinator, so also on the digital matters, and also responsible for online terrorism and all these new issues, and also some competences under AI. So quite a broad authority. I’m an economist and specialized in behavioral and experimental economics. Thank you.


Alishah Shariff: So bringing together those two roles, I guess, as a regulator and also a behavioral and experimental economist, can you explain what behavioral economics is about and how it can be used to protect vulnerable groups online?


Sandra Maximiano: So let me first say that if we will be rational human beings, we will probably not need to care so much about safety or have a big concern, because we will be super rational and be able to understand what is good and bad and immediately react upon that. But we are not. So behavioral economics is actually a field that blends insights from psychology and economics to fully understand how humans make decisions. And they make decisions not like machines. They don’t really maximize all the time their welfare, but they are affected by social issues, by social pressure, by their own emotions. And we all are affected. So, we use shortcuts, which we call heuristics, to make decisions, and we have a ton of cognitive bias. And this cognitive bias actually, they significantly influence how users interact and behave in an online context, and we have to have that into account. For instance, I can give you some very quick examples, like confirmation bias. Users may seek out information or sources that align with their existing beliefs, leading to echo chambers on social media platforms. This can, of course, perpetuate misinformation, stereotypes, and false beliefs, and limit exposure to diverse perspectives. Another one, overconfidence bias. Users may overestimate their online security knowledge, leading to risky behaviors, such as using weak passwords or ignoring security updates. Optimism bias. So we underestimate the risks of online scams or data breaches, believing that they are less likely to be targeted than others, which can lead to inadequate precautions. And on top of that, so we all suffer from this bias, but some groups suffer even more. So if we are thinking about children, we are thinking about some disabled groups, some people with mental health problems, they have, of course, this bias influencing their decision even more. And we as regulators, we have to take that into account. So we should, of course, be aware how this bias are used to explore the decision-making process online, and we have to fight with the same weapons. Basically, we have to make usage of this bias and try to make people do or take good decisions. So we have to understand this cognitive bias and also be aware that we can use them to make individuals, make them take more informed decisions. AI can also increase the economic value of this cognitive bias. And why? Because AI makes firms, makes organizations to use even more, to exploit this cognitive bias and expose people even to higher risks. So we have to be aware of that. And also, AI systems do not need to exploit vulnerabilities to cause significant harm to vulnerable groups. Systems that, for instance, they merely overlook the vulnerabilities of these groups could potentially cause significant harm. So I can give you an example.
Individuals with autism spectrum disorder, they often struggle with understanding non-literal speech, such as irony or metaphor, due to impairments in socially understanding and recognizing the speaker’s communicative intention. In recent years, chatbots have become very popular to engage with and train individuals with autism to enhance their social skills. If a chatbot is trained solely on a database of typical adult conversations, it may incorporate elements such as jokes and metaphors. And individuals with autism may interpret them literally and act upon them, potentially leading to significant harm. So we have to be aware. As regulators, we really have to be aware of intentional and non-intentional harms that can be caused to individuals. But as I said, we can also use this bias to make individuals make good decisions to protect vulnerable groups online. So behavioral economics can be used to enhance online protections for vulnerable groups, such as children, disabled users, and marginalized communities in many ways. So we can better design user interfaces. So websites and applications can be designed with user-friendly interfaces that consider the cognitive load of users. Nudge safe behavior. Platforms can implement nudges that guide users toward safer online behaviors. And presenting information about online risks in a clear and relatable way that can improve understanding and compliance. So this is particularly important. For instance, regarding, just to finish, regarding cyberbullying, behavioral economics can also play a significant role in protecting children from cyberbullying. So for instance, we can apply its principles to education and awareness campaigns. Again, framing information in a way that makes it very clear and very relatable for users. Using social norms. Social norms can be really a problem because people feel the pressure to follow what others do, for instance, and this is a real preoccupation related to the online challenges that many children engage in and that put them at risk. But at the same time, we can use social norms messaging and, for instance, highlight positive behaviors and peer support through campaigns that can shift perceptions around cyberbullying. So by emphasizing that most children are not engaging in cyberbullying behavior, it can create a social norm against it. So this is the point I want to make, is basically we have to understand all this behavioral bias that are putting our children, and this is just an example, but putting all of us at risk online. But we can use the same weapons to make it a safer behavior. So you really have to understand and then play with the same weapons as regulators. Using nudges to encourage reporting: nudges that remind children of the importance of reporting bullying can increase reporting rates, and there are studies that confirm that. Programs can be designed to teach children how to respond to cyberbullying effectively, and behavioral economics can inform the design of these programs. To incentivize positive online behavior, we can also test different incentives, gamification, and reward systems; schools and online platforms can implement reward systems that recognize and incentivize positive online behavior, and this can be tested using experimental tools. So this is just an example and there are much, much more. Online platforms can adopt clear policies against cyberbullying and communicate this effectively to users.
Again behavioral economics can help in framing these policies to highlight the collective responsibility of users to maintain a safe online space. So this is the point again that I want to make and this is an example. The same can be applied to understand algorithm discrimination, how does it work, how the bias increase this discrimination, but at the same time how can we use nudges and behavioral insights to fight those biases that are perpetuated in some algorithms. So the message I want to leave is that especially if you are a regulator, a policymaker, be aware of behavioral insights. People are using it to make others behave in a way they want, firms do it a lot to sell more, marketing strategies all make use of behavioral insights, so we as regulators have to use the same weapons, but for another purpose, with another goal in mind. That’s it.


Alishah Shariff: Thank you Sandra, I think that was, yeah, I think it’s great to have a different perspective on the issue and I’ve never really heard anyone come at this from a behavioral kind of bias perspective, so thank you so much for that and I think, you know, how do we actually turn this on its head and use gamification and use these things to kind of incentivize slightly different behavior is a really interesting question. I think something we’ve come to quite a lot in discussions has been around the role of platforms, so I have just a quick-fire question for each of our panelists before we open to the floor, and so that question is, what forms of accountability beyond content takedowns should platforms adopt to protect marginalized users? So I might start with Ada and go from this side.


Arda Gerkens: Thank you very much for that question. Well, first of all, I think we need to understand that the platforms do a lot already. I think we should start from the positive side, right, because there’s a lot of things we can say about the platforms, but they do have a lot of effort in there. The effort is there when it doesn’t cost them any money, but when it comes to the revenue, then it’s getting, you know, to be difficult, and I think there’s one thing it is indeed to take down content, but there’s a lot of things that you can do with the algorithms by bringing extra attention to some of the material that they have or to lower it in the attention, and here I think there’s still a big chance because it’s… A piece of content in itself is not harmful. It could be harmful, but it’s only viewed by three, four people, persons, then it’s not a problem, but once it spreads and it’s been into the eyes of millions, then there is where it gets harmful, but again, when it’s spreading that fast, that’s also the way the system works, right, because it’s there because you want to be able to spread it again, get more attention, and therefore get more viewers, and more viewers means more advertisement, means more money for the platforms, so I think if we should start a debate with them, I would really like to speak with them on how they are having that policy around moderation, or moderation in the sense of taking material lower into their feeds or bringing them up higher.


Alishah Shariff: Thanks, Ada. I think next we’ll turn to Nighat, and do you want me to repeat the question? No, you’re good.


Nighat Dad: I think just platforms are doing a lot, some of the platforms, not all, but I guess we should look at the positive side of some platforms where they have some oversight mechanisms that are still working, and gave some good decisions and recommendations, and which actually improved their policies, but at the same time I think we really need to see what to do with the platforms that are still thriving in our jurisdictions but absolutely have no accountability. And they do not have their trust and safety teams any longer. They don’t have human rights teams. I’m talking about X here. I don’t think that anyone in a room has any point of contact with X in terms of escalating content, in terms of the disinformation that thrives on this platform. And it’s very interesting for me to see, for a number of years, that in different jurisdictions, when we talk about platforms, in the North, it’s easier to say that we should move on to other alternative platforms, like Mastodon or Blue Sky. But the problem in our jurisdictions is that user base is not that digital literate. And they are very comfortable with the platforms that they already have. Not the civil society has access to these platforms, neither the government. So I’m very concerned. What are we thinking about those platforms? But at the same time, there are platforms that actually listen to all government requests and take down number is very higher. And that’s where many have mentioned necessity and proportionality. And I don’t think many jurisdictions are actually respecting that. So I think we really need to see what are the oversight or accountability mechanisms are out there. And what different actors are doing. Just government, and government is making policy and regulation. But what that regulation looks like, does it really respect UN guiding principles or international human rights, human rights law, when it comes to content moderation or algorithmic transparency? At the same time, what other actors are doing? Platforms at the moment have much more power in our part of the world. We do not have Digital Services Act. But our governments are coming up with its own kind of regulation, which might not be as ideal as DSA, and which might not have that kind of power of enforcement that DSA has. So we really need to see what kind of precedents we are setting.


Alishah Shariff: Thank you. I think from our first two speakers there's definitely something coming through around transparency of what the platforms share with us, whether that's how their content moderation processes work or other things, and then also a point around accountability. But also, as you said, Nighat, in designing this new regulation we've got to take into account privacy and freedom of expression, get the balance right, and then also be able to enforce effectively. So yeah, next I will turn to you, Nie Ching.


Teo Nie Ching: Yeah, a few things I would like to highlight. First of all, I would like to see the platforms improve their built-in report mechanisms. Because my experience in Malaysia is that sometimes even a public figure, a prominent figure such as Dato Lee Chong Wei, a very famous badminton player from Malaysia, is targeted: there are scammers using his video, his photo, to create scam-related posts. And even though Dato Lee Chong Wei has a Facebook account with the blue tick verification badge, him lodging a report through the built-in report mechanism is not going to be helpful. He himself needs to compile all the links, send them to me, send them to MCMC, and then we need to forward them to Meta for the scam-related content to be taken down. So I think, first of all, the self-reporting, built-in report mechanism is not functioning, and that is putting a heavier burden on the regulator to do the content moderation job on behalf of the platform. I do not think that is fair. Second, we talk about transparency. Even though the scam-related posts are taken down, what actions are taken by the platform against the scammer? I think that is the question we need to pose to the platform providers, and I'm hoping to get an answer from them. How much advertising revenue do they collect from Malaysia each year? Do we know? I don't have the figure. How much advertising revenue do they collect from ASEAN collectively? We never have the figure. But for me, only taking down the scam-related posts is not sufficient, because I need to know what type of action is being taken by the platform against the person who sponsored the post. Shouldn't that person be held responsible as well? And because we don't have that type of transparency, it's very difficult for us to hold the platform accountable. And then, again, I would like to add a little bit more on the algorithm part, because I think the algorithm is very, very powerful. However, when platforms design the algorithm, their only purpose is to make the platform more sticky, so that its users will spend more time on that platform. I think it's time for the general public and for civil society to also have a say in designing the algorithm, so that we can practice an "information diet", as proposed by one of my favourite authors, Yuval Harari, and make sure that the information consumed by social media users every day is actually healthy content, and not just whatever content they like. Because I think that can be very, very dangerous.


Alishah Shariff: Thank you. Yeah, absolutely. I think the incentives of these platforms, understanding the stickiness point with algorithmic promotion, and the advertising revenues are another whole piece of the puzzle that we could have a separate discussion about. But thank you. And next, I'll turn to Raoul.


Raoul Danniel Abellar Manuel: Yes, thank you. Actually, before this month of June, in the House of Representatives we had a series of hearings by three House committees, namely the Committee on Information and Communications Technology, the Committee on Public Order, and the Committee on Public Information. And the topic of takedowns has been discussed. In the fifth, or final, hearing we have had so far, the government and representatives from Meta reported to the public hearing that they had a non-written agreement that would enable the government to send requests for content takedowns to Meta. And our reaction at that time was that without any written basis or any law that explicitly sets the standards as to what content can be taken down and what should stay online, it will be a slippery slope to use content takedown as the primary approach to ensuring that our online spaces are safe. It can mean decisions being made in the shadows, with people not being aware of or made knowledgeable about the basis for takedowns. That's why, beyond takedowns, we really assert that platforms have a major responsibility. For example, when they can already monitor notable sources of content that is harmful to children, women, LGBT people, and other marginalized groups, be it bullying, hate speech, indiscriminate tagging, or posts promoting scams or hate speech targeting Filipinos, then platforms should proactively report those sources to government. And platforms should also work with independent media and digital coalitions so that, aside from going after each piece of content, which would be very tedious and laborious, we also focus on the sources used to promote a certain narrative or discourse, so that we are not just reactive in our approach. Being proactive would be the better way to go. So that's my piece. Thank you.


Alishah Shariff: Thank you so much, Raoul. I think that's really interesting on knowing the sources. And you touched on a really important thing on independent media, which obviously is in decline in a lot of the places where we live, sadly. We'll go to Neema next.


Neema Iyer: Thank you. So I want to shift gears a bit and talk more about the actual design of platforms. I am a member of Meta's Women's Safety Board, and sometimes they bring us in on design decisions that they make, and, echoing some of the opinions of my colleagues at the other end of the table, it's really difficult work. It is so difficult to make these little design choices on the platform that impact user behavior. So the thing I want to talk about is that content takedown is a very reactive measure that happens after the fact. The content is already shared, you go through this mechanism, and it can take days, months, years, or it will never happen; it will never be taken down. That also happens. I've reported many times, and it doesn't get taken down. And there's none of this sense of justice for the people who are wronged: the content has already been up there, and then you take it down, but the damage is already done. The wound is already there. So I think it'll be interesting to think about what kinds of design friction you can introduce that stop the content from being shared. And I think my behavioral economist colleague will probably have more to say. But how do you stop it from happening so that you're not in the position of having to take it down? And as Arda mentioned, they're already coming up with guidelines and practices that it would be nice for platforms to use for takedowns, but what if this was used before the content even goes up? Or when someone goes online to insult a woman, for whatever reason, there's a nudge that says: are you sure you want to do that? What do you benefit from saying this? But then, of course, on the other end of that, it's also very problematic. So I really want to acknowledge that this method is problematic, because this sort of shadow banning has been used against feminist movements, against marginalized people, to silence them. When you talk about issues like colonization, racism, any of these issues, your posts actually don't get shown. And this is the problem, because we don't have transparency on the algorithms that show or hide information, and really, all of us are at the mercy of the moral and political ideologies of whoever owns that platform. If they're a right-wing, anti-feminist person, then those are the rules of the platform, and we are all tied to those rules. So what would be lovely, in a really perfect world, would be if these algorithmic decisions were co-created by all of us, and we understood that, if we are doing child protection or counter-terrorism, we have all decided these are the things we don't want to be shown; we have decided it as government, as civil society, and as the platforms coming together. I think we really need platforms to take that accountability, to be more transparent, to do more audits, to do more research with governments and civil society, so that we're not looking at the platforms as enemies, acknowledging they do a lot, and that there is more need for us to collaborate on setting the guidelines. So, thank you.
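As an illustrative sketch of the pre-posting "design friction" described here, the snippet below pauses a flagged draft behind an are-you-sure prompt. The toxicity score, the threshold and the prompt wording are hypothetical assumptions for the example, not any platform's real workflow.

```python
# Minimal sketch of pre-posting friction: before a draft is published, a
# (hypothetical) upstream toxicity classifier is consulted; if the draft looks
# abusive, the user is shown a pause-and-reconsider prompt instead of the post
# going out immediately.

from dataclasses import dataclass
from typing import Optional


@dataclass
class FrictionDecision:
    publish_immediately: bool
    prompt: Optional[str] = None


def pre_post_check(draft: str, toxicity_score: float,
                   threshold: float = 0.8) -> FrictionDecision:
    """Decide whether to publish a draft at once or insert a reconsideration nudge.

    toxicity_score: output of some assumed upstream model in [0, 1]; in a real
    system it would be computed from `draft`, but it is passed in here to keep
    the sketch self-contained.
    """
    if toxicity_score < threshold:
        return FrictionDecision(publish_immediately=True)
    return FrictionDecision(
        publish_immediately=False,
        prompt="This post may come across as abusive. Do you still want to share it?",
    )


# Usage: the caller shows decision.prompt and only publishes if the user confirms.
decision = pre_post_check("example draft", toxicity_score=0.9)
print(decision.publish_immediately, decision.prompt)
```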


Alishah Shariff: Thanks, Neema. So yeah, just having that multistakeholder voice in shaping the things that govern the platforms we interact with, but also I really liked your point on introducing design friction. I think that's a really interesting one. And so finally we'll turn to Sandra, and then we'll go to questions from the floor.


Sandra Maximiano: So I couldn't agree more with what has been said so far. Think about this: if you wanted to do a skydiving activity, you go to a firm, sign up for the service, and you always get a briefing about the security and safety measures you need to take. You are buying a service, and the firm that offers that service is required to provide that briefing. What I would really like is for online platforms that are providing us a service to also be required to give at least these briefings to all of us about safety and about the measures we need to take as human beings. We need to be aware of our cognitive biases, as I said, and of how all this content and all this online interaction may impact our decisions and our behavior. So I think they should be obliged to provide us that sort of information. Then, what is illegal offline should be illegal online; that's the main principle. But then we have this gray area: what is not illegal offline but should perhaps be made illegal online, or taken down. And here I'm more in favour of measures like nudge interventions, applying these behavioral insights, increasing awareness, giving more education, improving digital literacy and, of course, making us better users of online content, trying to be aware of what is there that can really damage us. But of course, it needs to be much easier for users to complain to platforms, and that's one of the biggest problems nowadays. We can see, as digital services coordinators, that the first step users have to take is to complain to a platform, and that is very hard; it's even hard to know whom they can contact. This is something platforms need to be responsible for: take those complaints seriously and respond to users appropriately. And on algorithms, more audits are really needed; regularly auditing algorithms for bias can help identify and correct discriminatory patterns. Diverse development teams are also something platforms should look for: building diverse teams of developers and stakeholders can help mitigate biases in algorithms. Then transparency and accountability: making algorithms more transparent can allow users to understand how decisions are made, which can also help identify potential discrimination. And, again, giving users more education. Also, playing again with the behavioral side, default settings are a very important point for behavioral economists. Setting stronger privacy defaults can protect vulnerable groups; for instance, social media platforms can make private accounts the default setting for children, ensuring that their information is more secure unless they choose to change it. So changing the defaults, playing with those, is also very, very important. So, basically, we have to be aware of these cognitive biases, and platforms should give us more information about the cognitive biases that all of us face, give us briefings, information and education, and be more accountable and transparent.
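A minimal sketch of the "safer defaults" idea raised here follows, assuming a hypothetical age cut-off and setting names: minors simply start from the most protective configuration unless they later choose to change it.

```python
# Minimal sketch of privacy-protective defaults at account creation. The field
# names and the age cut-off are illustrative assumptions, not any platform's
# real configuration.

from dataclasses import dataclass


@dataclass
class PrivacyDefaults:
    private_account: bool
    allow_messages_from_strangers: bool
    show_in_search: bool


def defaults_for_age(age: int, adult_age: int = 18) -> PrivacyDefaults:
    """Return stricter default settings for users below the (assumed) adult age."""
    if age < adult_age:
        return PrivacyDefaults(private_account=True,
                               allow_messages_from_strangers=False,
                               show_in_search=False)
    return PrivacyDefaults(private_account=False,
                           allow_messages_from_strangers=True,
                           show_in_search=True)


print(defaults_for_age(14))  # minors start from the most protective settings
print(defaults_for_age(30))
```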


Alishah Shariff: Thank you, Sandra. That’s great. I think that’s been a really thought-provoking set of interventions on that question and now we will open to the floor. We’ve got about 15 minutes. So, if you’d like to ask a question, I’d encourage you to go to the microphone at the front just so that we can make sure everyone can hear and we’ll put these headsets in.


Anusha Rahman Khan: Thank you very much. I'm Anusha Rahman Khan. I'm a former minister for information technology and telecommunications; I remained minister for five years, and I'm somebody who enacted the cybercrime law in 2016, which introduced 28 new penalties and criminalized these offenses. Violating the dignity of a natural person became an offense that would result in a criminal penalty of going to jail or being fined. So we all know that it is important to legislate. We also know that when we are legislating and creating new offenses of this nature, the interest lobbies, the interest groups, come out very strongly against such activities, and we all know that the funding is provided by the commercial interest holders. So when in 2016 I was trying to make the enactment, I had huge resistance from the interest groups, and at that time it was difficult for people to appreciate how they were being played in the hands of the commercial interests. And then I noticed that similar people, similar interest groups, made commercial interests for themselves: from the law that was enacted, we later found out that the same interest groups saw it as an opportunity to generate revenue for their own interests. So this is a game that is being played globally, and by now we have seen the games that are being played for this revenue generation at the cost of the dignity of a natural person. It is not just the women, not just the children, not just the girls; it is all the people on this globe who are affected by abuse online. Now, the questions. My question and my ask is to the Minister from Malaysia. I've heard you and you are very eloquent, and your clarity is really appreciated. What do you think, in your experience, after legislating in Malaysia: have you been able to overcome the difficulties the enforcement entails? Because I feel, having been the former minister and also now chairing the standing committee, and having been part of the information technology system for the last 32 years, that the time has come for us to stop begging the social media platforms. Because we cannot continue to remain hostage to requests made for the welfare of our citizens. So what is it that we can do together to make sure that we introduce mechanisms where we do not expose our children, our girls and our women to the hands of those people who probably have a different philosophy about content online? People sitting, perhaps, in the West have a different ideology and a different legal system governing them, but people sitting in the East have a different value system. We are a country where a single aspersion on a girl can cause her to jump from the window without waiting for the content to get removed. This has been the major issue for me: that we in the East and the Far East live in a different value system. What is it that we can come together on today and bring out as a solution? I do not think that the commercial interest and the revenue generation are going to allow you to provide the civil protection that is needed. So maybe you could guide me and tell me what is in your mind that we need to do, and come forward with some very solid recommendations. Thank you.


Alishah Shariff: Thank you. Are you happy to answer that? I think maybe if we could do a really quick response to that one and then maybe also.


Teo Nie Ching: Thank you, madam. Thank you for your questions. Frankly speaking, after what we have been trying to do in Malaysia, passing the law is easy. Being in government means that we have the majority in Parliament, so passing the law is relatively easy. Of course we have to do a lot of engagement, consultation, etc., but passing the law itself is not too difficult. However, as you rightly pointed out, enforcing it is super, super difficult. It is super, super challenging, and as I mentioned to everyone here just now, we need to admit that in Malaysia, even though we have introduced this licensing regime, supposed to be implemented since the 1st of January this year, until today only X, TikTok (that is, ByteDance) and Telegram, which have more than 8 million users in Malaysia, have come to us to get the license. Until today, Meta and Google have yet to apply for a license from the Malaysian government. So the next question would be: what can we do? First of all, I think it is too difficult for Malaysia alone to deal with these tech giants. It's too difficult. So I'm really hoping that we can have a common standard imposed on these social media platforms. My neighbouring country, Singapore, is doing something which I myself think is a good idea: they impose a duty on Meta, and Meta must verify the identity of every advertiser if the advertisement is targeting Singapore citizens. And Meta actually is doing that, partly because Meta has an office in Singapore and is deemed to be licensed as well. So my question would be: why can't you do it for Malaysia? Because if you verify the identity of the advertiser, then it will be much, much easier for us to identify who the scammers are and who is behind these accounts promoting online gambling, etc. Why are you only doing it in Singapore and not the rest of the world? So to me, it is very, very important that we have one international organisation identifying the responsibilities that should be carried out by the platforms, instead of one individual country doing so, because as Malaysia our negotiation power is just too limited. And at the same time, to overcome the issue that the standard is set by the West, I think it is very, very important for us to engage these platforms as a bloc. For example, instead of Malaysia trying to engage with these platforms, we are hoping that ASEAN as a whole can engage with them. If you engage with Malaysia alone, maybe they worry that the Malaysian government will abuse its power to restrict freedom of expression; but how about ASEAN as a bloc? As 10 ASEAN countries, we have similar cultures, we understand each other better, and therefore we should be able to set a standard that actually meets our cultural, historical and religious background, etc. So I think it is important for us not to apply one single standard, but to understand the world as multipolar, with different regions able to sit down and discuss the standards that should be imposed on platforms in each region. That is something I would really like to propose. Thank you.


Alishah Shariff: Thank you. Okay, there's going to be some future cooperation here, so that's great. I'll turn briefly to Nighat, who also wanted to provide some comments, and if we can keep them short, that would be great.


Nighat Dad: Very briefly. I think governments really need to understand that we are here in a multi-stakeholder spirit, and when we make national policies, multi-stakeholder means government, industry and civil society. Civil society is a critical space because, when governments present policies and regulations, it's the role of civil society to think of critical points and nuances and to hold the government accountable. When we are talking about accountability, it's about all powerful actors, government and platforms. Thank you.


Alishah Shariff: Yeah, that’s a really important perspective to bring. Okay, we’ll go to our next question in the room.


Andrew Campling: Thank you. Good morning. My name is Andrew Campling. I run a consultancy, and I'm a trustee of the Internet Watch Foundation, which, with partner hotlines, finds and takes down CSAM (child sexual abuse material) around the world. Over 300 million children annually are the victims of technology-facilitated sexual abuse and exploitation; that's about 14% of the world's children every year. So with that in mind, does the panel agree that we should mandate the use of privacy-preserving age estimation or verification technology to stop children from accessing adult platforms and adult content, and also to stop adults from accessing child-centric platforms and opening child accounts so they can target children? And also, does the panel agree that we should make better use of technologies like client-side scanning to prevent messaging platforms like WhatsApp from being used to share CSAM at scale around the world, which you can do in a privacy-preserving way? Thank you.


Alishah Shariff: Thank you. I think we'll take one more question, and then I'll open it up.


Audience: Thank you very much, and I must start by congratulating the panel. It looks like there is a bit less testosterone on the panel today; it was a girls' day this morning. But my name is John Kiariye from Kenya, and mine is more of a comment: seated at the IGF, we are able to have a conversation around what it is that regulators can do, and regulators have other platforms through which to learn what to do with the technology that is available to us. But if we are talking about human-centered design, we've got to remind ourselves that the offenders are human and the victims are human, and we have to look beyond what is happening online and see if there are opportunities in already existing human structures in the community. Because some of the technical things we talk about at the IGF are not practically applicable in some jurisdictions. For example, we come from places where big tech has platforms that people are interacting with, but no physical presence in some of these jurisdictions, so you have no place to go and have a conversation with these big tech companies to ask them to do some of the things we are saying at the IGF. But if we look at an already existing structure within the community, then we might find an opportunity to empower the victim, in the sense that if it is a child who is under threat, in a school there are already existing social structures. There are social clubs. For example, among the lessons we are learning from Kenya, we've got clubs like the scouting clubs and the girl guides that already exist. And for young people, we know that if you make it cool, for them it becomes the truth. So what if this discussion starts offline for the victim, so that by the time they are getting online, they already have the tools and they're empowered? Because the bully is a human and the victim is a human. If we concentrate only on the technology, we are losing a very big part, because this young person can be trained to be a bully, and they can do that online; but if they were trained offline long before they got onto the internet, then maybe it can become a movement that saves a generation. So my point, and the comment, is that even as we are focusing on the technology, let us not forget that this is technology for humans, and there are already existing social setups. These social setups could be family, they could be school, they could be clubs, and all the other social setups that already exist before we even get online. We will leave it to the regulator to deal with big tech, because that animal is too big for the victim to face up to. I thank you.


Alishah Shariff: Thank you, thank you. I think we'll answer Andrew's question first. There were two parts to that: one around age verification or the creation of child accounts and whether that could be a preventative action, and then also something on client-side scanning on device and whether that's a good kind of proactive measure. I don't know if there's anyone in particular who wants to take that one. Would be good to hear from, yes? Yeah, okay, Neema and then Raoul, okay.


Neema Iyer: I think absolutely not. So I live in Australia, and we just passed a social media ban on children in the past year. I have no idea what the plan for implementation is, and it really means giving all your data to these platforms. I think it's a very slippery slope to a bad place. So my general opinion is no: we as humans need some level of privacy in our lives. And the fact is that people will get around all these things anyway, so I think there are better interventions than taking away the last shred of our privacy.


Alishah Shariff: Thank you, and Raoul?


Raoul Danniel Abellar Manuel: On our end in the Philippines, we've had this observation that sometimes the best way to solve a problem is to find the underlying basis for the problem, because directly confronting it may not be enough. For example, in the case of CSAM and how young people are being used for these very bad objectives, we've had a realization that the economic basis is really a primary factor that drives children, and unfortunately their relatives, to this kind of livelihood so that they can live from day to day. So we also have measures to address issues like poverty, child hunger and all that, alongside, of course, preventing the spread and proliferation of these kinds of materials that exploit children. And I would just like to refer also to another point regarding how difficult it is, especially for those in the Global South, to hold social media platforms to account. I can sympathize with our colleagues here, and I also agree that we need to form a coordinated response, because in our case, when we invited representatives from these social media platforms, they did not attend our first two hearings, and their reason was simply that they did not have an office in the Philippines, so why bother to attend? We were insulted by that kind of response, because we just want concerted action on the issues we are talking about. So we kind of threatened them with a subpoena and the prospect of arrest if they would not attend the hearing. Fortunately, by the third hearing they attended, and that was the start of them sending representatives. But of course, we can't act alone, and we really have to work collaboratively. Thank you.


Alishah Shariff: Thank you. We actually only have a couple of minutes left. I think, Sandra, would you like to offer a kind of final comment?


Sandra Maximiano: Yes, just to add to and reinforce the point that what is illegal offline should continue to be illegal online. And if we restrict children from accessing certain services and certain contexts offline, I think we should take the same approach online. But that doesn't mean, of course, making every account private and banning every sort of possibility or behavior; there are better approaches than just going for extreme options. I would also like to add that this last intervention was very important, and thanks a lot for it, because we are humans, and we need, of course, to be aware of our shortcomings, our biases as humans. That needs to be taught, as was said, in schools, and we need to be better prepared to deal with this use of cognitive biases online, which is basically using technology to take advantage of them. So we need to be more aware of that, and we need more digital literacy for sure. But let me also add something as an economist. We are in a world where there are lots of incentives for platforms to start developing features that take safety and security into account and to make a profit out of it. Here I'm just talking as an economist, and we will see that happening. There should be some minimum standards that apply to everyone, and regulators should impose those, but I'm also pretty sure that there will be all sorts of features on sale that we, as users, will be able to buy and add on to our systems to increase the level of protection. So there is a huge market out there that is going to explore safety and security, and we should be prepared, as consumers and users, to make that choice. It will depend on our risk aversion, our risk and safety preferences, but it will come.


Alishah Shariff: Thank you, Sandra. I think that is all we have time for today, so I’d like to say a massive thank you.


Arda Gerkens: Could I make one remark, which I think is really important? A positive message. Look at the way we are here together as regulators. I've been at the IGF for 15 years. There's a lot changing, and there are a lot of politicians involved in that change. What we need to do now is come together globally, because, indeed, Malaysia has a problem with some platforms, and other countries might have problems with other platforms. Once we are able to get some platforms to obey our regulations, other ones will pop up. We really need to work together globally. We're part of the Global Online Safety Regulators Network, GOSRN. That's a new initiative. I invite everybody who wants to be a part of it: please go to the GOSRN website. Let's see how we can tackle this problem, because it's a global problem, and we need to work together here. Thank you.


Alishah Shariff: Thank you, Arda. I think that's really the takeaway from this session for me: having this kind of multistakeholder, multidisciplinary discussion is the only way we will be able to tackle some of these challenges, and also to take into account intersectionality, geographical differences, and the way platforms behave differently in different jurisdictions. Just very quickly, the official opening of the IGF is at 11 a.m. in the plenary room on the ground floor, so we hope to see you there. Thanks once again to all the panelists, to all of you, and to our online audience. Thank you.


Neema Iyer

Speech speed: 179 words per minute
Speech length: 1577 words
Speech time: 525 seconds

One in three women across Africa experience online violence, leading many to delete their online identities due to lack of awareness about reporting mechanisms

Explanation

Research conducted with 3,000 women across Africa revealed that approximately one-third had experienced some form of online violence. This abuse led many women to completely delete their online presence because they were unaware of available reporting mechanisms and felt authorities would not listen to them if they sought help.


Evidence

Research study with 3,000 women across Africa showing one in three women experienced online violence


Major discussion point

Online Safety Challenges for Marginalized Communities


Topics

Human rights | Sociocultural


Agreed with

– Nighat Dad
– Sandra Maximiano

Agreed on

Marginalized communities face disproportionate online harm with inadequate support systems


Women politicians face sexist and sexualized abuse online, causing many to avoid having online profiles or participating in digital spaces

Explanation

A social media analysis of women politicians during Uganda’s 2021 election showed they were frequently targets of sexist and sexualized abuse. The fear of such abuse meant many women politicians chose not to have online profiles or participate in digital political discourse.


Evidence

Social media analysis of women politicians during the Ugandan 2021 election


Major discussion point

Online Safety Challenges for Marginalized Communities


Topics

Human rights | Sociocultural


AI systems create grave issues including under-representation, data bias, algorithmic discrimination, digital surveillance, and labor exploitation affecting marginalized women

Explanation

Research on AI’s impact on women revealed multiple systemic problems that disproportionately affect marginalized women. These include biased data representation, discriminatory algorithms, increased surveillance capabilities, and threats to low-wage jobs typically occupied by women.


Evidence

Research study on the impact of AI on women showing under-representation, data bias, algorithmic discrimination, digital surveillance, censorship, labor exploitation, and threats to low-wage jobs


Major discussion point

Online Safety Challenges for Marginalized Communities


Topics

Human rights | Economic


Intersecting inequalities create large gaps in digital literacy and access, with platforms often not prioritizing smaller markets or local languages

Explanation

Marginalized communities face multiple overlapping disadvantages including limited digital access and skills. Platforms often neglect smaller markets, with countries like Uganda having 50+ languages but lacking platform support for local languages due to limited market share.


Evidence

Uganda has about 50 languages spoken but platforms don’t prioritize smaller countries due to limited market share


Major discussion point

Online Safety Challenges for Marginalized Communities


Topics

Development | Sociocultural


Laws designed to protect are often weaponized against women and marginalized groups, being used to punish rather than protect them

Explanation

Cybercrime laws, data protection laws, and other protective legislation are frequently misused to target women, activists, and dissenting voices. Instead of providing protection, these laws become tools of oppression against the very groups they were meant to safeguard.


Evidence

Cybercrime laws and data protection laws have been used against women, dissenting voices, and activists to punish rather than protect them


Major discussion point

Online Safety Challenges for Marginalized Communities


Topics

Legal and regulatory | Human rights


Agreed with

– Nighat Dad

Agreed on

Laws designed to protect can be weaponized against vulnerable groups


Content takedown is reactive and happens after damage is done, with need for design friction to prevent harmful content sharing

Explanation

Current content moderation relies on reactive takedown processes that occur after harmful content has already been shared and caused damage. There’s a need for proactive design elements that create friction to prevent harmful content from being shared in the first place, though this approach has its own risks of censorship.


Evidence

Personal experience reporting content that doesn’t get taken down, and acknowledgment that damage is already done even when content is eventually removed


Major discussion point

Platform Accountability and Content Moderation


Topics

Legal and regulatory | Sociocultural


Agreed with

– Arda Gerkens
– Nighat Dad
– Teo Nie Ching
– Raoul Danniel Abellar Manuel

Agreed on

Platform accountability requires transparency beyond content takedowns


Algorithmic decisions should be co-created by governments, civil society, and platforms together rather than left to platform owners’ ideologies

Explanation

Current algorithmic content moderation reflects the moral and political ideologies of platform owners, creating unfair power dynamics. A collaborative approach involving multiple stakeholders would ensure more balanced and transparent decision-making about what content should be promoted or suppressed.


Evidence

Shadow banning has been used against feminist movements and marginalized people discussing issues like colonization and racism


Major discussion point

Platform Accountability and Content Moderation


Topics

Legal and regulatory | Human rights


Nighat Dad

Speech speed: 137 words per minute
Speech length: 1238 words
Speech time: 542 seconds

Digital Rights Foundation helpline has handled over 20,000 complaints since 2016, with hundreds of young women reporting monthly about blackmail and harassment

Explanation

The Digital Rights Foundation’s helpline has processed over 20,000 complaints since 2016, receiving hundreds of reports monthly from young women, female journalists, influencers, politicians, and students. These complaints primarily involve blackmail and harassment through non-consensual intimate images.


Evidence

Over 20,000 complaints handled since 2016 through digital security helpline, with hundreds of complaints from young women monthly


Major discussion point

Online Safety Challenges for Marginalized Communities


Topics

Human rights | Cybersecurity


Agreed with

– Neema Iyer
– Sandra Maximiano

Agreed on

Marginalized communities face disproportionate online harm with inadequate support systems


Rise of AI-generated deepfake content is causing reputational damage, emotional trauma, and social isolation, with some cases leading to suicide

Explanation

The increasing prevalence of deepfake technology has created new forms of harm where people are blackmailed and silenced using intimate images they never consented to, some of which aren’t even real. The psychological impact includes severe reputational damage, emotional trauma, and in extreme cases, suicide.


Evidence

Rise in deepfakes over the last one and a half years, with cases of women committing suicide due to the harm


Major discussion point

Online Safety Challenges for Marginalized Communities


Topics

Human rights | Cybersecurity


Platforms respond quickly to cases involving US celebrities but delay response to cases from Global South, highlighting inequality in treatment

Explanation

Meta Oversight Board cases revealed significant disparities in platform response times based on geography and prominence. A US celebrity’s deepfake case received immediate attention due to media coverage, while an Indian woman’s case wasn’t flagged until the Oversight Board intervened.


Evidence

Meta Oversight Board reviewed two deepfake cases – US celebrity case received quick response due to media attention, while Indian case wasn’t flagged until Oversight Board raised it


Major discussion point

Platform Accountability and Content Moderation


Topics

Human rights | Legal and regulatory


Agreed with

– Arda Gerkens
– Teo Nie Ching
– Raoul Danniel Abellar Manuel
– Neema Iyer

Agreed on

Platform accountability requires transparency beyond content takedowns


Meta’s recent scaling back of proactive enforcement systems shifts burden of content moderation onto users, particularly problematic in regions where reporting systems are in English only

Explanation

Meta and other platforms have reduced their proactive content moderation, focusing mainly on illegal or high-severity cases while expecting users to handle more moderation themselves. This is especially problematic in South Asia where users may not know how to report, systems are only in English, and fear of backlash is significant.


Evidence

Meta has scaled back proactive enforcement systems; reporting systems are in English, not regional languages; documented cases in India and Pakistan where women reporting abuse faced further harassment


Major discussion point

Platform Accountability and Content Moderation


Topics

Human rights | Sociocultural


Agreed with

– Neema Iyer

Agreed on

Laws designed to protect can be weaponized against vulnerable groups


Raoul Danniel Abellar Manuel

Speech speed: 132 words per minute
Speech length: 1436 words
Speech time: 650 seconds

Philippines passed Republic Act 11930 addressing online sexual abuse of children with extraterritorial jurisdiction provisions

Explanation

The Philippines enacted Republic Act 11930 in July 2022 to combat online sexual abuse and exploitation of children. A key component is the assertion of extraterritorial jurisdiction, allowing the state to prosecute offenses that commence in the Philippines or are committed abroad by Filipino citizens against Philippine citizens.


Evidence

Republic Act 11930 lapsed on July 30, 2022, includes extraterritorial jurisdiction provisions recognizing coordinated networks involving multiple locations


Major discussion point

Legislative and Regulatory Responses


Topics

Legal and regulatory | Cybersecurity


House of Representatives approved expanded anti-violence bill defining psychological violence through electronic devices as violence against women

Explanation

The Philippine House of Representatives passed legislation expanding the definition of violence against women to include psychological violence committed through electronic or ICT devices. However, the bill still awaits Senate approval in the bicameral system.


Evidence

House of Representatives approved the expanded anti-violence bill, but waiting for Senate deliberations in the bicameral system


Major discussion point

Legislative and Regulatory Responses


Topics

Legal and regulatory | Human rights


Amendments to Safe Spaces Act set higher standards for government officials promoting discrimination through digital platforms

Explanation

The Philippines approved committee-level amendments to the Safe Spaces Act that establish stricter standards for government officials who promote sexual harassment or discrimination against LGBT communities through digital or social media platforms.


Evidence

Amendments approved at committee level targeting government officials who discriminate against LGBT community through digital platforms


Major discussion point

Legislative and Regulatory Responses


Topics

Legal and regulatory | Human rights


Pending bill seeks to criminalize ‘red tagging’ – labeling individuals as state enemies or terrorists without basis

Explanation

The House of Representatives has a pending bill to criminalize the practice of ‘red tagging’ – falsely labeling individuals or groups as state enemies, subversives, or terrorists without proper basis. The Supreme Court has adopted this term, recognizing it as a source of harm that extends into the physical world.


Evidence

Supreme Court adopted the term ‘red tagging’ and recognized it as causing harm that transcends into the physical world


Major discussion point

Legislative and Regulatory Responses


Topics

Legal and regulatory | Human rights


Platforms should proactively report sources of harmful content to government rather than just reacting to individual posts

Explanation

Beyond content takedowns, platforms should take greater responsibility by proactively identifying and reporting sources of harmful content to government authorities. This would shift from reactive individual post removal to proactive source identification, working with independent media and digital coalitions.


Evidence

Platforms can monitor notable sources of harmful content including bullying, hate speech, indiscriminate tagging, scams, and should work with independent media and digital coalitions


Major discussion point

Platform Accountability and Content Moderation


Topics

Legal and regulatory | Cybersecurity


Agreed with

– Arda Gerkens
– Nighat Dad
– Teo Nie Ching
– Neema Iyer

Agreed on

Platform accountability requires transparency beyond content takedowns


Social media platforms initially refused to attend Philippine parliamentary hearings, claiming no obligation due to lack of physical office presence

Explanation

When the Philippine House of Representatives invited social media platform representatives to hearings, they initially refused to attend, stating they had no office in the Philippines and therefore no obligation to participate. Only after threats of subpoenas and arrest did they begin attending by the third hearing.


Evidence

Platforms did not attend first two hearings claiming no office in Philippines; attended third hearing after threats of subpoena and arrest


Major discussion point

International Cooperation and Enforcement Challenges


Topics

Legal and regulatory | Economic


Agreed with

– Teo Nie Ching
– Anusha Rahman Khan

Agreed on

Individual countries lack sufficient power to regulate global tech platforms effectively


Economic factors driving child exploitation must be addressed alongside technical measures to effectively combat child sexual abuse material

Explanation

The root cause of child sexual abuse and exploitation often lies in economic desperation, where poverty drives children and their families to engage in such activities for daily survival. Effective solutions must address underlying economic issues like poverty and child hunger alongside technical and legal measures.


Evidence

Philippines ranks as hotspot for online sexual abuse of children; economic basis drives children and relatives to this livelihood for day-to-day survival


Major discussion point

Age Verification and Privacy Concerns


Topics

Development | Cybersecurity


Teo Nie Ching

Speech speed: 153 words per minute
Speech length: 1789 words
Speech time: 699 seconds

Malaysia amended Communication and Multimedia Act after 26 years, increasing penalties for child sexual abuse material and grooming

Explanation

Malaysia made its first amendment to the Communication and Multimedia Act in 26 years, significantly increasing penalties for dissemination of child sexual abuse material, grooming, and similar communications through digital platforms. The law imposes heavier penalties when minors are involved and grants the Malaysian Communications and Multimedia Commission authority to instruct service providers to block or remove harmful content.


Evidence

First amendment in 26 years to Communication and Multimedia Act, with heavier penalties when minors are involved and new powers for MCMC to instruct content blocking/removal


Major discussion point

Legislative and Regulatory Responses


Topics

Legal and regulatory | Cybersecurity


Malaysia developed code of conduct for social media platforms with over 8 million users, though major platforms like Meta and Google have not applied for licenses

Explanation

Malaysia implemented a licensing regime with a code of conduct targeting major social media platforms serving over 8 million users (about 25% of Malaysia’s 35 million population). However, despite the January 2025 implementation date, major platforms Meta and Google have not applied for licenses, while only X, TikTok, and Telegram have complied.


Evidence

Licensing regime for platforms with 8+ million users (25% of 35 million population); only X, TikTok, and Telegram applied for licenses while Meta and Google have not


Major discussion point

Legislative and Regulatory Responses


Topics

Legal and regulatory | Economic


Built-in reporting mechanisms are ineffective, requiring even verified public figures to compile links and send to regulators for content removal

Explanation

Platform reporting systems are inadequate, as demonstrated by cases where even verified public figures like badminton player Dato Lee Chong Wei cannot successfully report scam content using their accounts. Instead, they must manually compile links and send them to regulators, who then forward them to platforms for removal.


Evidence

Dato Lee Chong Wei, a famous badminton player with verified Facebook account, cannot successfully use built-in reporting and must send links to MCMC for forwarding to Meta


Major discussion point

Platform Accountability and Content Moderation


Topics

Legal and regulatory | Economic


Agreed with

– Arda Gerkens
– Nighat Dad
– Raoul Danniel Abellar Manuel
– Neema Iyer

Agreed on

Platform accountability requires transparency beyond content takedowns


Platforms lack transparency about actions taken against scammers and advertisers, making accountability difficult to assess

Explanation

While platforms may remove scam-related posts, there’s no transparency about what actions are taken against the actual scammers or those who sponsored the posts. Malaysia lacks access to data about advertising revenue collected from their jurisdiction, making it difficult to hold platforms accountable for their broader responsibilities.


Evidence

No transparency on actions against scammers who sponsor posts; no access to data on advertising revenue collected from Malaysia or ASEAN region


Major discussion point

Platform Accountability and Content Moderation


Topics

Legal and regulatory | Economic


Agreed with

– Arda Gerkens
– Nighat Dad
– Raoul Danniel Abellar Manuel
– Neema Iyer

Agreed on

Platform accountability requires transparency beyond content takedowns


Individual countries lack sufficient negotiation power when engaging with tech giants, requiring coordinated bloc approaches like ASEAN

Explanation

Malaysia’s experience shows that individual countries have limited negotiation power with major tech platforms. A coordinated approach through regional blocs like ASEAN would provide stronger negotiating positions and allow for standards that reflect regional cultural, historical, and religious contexts rather than Western-imposed standards.


Evidence

Meta complies with advertiser identity verification in Singapore but not Malaysia; Malaysia alone has insufficient negotiation power with tech giants


Major discussion point

International Cooperation and Enforcement Challenges


Topics

Legal and regulatory | Economic


Agreed with

– Raoul Danniel Abellar Manuel
– Anusha Rahman Khan

Agreed on

Individual countries lack sufficient power to regulate global tech platforms effectively


Different regions need different standards that meet their cultural, historical, and religious backgrounds rather than one-size-fits-all approaches

Explanation

Rather than applying universal Western standards, different regions should be able to establish standards that align with their specific cultural, historical, and religious contexts. Regional blocs like ASEAN, with similar cultural understanding, could set appropriate standards for platform regulation in their jurisdictions.


Evidence

ASEAN countries have similar culture and understand each other better, allowing them to set standards meeting their cultural, historical, and religious backgrounds


Major discussion point

International Cooperation and Enforcement Challenges


Topics

Legal and regulatory | Sociocultural


Arda Gerkens

Speech speed: 170 words per minute
Speech length: 1792 words
Speech time: 629 seconds

Netherlands established unique regulatory body ATKM with special powers to identify and remove terrorist content and child sexual abuse material

Explanation

The Netherlands created ATKM, a unique regulatory body with special authority to dive into and identify harmful terrorist content and child sexual abuse material. The organization can order content removal and fine non-compliant entities, representing a first-of-its-kind regulatory approach with direct content intervention powers.


Evidence

ATKM is described as unique and first regulator with special right to dive into terrorist and child sexual abuse content, with power to remove content and fine non-compliant entities


Major discussion point

Legislative and Regulatory Responses


Topics

Legal and regulatory | Cybersecurity


Terrorist groups are increasingly targeting vulnerable children through platforms discussing mental health and eating disorders for grooming and extortion

Explanation

ATKM has identified a concerning trend where extremist groups create Telegram channels focused on mental health and eating disorders to target vulnerable children. These groups extract personal information through grooming, then extort children into harmful activities like self-harm and creating sexual images, which are then distributed.


Evidence

Terrorist groups create Telegram channels for kids to discuss mental health and eating disorders, groom information, then extort them to carve bodies or make sexual images


Major discussion point

Hybrid Threats and Emerging Challenges


Topics

Cybersecurity | Human rights


Hybridization of terrorist content with child sexual abuse material is radicalizing children rapidly, leading to cases of very young potential attackers

Explanation

There’s an emerging pattern of terrorist environments containing child sexual abuse material, creating hybrid threats that rapidly radicalize vulnerable children. This hybridization has accelerated to the point where very young children in Europe have been found on the verge of committing attacks.


Evidence

Finding child sexual abuse material within online terrorist environments; recent cases in Europe of very young kids at the verge of committing attacks


Major discussion point

Hybrid Threats and Emerging Challenges


Topics

Cybersecurity | Human rights


Coordinated approach needed to tackle hybrid problems that span multiple regulatory domains

Explanation

The hybrid nature of emerging threats requires coordination across different regulatory domains and organizations. While ATKM focuses on terrorism and child sexual abuse, issues like eating disorders and mental health fall under other organizations’ purview, necessitating collaborative approaches to address interconnected problems.


Evidence

ATKM cannot address eating disorders or mental health problems directly, but these issues are connected to terrorist grooming activities


Major discussion point

Hybrid Threats and Emerging Challenges


Topics

Legal and regulatory | Cybersecurity


Sandra Maximiano

Speech speed: 123 words per minute
Speech length: 2154 words
Speech time: 1049 seconds

Users are affected by cognitive biases like confirmation bias, overconfidence bias, and optimism bias that influence online behavior and decision-making

Explanation

Behavioral economics reveals that users are not rational decision-makers but are influenced by psychological factors and cognitive biases. These include confirmation bias (seeking information that confirms existing beliefs), overconfidence bias (overestimating security knowledge), and optimism bias (underestimating personal risk of scams or breaches).


Evidence

Examples include confirmation bias leading to echo chambers, overconfidence bias causing risky behaviors like weak passwords, and optimism bias leading to inadequate precautions against online threats


Major discussion point

Behavioral Economics and Digital Safety


Topics

Human rights | Sociocultural


Vulnerable groups including children and people with disabilities suffer more from these biases, requiring regulators to account for this in policy design

Explanation

While all users experience cognitive biases, certain vulnerable populations including children, disabled individuals, and those with mental health problems are disproportionately affected. Regulators must understand and account for these heightened vulnerabilities when designing policies and interventions.


Evidence

Children, disabled groups, and people with mental health problems have cognitive biases influencing their decisions even more than general population


Major discussion point

Behavioral Economics and Digital Safety


Topics

Human rights | Development


Agreed with

– Neema Iyer
– Nighat Dad

Agreed on

Marginalized communities face disproportionate online harm with inadequate support systems


AI systems can exploit cognitive biases and overlook vulnerabilities, potentially causing significant harm even without intentional exploitation

Explanation

AI increases the economic value of exploiting cognitive biases and can cause harm to vulnerable groups even without malicious intent. For example, chatbots trained on typical adult conversations may use metaphors and jokes that individuals with autism interpret literally, potentially leading to harmful actions.


Evidence

Example of chatbots for autism training that may incorporate jokes and metaphors from typical adult conversations, which individuals with autism may interpret literally and act upon


Major discussion point

Behavioral Economics and Digital Safety


Topics

Human rights | Infrastructure


Behavioral economics can enhance online protection through better user interface design, nudging safe behavior, and using social norms messaging

Explanation

The same behavioral insights that create vulnerabilities can be redirected to enhance protection. This includes designing user-friendly interfaces that consider cognitive load, implementing nudges that guide safer behaviors, and using social norms messaging to promote positive online conduct.


Evidence

Examples include framing cyberbullying information clearly, using social norms to highlight that most children don’t engage in bullying, and implementing reward systems for positive behavior


Major discussion point

Behavioral Economics and Digital Safety


Topics

Human rights | Sociocultural


Regulators should use the same behavioral insights that firms use for marketing, but redirect them toward safety and protection goals

Explanation

Marketing strategies extensively use behavioral insights to influence consumer behavior and increase sales. Regulators and policymakers should adopt these same techniques but redirect them toward promoting safety, security, and positive online behavior rather than commercial objectives.


Evidence

Marketing strategies use behavioral insights to sell more; regulators should use the same weapons but with different goals in mind


Major discussion point

Behavioral Economics and Digital Safety


Topics

Legal and regulatory | Economic


Platforms should provide safety briefings to users similar to how other service providers are required to give security information

Explanation

Just as service providers in other industries (like skydiving) are required to provide safety briefings before service delivery, online platforms should be mandated to provide users with information about cognitive biases, online risks, and safety measures. This would help users make more informed decisions about their online behavior.


Evidence

Comparison to skydiving services that must provide safety briefings before service delivery


Major discussion point

Platform Accountability and Content Moderation


Topics

Legal and regulatory | Human rights


What is illegal offline should remain illegal online, but extreme restriction measures may not be the best approach

Explanation

The fundamental principle should be that illegal activities offline should also be illegal online. However, when it comes to restricting access for children or implementing extreme measures like complete bans, there are better approaches than blanket restrictions that may be overly broad or ineffective.


Major discussion point

Age Verification and Privacy Concerns


Topics

Legal and regulatory | Human rights


A

Anusha Rahman Khan

Speech speed

148 words per minute

Speech length

594 words

Speech time

239 seconds

Former Pakistani minister enacted a cybercrime law in 2016 introducing 28 new penalties criminalizing violations of the dignity of natural persons

Explanation

As Pakistan's former IT and telecommunications minister, Anusha Rahman Khan enacted comprehensive cybercrime legislation in 2016 that introduced 28 new criminal penalties. The law specifically criminalized violations of the dignity of natural persons, with offenders facing jail time or fines for online abuse.


Evidence

Cyber crime law enacted in 2016 with 28 new penalties, criminalizing dignity violations of natural persons with jail time or fines


Major discussion point

Legislative and Regulatory Responses


Topics

Legal and regulatory | Human rights


Commercial interests and revenue generation priorities conflict with civil protection needs, requiring stronger international coordination

Explanation

The fundamental challenge is that commercial interest groups and revenue generation motives of platforms conflict with the need to protect citizens from online harm. This creates a situation where countries become hostage to platform policies, particularly problematic when Western platforms apply different value systems to Eastern societies where online harm can have more severe consequences.


Evidence

Interest groups funded by commercial interests resisted the cybercrime legislation; the same groups later found revenue opportunities in the law; value systems differ between East and West, where a single aspersion can drive a victim to suicide


Major discussion point

International Cooperation and Enforcement Challenges


Topics

Economic | Legal and regulatory


Agreed with

– Teo Nie Ching
– Raoul Danniel Abellar Manuel

Agreed on

Individual countries lack sufficient power to regulate global tech platforms effectively


A

Audience

Speech speed

148 words per minute

Speech length

460 words

Speech time

185 seconds

Existing social structures like schools, clubs, and family units should be leveraged to empower victims and prevent online abuse before it occurs

Explanation

Rather than focusing solely on technical solutions, existing community structures such as schools, scouting clubs, girl guides, and family units should be utilized to empower potential victims before they encounter online threats. These established social frameworks can provide foundational protection and education.


Evidence

Examples from Kenya including scouting clubs and girl guides; social clubs in schools that already exist as community structures


Major discussion point

Community-Based Solutions


Topics

Development | Sociocultural


Offline education and empowerment can prepare young people with tools before they encounter online threats

Explanation

By training and empowering young people through offline education and community programs, they can be better prepared to handle online threats when they encounter them. This proactive approach focuses on building resilience and awareness before exposure to digital risks.


Evidence

If young people are trained offline before they get online, and if that training makes safe behavior 'cool' for them, it becomes their truth and can save a generation


Major discussion point

Community-Based Solutions


Topics

Development | Sociocultural


Human-centered design must recognize that both offenders and victims are human, requiring community-level interventions alongside technical solutions

Explanation

Technology solutions alone are insufficient because both perpetrators and victims of online abuse are human beings embedded in social contexts. Effective interventions must address the human element through community-based approaches that work alongside technical measures, recognizing that many jurisdictions lack direct access to big tech platforms.


Evidence

Big tech platforms don’t have physical presence in many jurisdictions, making direct engagement impossible; both bullies and victims are human and can be influenced by community interventions


Major discussion point

Community-Based Solutions


Topics

Development | Human rights


A

Alishah Shariff

Speech speed

177 words per minute

Speech length

2027 words

Speech time

683 seconds

The digital world offers opportunities for connection, learning, and growth but also brings risks and downsides that are felt more acutely by vulnerable groups

Explanation

While digital technologies provide significant benefits for human connection and development, they simultaneously create new forms of harm and risk. These negative impacts disproportionately affect vulnerable populations including children, individuals with disabilities, and marginalized communities.


Evidence

Consequences of online harm can have ripple effects into real lives, causing distress, harm, and isolation


Major discussion point

Online Safety Challenges for Marginalized Communities


Topics

Human rights | Development


Effective policy responses to online harms require targeted, inclusive, and enforceable approaches developed through multistakeholder collaboration

Explanation

Addressing online safety challenges requires policy frameworks that are specifically designed for different contexts, include diverse perspectives, and can be effectively implemented. This necessitates collaboration between parliamentarians, regulators, and advocacy experts across different geographies.


Evidence

Session brings together diverse panel of parliamentarians, regulators, and advocacy experts across range of geographies and contexts


Major discussion point

International Cooperation and Enforcement Challenges


Topics

Legal and regulatory | Human rights


A

Andrew Campling

Speech speed

125 words per minute

Speech length

155 words

Speech time

73 seconds

Over 300 million children annually are victims of technology-facilitated sexual abuse and exploitation, representing about 14% of the world’s children

Explanation

The scale of child sexual abuse and exploitation facilitated by technology is massive, affecting approximately one in seven children globally each year. This statistic demonstrates the urgent need for comprehensive protective measures in digital spaces.


Evidence

Over 300 million children annually are victims, representing about 14% of world’s children; Internet Watch Foundation finds and takes down CSAM material with partner hotlines around the world


Major discussion point

Age Verification and Privacy Concerns


Topics

Cybersecurity | Human rights


Privacy-preserving age estimation and verification technology should be mandated to prevent children from accessing adult platforms and adults from targeting children

Explanation

Technical solutions like age verification can help create barriers that prevent inappropriate access to platforms while maintaining privacy protections. This includes stopping children from accessing adult content and preventing adults from creating child accounts to target minors.


Evidence

Need to stop children from accessing adult platforms and adult content, and stop adults from accessing child-centric platforms and opening child accounts to target children


Major discussion point

Age Verification and Privacy Concerns


Topics

Cybersecurity | Human rights


Disagreed with

– Neema Iyer

Disagreed on

Age verification and privacy-preserving technologies for child protection


Client-side scanning technology should be better utilized to prevent messaging platforms from being used to share child sexual abuse material at scale

Explanation

Privacy-preserving technologies like client-side scanning can help detect and prevent the distribution of child sexual abuse material through encrypted messaging platforms. This approach can maintain user privacy while providing protection against large-scale distribution of harmful content.


Evidence

Messaging platforms like WhatsApp are being used to share CSAM at scale around the world, which can be addressed in a privacy-preserving way


Major discussion point

Age Verification and Privacy Concerns


Topics

Cybersecurity | Human rights


Agreements

Agreement points

Platform accountability requires transparency beyond content takedowns

Speakers

– Arda Gerkens
– Nighat Dad
– Teo Nie Ching
– Raoul Danniel Abellar Manuel
– Neema Iyer

Arguments

Built-in reporting mechanisms are ineffective, requiring even verified public figures to compile links and send to regulators for content removal


Platforms lack transparency about actions taken against scammers and advertisers, making accountability difficult to assess


Platforms respond quickly to cases involving US celebrities but delay response to cases from Global South, highlighting inequality in treatment


Platforms should proactively report sources of harmful content to government rather than just reacting to individual posts


Content takedown is reactive and happens after damage is done, with need for design friction to prevent harmful content sharing


Summary

All speakers agreed that current platform accountability mechanisms are insufficient, with particular emphasis on the need for transparency in content moderation processes, proactive identification of harmful sources, and addressing geographic inequalities in platform responses.


Topics

Legal and regulatory | Human rights | Economic


Individual countries lack sufficient power to regulate global tech platforms effectively

Speakers

– Teo Nie Ching
– Raoul Danniel Abellar Manuel
– Anusha Rahman Khan

Arguments

Individual countries lack sufficient negotiation power when engaging with tech giants, requiring coordinated bloc approaches like ASEAN


Social media platforms initially refused to attend Philippine parliamentary hearings, claiming no obligation due to lack of physical office presence


Commercial interests and revenue generation priorities conflict with civil protection needs, requiring stronger international coordination


Summary

Government representatives from Malaysia, Philippines, and Pakistan all acknowledged that individual nations have limited leverage against major tech platforms, emphasizing the need for coordinated international or regional approaches to regulation.


Topics

Legal and regulatory | Economic


Marginalized communities face disproportionate online harm with inadequate support systems

Speakers

– Neema Iyer
– Nighat Dad
– Sandra Maximiano

Arguments

One in three women across Africa experience online violence, leading many to delete their online identities due to lack of awareness about reporting mechanisms


Digital Rights Foundation helpline has handled over 20,000 complaints since 2016, with hundreds of young women reporting monthly about blackmail and harassment


Vulnerable groups including children and people with disabilities suffer more from these biases, requiring regulators to account for this in policy design


Summary

Civil society representatives agreed that vulnerable populations experience higher rates of online harm and face additional barriers in accessing help, requiring specialized approaches that account for their unique vulnerabilities.


Topics

Human rights | Sociocultural


Laws designed to protect can be weaponized against vulnerable groups

Speakers

– Neema Iyer
– Nighat Dad

Arguments

Laws designed to protect are often weaponized against women and marginalized groups, being used to punish rather than protect them


Meta’s recent scaling back of proactive enforcement systems shifts burden of content moderation onto users, particularly problematic in regions where reporting systems are in English only


Summary

Both civil society advocates highlighted the paradox where protective legislation and platform policies can be misused to harm the very groups they were intended to protect, particularly in Global South contexts.


Topics

Legal and regulatory | Human rights


Similar viewpoints

Both speakers emphasized the need for collaborative, multi-stakeholder approaches to platform governance and the importance of using behavioral insights to promote safer online behavior through design interventions.

Speakers

– Neema Iyer
– Sandra Maximiano

Arguments

Algorithmic decisions should be co-created by governments, civil society, and platforms together rather than left to platform owners’ ideologies


Behavioral economics can enhance online protection through better user interface design, nudging safe behavior, and using social norms messaging


Topics

Legal and regulatory | Human rights | Sociocultural


Both emphasized that technical solutions alone are insufficient and that addressing root causes through community-based interventions and socioeconomic factors is essential for effective protection.

Speakers

– Raoul Danniel Abellar Manuel
– Audience

Arguments

Economic factors driving child exploitation must be addressed alongside technical measures to effectively combat child sexual abuse material


Human-centered design must recognize that both offenders and victims are human, requiring community-level interventions alongside technical solutions


Topics

Development | Human rights | Sociocultural


Both highlighted emerging hybrid threats that exploit vulnerable populations through sophisticated targeting and manipulation techniques, requiring coordinated responses across different regulatory domains.

Speakers

– Arda Gerkens
– Nighat Dad

Arguments

Terrorist groups are increasingly targeting vulnerable children through platforms discussing mental health and eating disorders for grooming and extortion


Rise of AI-generated deepfake content is causing reputational damage, emotional trauma, and social isolation, with some cases leading to suicide


Topics

Cybersecurity | Human rights


Unexpected consensus

Rejection of extreme age verification measures

Speakers

– Neema Iyer
– Sandra Maximiano

Arguments

Content takedown is reactive and happens after damage is done, with need for design friction to prevent harmful content sharing


What is illegal offline should remain illegal online, but extreme restriction measures may not be the best approach


Explanation

Despite coming from different professional backgrounds (civil society advocacy vs. regulatory economics), both speakers rejected blanket age verification or social media bans as solutions, instead favoring more nuanced approaches that preserve privacy while promoting safety.


Topics

Legal and regulatory | Human rights


Need for behavioral and design-based interventions over purely legal approaches

Speakers

– Sandra Maximiano
– Neema Iyer
– Audience

Arguments

Regulators should use the same behavioral insights that firms use for marketing, but redirect them toward safety and protection goals


Content takedown is reactive and happens after damage is done, with need for design friction to prevent harmful content sharing


Existing social structures like schools, clubs, and family units should be leveraged to empower victims and prevent online abuse before it occurs


Explanation

Unexpectedly, speakers from regulatory, advocacy, and community perspectives all converged on the idea that behavioral interventions and proactive design changes are more effective than reactive legal measures, representing a shift from traditional regulatory thinking.


Topics

Human rights | Sociocultural | Development


Overall assessment

Summary

The speakers demonstrated strong consensus on several key issues: the inadequacy of current platform accountability mechanisms, the need for international coordination to effectively regulate global tech platforms, the disproportionate impact of online harm on marginalized communities, and the limitations of purely reactive legal approaches. There was also notable agreement on the need for more proactive, design-based interventions and multi-stakeholder collaboration.


Consensus level

High level of consensus with significant implications for policy development. The agreement across different stakeholder groups (government officials, regulators, civil society advocates) suggests these issues transcend traditional boundaries and require coordinated responses. The consensus on moving beyond reactive measures toward proactive design interventions represents a potential paradigm shift in online safety approaches. However, the challenge remains in translating this consensus into actionable policies given the power imbalances between individual nations and global tech platforms.


Differences

Different viewpoints

Age verification and privacy-preserving technologies for child protection

Speakers

– Neema Iyer
– Andrew Campling

Arguments

Blanket measures such as Australia's recently passed social media ban for children lack clear implementation plans, hand yet more personal data to platforms, and will be circumvented anyway; better interventions exist than giving up the last shred of privacy


Privacy-preserving age estimation and verification technology should be mandated to prevent children from accessing adult platforms and adults from targeting children


Summary

Andrew Campling advocates for mandatory privacy-preserving age verification technology to protect children online, while Neema Iyer strongly opposes such measures, arguing they compromise privacy and are ineffective since people will circumvent them anyway.


Topics

Cybersecurity | Human rights


Unexpected differences

Approach to addressing root causes of child exploitation

Speakers

– Raoul Danniel Abellar Manuel
– Andrew Campling

Arguments

Economic factors driving child exploitation must be addressed alongside technical measures to effectively combat child sexual abuse material


Privacy-preserving age estimation and verification technology should be mandated to prevent children from accessing adult platforms and adults from targeting children


Explanation

While both speakers are deeply concerned about child protection online, they approach the problem from fundamentally different angles. The Philippine MP emphasizes addressing underlying economic causes like poverty that drive families to exploit children, while the Internet Watch Foundation trustee focuses on technical solutions like age verification. This disagreement is unexpected because both are child protection advocates but see completely different primary solutions.


Topics

Development | Cybersecurity | Human rights


Overall assessment

Summary

The main areas of disagreement center around the balance between privacy and safety (particularly regarding age verification), the effectiveness of technical versus socioeconomic solutions for child protection, and the specific mechanisms for international cooperation in platform regulation.


Disagreement level

The level of disagreement is moderate but significant in its implications. While speakers largely agree on the problems (online harm to vulnerable groups, platform accountability issues, need for international cooperation), they diverge substantially on solutions. The privacy versus safety debate represents a fundamental tension in digital rights policy, while the technical versus socioeconomic approach to child protection reflects different philosophical frameworks for addressing online harm. These disagreements suggest that achieving consensus on specific policy measures will require careful negotiation and potentially hybrid approaches that incorporate multiple perspectives.



Takeaways

Key takeaways

Online harm disproportionately affects marginalized communities in the Global South due to intersecting inequalities, language barriers, and lack of platform prioritization


Legislative frameworks are often too narrow, focusing on takedowns rather than prevention, and can be weaponized against the very groups they aim to protect


Platform accountability requires transparency in content moderation processes, algorithmic decision-making, and actions taken against violators beyond simple content removal


Individual countries lack sufficient negotiation power with tech giants, necessitating coordinated regional or international approaches


Behavioral economics insights can be leveraged to design better safety interventions, using the same cognitive bias understanding that platforms use for engagement


Hybrid threats combining terrorism, child exploitation, and targeting of vulnerable groups through mental health platforms represent emerging challenges requiring coordinated responses


Prevention through design friction and community-based offline education is more effective than reactive content takedown measures


Multi-stakeholder collaboration between governments, platforms, and civil society is essential for developing effective and balanced online safety policies


Resolutions and action items

Invitation extended for regulators to join the Global Online Safety Regulators Network (GOSRN) to facilitate international cooperation


Proposal for ASEAN countries to engage with platforms as a bloc rather than individually to increase negotiation power


Recommendation for platforms to provide mandatory safety briefings to users similar to other service providers


Call for platforms to proactively report sources of harmful content to governments rather than just responding to individual takedown requests


Suggestion for algorithmic decision-making to be co-created by governments, civil society, and platforms together


Proposal to leverage existing community structures (schools, clubs, families) to provide offline education and empowerment before online exposure


Unresolved issues

How to effectively enforce regulations when major platforms refuse to comply with licensing requirements or attend government hearings


Balancing privacy rights with age verification and content scanning technologies for child protection


Addressing the fundamental economic incentives that drive platforms to prioritize engagement over safety


Developing culturally appropriate standards for different regions while maintaining international cooperation


Creating effective reporting mechanisms in local languages and contexts for Global South users


Preventing the weaponization of online safety laws against marginalized groups and activists


Addressing the gap between Western-designed platforms and Eastern value systems and legal frameworks


Managing the rise of platforms with no accountability mechanisms or human rights teams


Suggested compromises

Implementing minimum universal safety standards while allowing regional variations for cultural and contextual differences


Using behavioral nudges and design friction as alternatives to extreme restriction measures like complete social media bans


Combining technical solutions with community-based offline interventions rather than relying solely on either approach


Establishing transparency requirements for platform actions against violators while respecting commercial confidentiality


Creating tiered accountability systems where platforms with larger user bases face stricter requirements


Developing privacy-preserving safety technologies that protect users without compromising fundamental rights


Balancing proactive content moderation with protection against algorithmic bias and shadow banning of legitimate content


Thought provoking comments

The laws that do exist, especially in our context, have actually been weaponized against women and marginalized groups. So many of these, you know, cybercrime laws or data protection laws, have been used against women, have been used against dissenting voices, against activists, to actually punish them rather than protect them.

Speaker

Neema Iyer


Reason

This comment is deeply insightful because it reveals the paradox of protective legislation becoming a tool of oppression. It challenges the assumption that creating laws automatically leads to protection and highlights how power structures can co-opt well-intentioned regulations.


Impact

This comment fundamentally shifted the discussion from focusing solely on creating new regulations to examining how existing laws are implemented and enforced. It introduced the critical concept that legislative frameworks can have unintended consequences, setting the stage for other panelists to discuss the importance of balanced, enforceable policies.


We see more and more hybridization of these types of content mixed together with other content… we’re finding within the online terrorist environments lots of child sex abuse material. And we find that certainly vulnerable kids at the moment are at large online… these terrorist groups or these groups, extremist groups, are actually targeting vulnerable kids.

Speaker

Arda Gerkens


Reason

This observation is thought-provoking because it reveals the evolution of online threats from discrete categories to complex, interconnected forms of harm. It demonstrates how traditional regulatory silos may be inadequate for addressing modern digital threats.


Impact

This comment introduced a new dimension to the discussion about the complexity of online harms. It moved the conversation beyond simple content takedowns to understanding how different forms of abuse intersect and require coordinated responses across different regulatory domains.


If the system only works within these platforms when the media pays attention, what happens to the millions of women in the Global South who never make headlines?

Speaker

Nighat Dad


Reason

This comment powerfully exposes the inequality in platform responses based on visibility and geography. It challenges the notion of equal protection online and highlights how media attention becomes a prerequisite for justice.


Impact

This comment crystallized the discussion around global inequities in platform accountability. It prompted other speakers to discuss the need for coordinated international responses and highlighted how current systems fail those without voice or visibility.


Behavioral economics is actually a field that blends insights from psychology and economics to fully understand how humans make decisions… we have to understand these cognitive biases and also be aware that we can use them to help individuals take more informed decisions.

Speaker

Sandra Maximiano


Reason

This comment introduced an entirely new analytical framework to the discussion, shifting from purely regulatory and technical approaches to understanding the psychological mechanisms that make people vulnerable online. It’s innovative in suggesting that the same tools used to exploit can be used to protect.


Impact

This intervention fundamentally broadened the scope of the discussion beyond traditional regulatory approaches. It introduced the concept of ‘nudging’ for protection and influenced subsequent speakers to consider design-based solutions rather than just content moderation.


I think we really need to think broader about how we are legislating about online violence… legislative frameworks are often too narrow. They focus on takedowns or criminalization, or they borrow from Western contexts, but they don’t really meet the lived realities of women.

Speaker

Neema Iyer


Reason

This comment challenges the dominant paradigm of online safety regulation by questioning both the scope and cultural appropriateness of current approaches. It calls for more nuanced, context-specific solutions.


Impact

This comment established a critical theme that ran throughout the discussion – the inadequacy of one-size-fits-all solutions and the need for culturally sensitive, comprehensive approaches to online harm prevention.


The offenders are human. The victims are humans… if we concentrate on the technology, we are losing a very big part because this young person can be trained to be a bully… if they were trained offline long before they got onto the internet, then maybe it can become a movement that saves a generation.

Speaker

John Kiariye


Reason

This comment reframes the entire discussion by emphasizing the human element behind technology-mediated harm. It challenges the tech-centric approach and advocates for community-based, preventive solutions rooted in existing social structures.


Impact

This intervention brought the discussion full circle, grounding the technical and regulatory focus back in human relationships and community structures. It emphasized prevention over reaction and highlighted the importance of offline interventions for online safety.


Overall assessment

These key comments fundamentally shaped the discussion by challenging conventional approaches to online safety and introducing new analytical frameworks. The conversation evolved from a focus on reactive measures (content takedowns, legislation) to proactive, holistic approaches that consider behavioral psychology, cultural context, and community-based solutions. The comments revealed the limitations of current regulatory frameworks and highlighted the need for coordinated, multi-stakeholder responses that address both the technical and human dimensions of online harm. Most significantly, they exposed the global inequities in how online safety is implemented and experienced, pushing the discussion toward more inclusive and comprehensive solutions.


Follow-up questions

How can we develop interventions and safety mechanisms for platforms that don’t prioritize smaller countries with multiple local languages?

Speaker

Neema Iyer


Explanation

This addresses the challenge of platform governance in regions with linguistic diversity and smaller market shares, where safety mechanisms may not be adequately developed or localized


How can we develop broader legislative frameworks that address coordinated disinformation campaigns and ideological radicalization of minors online, beyond just intimate image sharing?

Speaker

Neema Iyer


Explanation

Current legislative frameworks are often too narrow and don’t address the full spectrum of online harms faced by marginalized communities


What are the specific criteria and thresholds for determining what constitutes ‘glorifying’ terrorist content or ‘call to action’ in content moderation?

Speaker

Arda Gerkens


Explanation

This is needed to clarify vague legislation and create consistent standards across European countries for terrorist content removal


How can we develop coordinated approaches to tackle hybrid threats that combine terrorism, child sexual abuse material, and targeting of vulnerable children across different regulatory domains?

Speaker

Arda Gerkens


Explanation

There’s an emerging trend of hybridization where terrorist groups are using CSAM and targeting vulnerable children, requiring cross-domain collaboration


What actions are platforms taking against scammers who sponsor harmful posts, beyond just content takedown?

Speaker

Teo Nie Ching


Explanation

There’s a lack of transparency about platform accountability measures against bad actors, not just their content


How much advertising revenue do major platforms collect from individual countries or regions like ASEAN?

Speaker

Teo Nie Ching


Explanation

This information is needed to understand the economic leverage that could be used in platform negotiations


How can we establish international standards for platform responsibilities instead of individual countries negotiating separately?

Speaker

Teo Nie Ching


Explanation

Individual countries lack sufficient negotiation power with tech giants, requiring coordinated international approaches


What happens to millions of women in the Global South who face online harm but never make headlines or receive media attention?

Speaker

Nighat Dad


Explanation

Platform response systems often only work when media pays attention, leaving many victims without recourse


How can we design algorithmic decisions through co-creation involving governments, civil society, and platforms rather than leaving them to platform owners’ ideologies?

Speaker

Neema Iyer


Explanation

Current algorithmic decisions reflect the moral and political ideologies of platform owners, requiring more democratic input


How can we introduce design friction to prevent harmful content from being shared in the first place, rather than relying on reactive takedown measures?

Speaker

Neema Iyer


Explanation

Proactive prevention through design changes could be more effective than reactive content moderation


How can we better utilize existing community structures (schools, clubs, families) to empower potential victims before they encounter online threats?

Speaker

John Kiariye


Explanation

Focusing on offline preparation and community-based solutions could complement technical approaches to online safety


What are the most effective behavioral economics interventions and nudges that can be implemented by platforms to promote safer online behavior?

Speaker

Sandra Maximiano


Explanation

Understanding and applying behavioral insights could help design more effective safety measures that work with human psychology rather than against it


How can regional blocs like ASEAN develop coordinated standards for platform regulation that reflect their cultural and religious contexts?

Speaker

Teo Nie Ching


Explanation

Regional coordination could provide more negotiating power and culturally appropriate standards than individual country approaches


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Open Forum #13 Bridging the Digital Divide Focus on the Global South

Open Forum #13 Bridging the Digital Divide Focus on the Global South

Session at a glance

Summary

This open forum, hosted by the World Internet Conference (WRC), focused on bridging the digital divide with particular emphasis on the Global South and the role of emerging technologies like artificial intelligence. The discussion brought together high-level representatives from international organizations, regulatory bodies, and youth leaders to address solutions for digital inclusion.


UN Under-Secretary-General Li Junhua highlighted that 2.6 billion people remain offline, primarily in least developed countries, emphasizing that the digital divide has evolved beyond infrastructure to include affordable devices, digital skills, and safe navigation capabilities. He stressed the importance of local community empowerment and bottom-up approaches, noting progress since 2015 when 4 billion people were offline. Former WIPO Director-General Francis Gurry warned of a crisis point due to declining development funding (38% reduction expected) coinciding with rapid AI advancement, which risks exacerbating the digital divide when funding is most needed.


ICANN co-chair Tripti Sinha emphasized that the divide encompasses participation and inclusiveness beyond mere access, advocating for AI-powered solutions to optimize network infrastructure while maintaining multi-stakeholder governance approaches. She warned against fragmentation risks from state-led governance models that could separate Global South countries from the global internet. Chinese officials outlined China’s commitment to supporting Global South digital development through capacity building, international cooperation, and AI governance initiatives, including training workshops and technical assistance programs.


Dr. Nii Quaynor from Ghana provided an African perspective, noting infrastructure improvements but highlighting persistent challenges including limited technical capacity, fragile infrastructure, and economic sustainability issues. Malaysian representative Chern Choong Thum shared Southeast Asian solutions, including digital literacy centers and AI training programs, emphasizing human-centric approaches to digital governance. The forum concluded with consensus on the need for continued international cooperation, inclusive dialogue, and sustainable solutions to ensure equitable digital development for all communities.


Keypoints

## Major Discussion Points:


– **Scale and urgency of the digital divide**: 2.6 billion people remain offline globally, with the majority in least developed countries, creating gaps in opportunity rather than just access. The divide exists both between and within countries, affecting rural populations, women, indigenous peoples, and persons with disabilities.


– **Crisis in development funding amid AI advancement**: A critical timing challenge where development funding is decreasing by an estimated 38% while AI technology is rapidly advancing, potentially exacerbating the digital divide. The high costs and technical requirements of AI infrastructure risk creating an even wider gap between developed and developing nations.


– **Infrastructure and technical foundations**: Beyond physical connectivity, the discussion emphasized the need for reliable technical infrastructure including domain name systems, IP addresses, multilingual support, and local capacity building. Universal acceptance and internationalized domain names are crucial for cultural and linguistic participation.


– **Multi-stakeholder governance and Global South participation**: The importance of maintaining collaborative, bottom-up approaches to internet governance while ensuring meaningful participation from Global South countries in digital policy-making processes. There’s concern about potential fragmentation if countries pursue separate technical standards.


– **Practical solutions and international cooperation**: Concrete initiatives including China’s AI capacity building programs, Malaysia’s NADI Centers for digital literacy, Africa’s progress in internet infrastructure, and the need for South-South cooperation to share knowledge and resources effectively.


## Overall Purpose:


The discussion aimed to identify actionable solutions for bridging the digital divide affecting the Global South, with particular focus on how emerging technologies like AI can be leveraged to expand access and opportunities rather than widen existing gaps. The forum sought to build international consensus and cooperation frameworks for inclusive digital development.


## Overall Tone:


The discussion maintained a consistently collaborative and solution-oriented tone throughout. Speakers acknowledged serious challenges with urgency while remaining optimistic about potential solutions through international cooperation. The tone was formal yet inclusive, emphasizing shared responsibility and mutual benefit. There was a notable emphasis on practical examples and concrete commitments rather than abstract policy discussions, reflecting a pragmatic approach to addressing complex global challenges.


Speakers

– **Zhang Hui** – Deputy Secretary-General of the World Internet Conference (WRC)


– **Li Junhua** – Under-Secretary-General for UNDESA (United Nations Department of Economic and Social Affairs)


– **Francis Gurry** – Vice-Chair of WRC, former WIPO Director-General, global authority on intellectual property and digital innovation


– **Tripti Sinha** – Co-chair of Internet Corporation for Assigned Names and Numbers (ICANN), extensive experience in Internet infrastructure and multi-stakeholder governance


– **Ren Xianliang** – Secretary-General of the WRC


– **Qi Xiaoxia** – Director General of International Cooperation Bureau of Cyberspace Administration of China, extensive experience in international cyberspace exchange and cooperation


– **Nii Quaynor** – Chairman of Ghana Dot Com, known as the “Father of Internet in Africa,” Internet Hall of Fame awardee, WRC distinguished contribution awardee


– **Chern Choong Thum** – Special Functional Officer at the Ministry of Communications of Malaysia, 2024 global youth leader, supports Malaysia’s digital strategy and regional innovation programs, doctor working in public health


Additional speakers:


None identified outside the provided speakers names list.


Full session report

# World Internet Conference Open Forum: Bridging the Digital Divide


## Executive Summary


The World Internet Conference hosted an open forum on bridging the digital divide, featuring UN Under-Secretary-General Li Junhua, former WIPO Director-General Francis Gurry, ICANN co-chair Tripti Sinha, and senior officials from China, Ghana, and Malaysia. The discussion addressed the challenge of connecting 2.6 billion people who remain offline globally, with particular focus on the Global South, declining development funding, and the dual role of artificial intelligence in either bridging or widening digital gaps.


## Current State of the Digital Divide


### Global Scale and Distribution


UN Under-Secretary-General Li Junhua established that 2.6 billion people remain offline worldwide, with the majority in least developed countries. He noted progress since 2015 when 4 billion people were offline, but emphasized that the remaining gap represents the most challenging populations to reach. The digital divide affects rural populations, women, indigenous peoples, refugees, and persons with disabilities disproportionately.


Dr. Chern Choong Thum from Malaysia’s Ministry of Communications provided a public health perspective, stating that “digital exclusion deepens health inequalities and cuts off access to life-saving services and vital health education.” He noted that while 5.5 billion people are online, a third of the world remains disconnected, predominantly in Global South rural areas.


### African Infrastructure Challenges


Dr. Nii Quaynor, known as the “Father of Internet in Africa,” highlighted persistent challenges including fragile infrastructure, limited technical capacity, and economic sustainability issues. He provided historical context, noting that “every new technology comes with its distinct divides, and some may widen other divides.”


Quaynor shared specific statistics: “Africa is at 4.4 domain names per thousand, where global is 45 per thousand,” illustrating the continent’s digital infrastructure gaps. He raised critical questions about sustainability: “Where is the revenue to maintain, improve and develop infrastructure services constantly in the global south?”


## Funding Crisis and AI Acceleration


### Development Funding Challenges


Former WIPO Director-General Francis Gurry identified what he termed “a real crisis point,” outlining two converging challenges. First, a dramatic crisis in development funding with an estimated 38% reduction expected in the coming year. Second, the unprecedented speed of artificial intelligence deployment.


Gurry emphasized that “never has funding and development assistance been more needed than at the present time when artificial intelligence is coming online at such a speed that it is baffling to all of us.”


### AI’s Dual Impact


Tripti Sinha acknowledged AI’s potential to optimize network infrastructure and enable efficient resource allocation for unconnected markets. However, she warned that “knowledge begets knowledge, wealth begets wealth, and those who possess these will only have the opportunity to obtain more. Similarly, innovation begets innovation.”


Nii Quaynor warned that AI technology threatens the digital divide most significantly due to high infrastructure costs, substantial power requirements, and technical skills needed for participation.


## Infrastructure and Technical Requirements


### Beyond Physical Connectivity


Tripti Sinha emphasized that bridging the digital divide requires comprehensive technical foundations, including reliable domain name systems, IP address allocation, root servers, and multilingual support systems. She highlighted the importance of universal acceptance and internationalized domain names, noting that millions cannot engage with the Internet in their own language.


### National Success Models


Chern Choong Thum shared Malaysia’s achievements through the Jandela Plan, which equipped 9 million premises with fiber optic access and significantly boosted mobile speeds. Malaysia also established NADI Centers providing internet access and ICT training, including AI skills programs.


Ren Xianliang, Secretary-General of the WRC, emphasized sustainable infrastructure operation alongside digital education, which he termed “the biggest equalizer.”


## Governance Approaches and International Cooperation


### Multi-Stakeholder vs. State-Led Models


Tripti Sinha strongly advocated for ICANN’s multi-stakeholder model, bringing together governments, private sector, civil society, and the technical community. She warned about fragmentation risks from state-led approaches that could threaten the single, interoperable Internet.


Qi Xiaoxia, Director General of China’s Cyberspace Administration International Cooperation Bureau, presented a different perspective emphasizing respect for sovereignty in cyberspace. She advocated for countries’ rights to independently choose their Internet development paths while opposing “cyber hegemony.”


### Implementation Challenges


Nii Quaynor provided a balanced assessment of multi-stakeholder governance, acknowledging both its potential and limitations. He noted implementation challenges including difficulties finding qualified participants, potential “decision by fatigue,” and the need for skilled moderation to achieve consensus.


## Capacity Building Initiatives


### Bottom-Up Approaches


Li Junhua emphasized that “bottom-up, grassroots processes are foundational to global efforts,” giving communities voice in their digital development. This approach recognizes that sustainable solutions must emerge from local needs and capabilities.


### International Programs


Qi Xiaoxia outlined China’s commitment to supporting Global South digital development through comprehensive capacity building initiatives, including training workshops, knowledge sharing platforms, and technical assistance programs. China announced implementation of UN resolution on AI capacity building with ten major actions and five additional training workshops for Global South countries.


### Regional Leadership


Chern Choong Thum described Malaysia’s approach to regional leadership through its 2025 ASEAN Chairmanship, championing inclusivity and sustainability themes in digital development with human-centric policies ensuring no one is left behind.


## Concrete Commitments


### Organizational Actions


– **World Internet Conference**: Committed to deepening cooperation with the Global South through continued dialogue platforms


– **China**: Announced specific implementation of UN resolution on AI capacity building with ten major actions and five training workshops for Global South countries


– **Malaysia**: Committed to leveraging its 2025 ASEAN Chairmanship to champion inclusivity and sustainability in digital development


– **ICANN**: Committed to continued support for technical resilience, multilingual access, and global connectivity in underserved regions


### Global Review Opportunities


Li Junhua identified the WSIS Plus 20 review as a crucial opportunity to renew global commitment to digital inclusion and meaningful access for all.


## Key Challenges and Disagreements


### Unresolved Issues


The forum identified several critical unresolved challenges:


– Addressing the massive development funding crisis while meeting increased needs for AI-era digital infrastructure


– Reconciling unified global Internet standards with national sovereignty concerns


– Preventing AI advancement from creating new forms of digital exclusion


– Creating financially viable models for ongoing infrastructure maintenance in resource-constrained environments


### Governance Philosophy Differences


The most significant disagreement centered on governance approaches, with tension between maintaining technical coordination and respecting political sovereignty. This reflects broader challenges in global Internet governance between unified standards and national control.


## Conclusion


The forum revealed both the complexity and urgency of bridging the digital divide amid rapid technological change and constrained resources. The convergence of declining development funding with accelerating AI deployment creates challenges requiring innovative solutions and enhanced international cooperation.


Success will depend on reconciling technical coordination needs with political sovereignty concerns while ensuring emerging technologies bridge rather than widen existing divides. The commitments made by participating organizations provide concrete starting points, but the scale of the challenge requires sustained effort and continued dialogue.


The upcoming WSIS Plus 20 review offers an opportunity to translate forum insights into coordinated global action addressing the digital divide before it becomes insurmountable.


Session transcript

Zhang Hui: Your Excellency Under-Secretary-General Li Junhua, Your Excellency Vice-Chair Francis Gurry, Your Excellency Board Chair Tripti Sinha, Your Excellency Secretary-General Ren Xianliang, Distinguished guests, ladies and gentlemen, good morning. It is my great honor to welcome you to attend this open forum. We greatly appreciate that UNDESA and the IGF provide us with a global platform for open dialogue on key digital issues. My name is Zhang Hui, Deputy Secretary-General of the World Internet Conference, also known as the WRC. The WRC is an international organization committed to establishing a global Internet platform for extensive consultation, joint contribution, and shared benefits, promoting the international community to follow the trend of digitalization, networking, and intelligence, to address security challenges for common development in the information age, and building a strong community with a shared future in cyberspace. Today's session theme, bridging the digital divide with a focus on the Global South, highlights a key priority for inclusive global development, while challenges remain. The focus today is on solutions: how emerging technologies, particularly AI, can help expand access, strengthen digital capacity, and unlock new opportunities for the Global South. We are honored to be joined by an exceptional group of speakers who are helping shape the future of digital governance. Among them are high-level representatives from international organizations, national regulatory bodies, and young representatives from emerging digital communities. First, I feel deeply honored to invite His Excellency, Mr. Li Junhua, Under-Secretary-General for UNDESA. UNDESA leads global initiatives on sustainable development and has played a key role in shaping international cooperation on emerging technologies. Welcome.


Li Junhua: Thank you. Thank you very much. Good morning, everyone, Excellencies, distinguished delegates. It is my great pleasure to join you today for this important gathering on bridging the digital divide with its vital focus on global south. I extend my sincere thanks to the World Internet Conference for convening this open forum. The theme of the digital divide could not be more urgent as we increasingly relied on the digital technologies to access education, healthcare, jobs, services, and civic participation. The divide between those who are connected and those who are not has become one of the defining challenges of our time. As technology evolves, so does the nature of the digital divide. It is no longer just the questions of cables, satellites, or cell towers. It is about affordable devices, the skill to use them, and the confidence and support needed to navigate the online world safely. It is a divide of opportunities. Today, 2.6 billion people remain offline. The majority live in the world’s least developed or lower-middle-income countries. This is where the digital gap remains widest and where our efforts must now be consolidated. We must also recognize the inequality within countries, even in those considered well-connected. Remote and rural populations, refugees, indigenous peoples, women and girls, and persons with disabilities continue to face barriers to full digital inclusion. These are not just gaps in access, but gaps in opportunity, which calls for a renewed focus on digital capacity development and building partnerships that are inclusive, innovative, and sustained. Forums like the World Internet Conference and the Internet Governance Forum are vital spaces for collaboration. They provide essential spaces for global dialogue, coordination, and collaboration on digital policy. Their true impact is realized when they are informed by what happens on the ground, because the roots of the digital divide are deeply local. The solution lies in empowering local communities. The IGF has evolved into a global ecosystem with over 176 national, regional, sub-regional, and youth IGFs now active worldwide. These local and regional processes are not just complementary to our global efforts. They are foundational. They give the communities a voice, surface the local innovations, and help shape the policies that are relevant, inclusive, and grounded in lived realities. To close the digital divide, we must strive for inclusive cooperation between global efforts and grassroots processes. The priorities that emerged from the bottom-up should guide the investment in infrastructure, human capacity, and meaningful partnerships, ensuring that no community is left behind in the digital age. Dear friends, dear colleagues, we now have a golden opportunity. The upcoming 20-year review of the World Summit on Information Society, or WSIS Plus 20, allows us to renew our commitment to digital inclusion and meaningful access for all. We have made progress. In 2015, during the WSIS Plus 10 review, an estimated 4 billion people were offline. Today, allow me to reiterate, that number has dropped to 2.6 billion. This is a major improvement, but of course, still far too many remain unconnected. That’s why there’s every reason for all of us to intensify our efforts. Let’s leverage the global platform to amplify the solutions, to collaborate, share, and work together to build a truly inclusive and equitable digital future for all. Thank you.


Zhang Hui: Thank you, Mr. Li. Next, it is our great pleasure to welcome Dr. Francis Gurry, Vice-Chair of the WIC and former WIPO Director-General. He is a global authority on intellectual property and digital innovation.


Francis Gurry: Thank you very much indeed. Under-Secretary-General Li Junhua, Secretary-General Ren Xianliang, distinguished panelists and guests, it’s so nice to be part of this forum and to see so many of you here participating in this exceptionally important topic in the context of this extremely important ongoing meeting of the Internet Governance Forum. Let me start with the very obvious point that digital technology has penetrated all aspects of our life. We’re all very much aware of this, but I don’t think we can repeat it often enough. We know that it is the basis of economic production now, or if not the basis, at least a major factor of economic production. It is responsible for cost efficiencies, quality outcomes, innovation, and competitive advantage in the field of the economy. And outside the economy, we’re very much aware that digital technology enables or improves social communication, the delivery of social services such as health and medicine, cultural exchanges, and educational opportunities. So any impairment in the capacity to use any of these advantages conferred by digital technology is obviously a major disadvantage. Digital technology is so important as the foundation of economic, cultural, and social life now that a lack of access or disadvantaged access of course creates a major, major problem. And that disadvantage, that divide, we know exists within countries. There is an urban-rural divide, there is a gender divide, there is an age divide, and there is an income divide. And we know it exists between countries, which is the one that we are concentrating on and addressing today. Now much good work has been done, and Under-Secretary-General Li has referred to some of this. There’s the great work that’s been done by the International Telecommunication Union, for example, the great work of the World Internet Conference in an increasing number of fields, and the many, many other organisations that are involved in trying to address this major question of the digital divide. Despite the progress, and I think Under-Secretary-General Li has referred to the fact that we now have two-thirds of the world connected, though of course one-third is still not connected. Despite the progress, I think we are at a real crisis point in relation to the digital divide, and that crisis I think comes from two challenges. The first challenge is the crisis in development funding that we are witnessing right at the moment. So it’s estimated that next year there will be about 38% less development funding available around the world as a consequence of the change of attitude of the United States of America in relation to foreign aid, but also the diversion of funding by many European countries away from development and towards military spending and so on around the world. So there is a massive crisis, we know, in relation to development funding. And on some estimates, it’s scarcely sufficient to meet debt obligations. So this is the first problem we have. And the second problem is that never has funding and development assistance been more needed than at the present time, when artificial intelligence is coming online at such a speed that it is baffling to all of us. So artificial intelligence is now another general-purpose technology that will exacerbate, or risks exacerbating, the digital divide.
We know that there are many positive aspects of artificial intelligence, and some of those include, for example, open access. But the speed at which it is unfolding and the amounts of money being invested in the development of artificial intelligence by some of the leading economies are such that we are at great risk of an exacerbation of the digital divide, especially given that development funding is suffering a crisis at the same time. So this, I think, is the essence of the problem that we confront right at the moment in relation to the digital divide. And I think it requires a major international strategic plan, with all the major actors involved, in order to ensure that with the advent, the arrival, of these new artificial intelligence technologies, not so new perhaps now, we do not end up in a worse position. If you look, as just one final example, at data centres around the world essential to artificial intelligence infrastructure, you find that they are, of course, all or mainly in the north, with the exception of China. So we have a real potential crisis here, and I think a major international effort and strategic plan is required. Thank you very much.


Zhang Hui: Thank you, Dr. Gurry. Next, it is our great honour to welcome Ms. Tripti Sinha, Board Chair of the Internet Corporation for Assigned Names and Numbers, or ICANN. She has extensive experience in Internet infrastructure and multi-stakeholder governance.


Tripti Sinha: Thank you very much, and thanks to the World Internet Conference for convening this very important discussion, and to my co-panellists for sharing this opportunity to speak to you today. So the digital divide, as you know, is not a new issue, and it continues to evolve in very complex ways, as my colleagues just stated. Today, the discussion is broader than access. It is also about participation and inclusiveness. The fact that 5 billion people are now online is significant. This growth was not accidental. It reflects years of coordination, technical cooperation, and a shared commitment to an Internet that remains global, resilient, and accessible to all. But as Dr. Gurry just said, we are in a financial crisis in the world, priorities are shifting, and we are in a very, very difficult time. And we must treat this prevailing global digital divide with a sense of urgency. As the old adage goes, knowledge begets knowledge, wealth begets wealth, and those who possess these will only have the opportunity to obtain more. Similarly, innovation begets innovation. And those who are not part of this opportunity ecosystem will suffer and fall behind. And during this time of yet another innovation and change agent at play, which is artificial intelligence, this divide will continue to grow. So a global community of haves and have-nots will only lead to significant future problems. We know the world will change in unknown ways with the application of artificial intelligence. However, we should leverage the advantages that come with AI as we begin to create a strategic blueprint to reduce this digital divide for the world community. So let’s talk a little bit about AI, as it offers significant benefits for building out networks in unconnected markets by enabling efficient resource allocation, proactive maintenance, and of course enhanced security, which is so needed in today’s world. AI-powered solutions for addressing the digital divide can also optimize network infrastructure, and we can apply the technology to intelligently assess opportunity gaps. So, in terms of where we start, we need to be infrastructure-ready. There are many reasons why this divide exists. So let’s talk about the infrastructure. Clearly, a blueprint will start with addressing the lack of physical cables and so on, their installation, and putting an architecture in place to bring the media together. And while this connectivity is essential, it’s only one part of the equation. You will then need to light this infrastructure to begin to get the bits and bytes flowing. So the Internet depends on a very strong technical foundation that allows it to function reliably, securely, and at scale. And this foundation, as you know so well, includes the domain name system, IP addresses, and the root server system. These elements may not be visible to the user community. However, we need to come together as a global community to ensure that we can put these different parts together to bring connectivity to those who are unconnected. At ICANN, we help coordinate this layer of the Internet. We work to maintain the stability and security of the DNS. We manage and facilitate the Internet’s unique identifier systems. We support the deployment of root server operations in underserved regions. And we also partner with technical operators and institutions across the global south to help strengthen local capacity and resilience. ICANN, of course, doesn’t build physical infrastructure.
We coordinate, with colleagues around the world, the key systems that make connectivity reliable and sustainable. However, there’s yet another barrier, and that’s the barrier of language. Today, millions of users still cannot fully engage with the Internet in their own language or script, and this speaks to locality. ICANN’s work on universal acceptance and internationalized domain names directly addresses this. These initiatives ensure that domain names and email addresses in local scripts work across devices, applications, and platforms. But this is only possible if the technical community comes together, those who operate up and down the technology stack, to make it happen. These capabilities are critical for cultural and linguistic participation. We encourage governments and institutions to integrate universal acceptance into ICT strategies and public service delivery. The technical steps are clear. The impact, particularly for multilingual and underserved communities, is significant. Solving this divide also depends significantly on coordination. ICANN was created as a multi-stakeholder organization, bringing together governments, the private sector, civil society, the technical community, and others to help manage these critical Internet resources. That model continues to be very relevant; it is open, collaborative, and technically grounded, and it has helped keep the Internet stable, interoperable, and global. We must continue to embrace it. This coordination should not be taken for granted. Fragmentation at the technical level is a real and growing risk. An increasing number of governments are exploring state-led approaches to infrastructure and governance. Some are talking about the creation of a new multilateral model of Internet governance, which could result in serious issues for the functioning, or even the existence, of the single interoperable Internet. Of course, national interests are legitimate. No one’s disagreeing with that. However, divergence from global technical norms threatens the Internet’s core functionality, especially for countries from the global South that could find themselves separated from the global Internet and part of other networks that are not compatible. The Internet Governance Forum and all the other multi-stakeholder spaces help maintain alignment where it matters most, at the technical layer. They provide neutral platforms to resolve tensions and share solutions without imposing uniformity. The future of universal, affordable access depends on infrastructure that works, governance that adapts, accessibility above and beyond prevailing norms by applying universal acceptance, and participation that reflects the diversity of those who use the Internet. At ICANN, we remain committed to supporting this future by working with partners across the world, indeed with the global South, to expand technical resilience, enable multilingual access, and help keep the Internet globally connected. And hopefully we can close the digital divide. Thank you.
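To make the universal acceptance and internationalized domain name point above concrete, here is a minimal illustrative sketch (not part of the remarks above) of how a domain name in a local script is mapped to the ASCII form that the DNS resolves, and back again. It assumes only Python 3 and its built-in IDNA codec, which implements the older IDNA 2003 rules; production software would typically use a full UTS-46/IDNA 2008 library, and universal acceptance further requires that applications accept both forms in every input field.

```python
# Minimal sketch (assumption: Python 3 standard library only) of the
# internationalized domain name (IDN) mapping behind "universal acceptance":
# a label in a local script is converted to the ASCII "xn--" punycode form
# that the DNS actually resolves, and converted back for display.

unicode_domain = "例え.jp"  # a Japanese-script label plus an ASCII top-level domain

# ToASCII: encode each label so ASCII-only DNS software can handle it.
ascii_domain = unicode_domain.encode("idna").decode("ascii")
print(ascii_domain)  # xn--r8jz45g.jp

# ToUnicode: decode the ASCII form back to the local script for the user.
print(ascii_domain.encode("ascii").decode("idna"))  # 例え.jp
```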


Zhang Hui: Thank you, Ms. Tripti Sinha. Next, it is our great pleasure to welcome Mr. Ren Xianliang, Secretary-General of the WIC. He has pushed forward the WIC’s transformation into a global platform for inclusive digital dialogue. Welcome.


Ren Xianliang: and other key facilities to extend to developing countries. We focus on the sustainable operation of infrastructure so that digital benefits can truly reach the local population. We also strengthen investment in capacity building and improve the level of digital education and skills. In the digital age, education and training are the biggest equalizer. We should strengthen the capacity of young people, women, and small and medium-sized entrepreneurs in Global South countries, set up digital training centers, and develop localized courses to open the door to a digital future. Third, we should improve the global digital governance mechanism and ensure the participation rights of developing countries. At present, developing countries are working on key governance mechanisms such as digitalization rights and technical supervision. We call on the international community to join the global governance program under the framework of multilateral participation and realize the beautiful vision of building, sharing, and governing together. Fourth, we should strengthen international cooperation and expand multilateral participation channels. As an international organization, the World Internet Conference sincerely invites more enterprises, institutions, and individuals from all over the world to join as members and begin cooperation. We should work together to promote the sharing of technology, the complementarity of capabilities, joint construction through multilateral participation, and a mutually beneficial digital future. Ladies and gentlemen, digitalization is not only a technical problem, but also a problem of development and fairness. The World Internet Conference welcomes all parties to continue to promote digital technology and to contribute stronger development momentum for the Global South. Let’s work together to build a community with a shared future in cyberspace and make the Internet a blessing for people all over the world. Thank you.


Zhang Hui: Thank you, Mr. Ren. Next, it is our great honor to welcome Ms. Qi Xiaoxia, Director-General of the International Cooperation Bureau of the Cyberspace Administration of China. She has extensive experience in international cyberspace exchange and cooperation. Welcome.


Qi Xiaoxia: Distinguished guests, ladies and gentlemen, friends, good morning. I’m very pleased to be part of this distinguished panel discussing how to promote digital development and bridge the digital divide for the global South. At present, as a collective of emerging market countries and developing countries, the global South has stepped onto the historical stage with great strides, injecting new impetus into global development and new progress into global governance. It has attracted the attention and anticipation of the international community. However, at the same time, the digital development deficit in the global South has become a weak link and a challenge that cannot be ignored as mankind embraces the digital age. How to bridge the digital divide and ensure that the global South does not fall behind in the digital age is a common task facing the international community. As a natural member of the global South, China has had the global South at heart and been deeply rooted in the global South. China regards assisting the development of the global South and bridging the digital divide as an unshakable international responsibility. In 2015, Chinese President Xi Jinping unveiled the vision of building a community with a shared future in cyberspace, contributing China’s wisdom and approach to the development and global governance of the Internet. The vision advocated prioritizing development and deepening international exchanges and cooperation in the digital field. It has responded effectively to the development demands and common concerns of the global South in the digital age and provided important guidance for helping the global South bridge and narrow the digital divide and enable more countries and people to share the fruits of Internet development. In the face of the wave of AI development, President Xi Jinping emphasized the need to carry out extensive international cooperation on AI, helping global South countries strengthen their technological capacity building and making China’s contributions to bridging the global intelligence gap. To help the global South bridge the digital divide, China is not only an advocate, but also a promoter and a pioneer. Under the theme of building a community with a shared future in cyberspace, we have continuously hosted the World Internet Conference Wuzhen Summit, providing an important platform for exchanges and cooperation for the global South to share the dividends of digital development. For four consecutive years, the Wuzhen Summit has released a collection of practice cases of jointly building a community with a shared future in cyberspace, providing useful reference experiences for the global South in bridging the digital divide. In July last year, the UN General Assembly adopted the China-sponsored resolution Enhancing International Cooperation on Capacity Building of Artificial Intelligence. China has prioritized the follow-up implementation of the resolution and announced an action plan with ten major actions to fulfill the visions of the global South in five aspects, which contributes to strengthening AI capacity building for the global South. Last year, Chinese think tanks jointly launched a research report on global AI governance, identified the AI divide and international collaboration as one of the ten key issues in global AI governance, and proposed a clear path of action to help the global South bridge the intelligence divide.
Ladies and gentlemen, friends, development is the master key to solving all problems, and it is also the common aspiration and general expectation of the global South. Looking to the future, the global South should become a highland for digital innovation and development rather than a swamp left behind in sharing digital dividends. I would like to share three observations on how to accelerate bridging the digital divide and create highlights for the global South. Firstly, the right to development of the global South should be upheld in the spirit of equality and mutual respect. Development is an eternal theme of human society and the right of all countries rather than an exclusive privilege of the few. China advocates respect for sovereignty in cyberspace and maintains that all countries, regardless of size, strength and wealth, are equal members of the international community and have the right to independently choose their own path of Internet development and model of governance. Chinese think tanks have actively followed up on and studied the issue of sovereignty in cyberspace and have successively released Sovereignty in Cyberspace: Theory and Practice, versions 1.0 to 4.0, which provides an in-depth and systematic study and explanation of the specific issues related to the application of sovereignty in the process of digital-driven, Internet-based and smart growth, and contributes theoretical support for safeguarding the right to digital development of the global South. Facing the issue of our time, helping the global South bridge the digital divide, China will always be committed to respecting sovereignty in cyberspace and will work with the international community to respect the paths of digital development and the models of governance of all countries and jointly oppose cyber hegemony and the politicization of technological issues, with a view to fostering a favorable environment conducive to digital development for the global South. Secondly, practical cooperation should be strengthened to enhance digital capacity for the global South. AI and other emerging technologies are on the rise, dramatically enhancing mankind’s ability to understand and transform the world, while at the same time raising the threshold of digital development capacity. In addition to international cooperation on digital capacity building, we will launch five new training workshops for Latin American and Caribbean countries and for ASEAN countries to carry out targeted training in digital capacity. Next, we will organize five more training workshops for the global South, with a view to continuously strengthening digital capacity building for the global South. We call on the international community to join hands in enhancing the digital capacity of the global South and helping bridge the digital divide by building multi-channel exchange platforms, carrying out assistance and training projects, and promoting the sharing of knowledge on AI and other emerging technologies. Thirdly, efforts should be made to promote collaborative governance and amplify the voice of the global South in digital capacity building. At present, global digital governance is at an important crossroads, and the global South represents an important force for improving global governance. Listening to more voices from the global South can better help bridge the digital divide.
China has organized the China-ASEAN Digital Governance Dialogue and the China-Africa Internet Exchange Forum, and has deeply engaged in cooperation on digital governance under platforms such as APEC, BRICS, and the Shanghai Cooperation Organization, thus contributing more solutions to global digital governance. China is willing to work with the international community to support more active and broader participation by the global South in the digital governance processes of the United Nations, regional multilateral organizations, and specialized agencies. China aims to promote the enhancement of the representation and the voice of the global South in global digital governance so that the will of the global South is reflected in a more balanced and reasonable manner, and to further consolidate the international consensus on bridging the digital divide. Thank you for your attention.


Zhang Hui: Thank you, Ms. Qi. Next, it is our great honor to welcome Dr. Nii Quaynor, an Internet pioneer and advocate, Chairman of Ghana Dot Com, the Father of the Internet in Africa, an Internet Hall of Fame inductee, and a recipient of the WIC Distinguished Contribution award. Welcome.


Nii Quaynor: Thank you for the opportunity to share a session with such excellent speakers. In my perspective, it’s about time to mobilize additional attention on bridging the digital divide and to address the systemic issues that impede efforts to eliminate divides in the global south. Although the digital divide is a very difficult and pervasive challenge, some countries are making progress in preventing its widening. Technology divides have a long history, as was mentioned earlier. In the 70s, we were missing the human resources to initiate computer science institutions or build enterprise systems. In the 80s, we faced deficiencies in scientific instrumentation, computer interfacing, and VLSI, and in the 90s, the Internet arrived in our countries with even more divides. It appears every new technology comes with its distinct divides, and some may widen other divides. However, addressing known limitations in infrastructure and costs, quality education, and digital governance will determine effective participation by the global south in the digital economies. Emerging Internet communities like those in Africa feel fortunate with the open practices that give us a chance to be globally involved. The open standards, open documentation, and open participation have been particularly helpful in building capacity and networks addressing the digital divide. Though we have made good progress with the Internet, we have several challenges. Observations on the resilience of the Internet in Africa show a ready digital economy at roughly midway user penetration, but with fragile infrastructure and known technical capacity needs. Data centers, connectivity, exchange points, capacity, and users are all improving. Africa is at 4.4 domain names per thousand, where the global figure is 45 per thousand. Ten ccTLD registries hold 92% of names. There are 13 ICANN globally accredited registrars in Africa, compared with more than 1,000 registrars in the world. Demand for hosting and data centers continues to increase. With increasing attention on growing the infrastructure, the number of users in the Africa region would soon be second only to Asia. The burgeoning research and education network (REN) ecosystem is becoming active, with regional RENs, national RENs, and campuses. The infrastructure works through standards, best practices, regulation of operators, and technical capacity. The approaches here are inherently multi-stakeholder, with more bottom-up community discussions. The potency of the multi-stakeholder approach is well known, but it is also known to have requirements. It is necessary to avoid capture, and the approach can sometimes result in decision by fatigue. It also needs a meritorious moderator to call consensus in deliberations. The lack of consensus among resource members has caused a review of the arrangements around regional Internet registries. Fortunately, despite this impasse, the African registry’s core functions, like the Internet itself, have shown resilience, and there are lessons learned to improve the governance.
Participation in global multi-stakeholder organizations is voluntary or carried out by paid staff of organizations. The global south can therefore have challenges finding good participants. How to make the multi-stakeholder approach work better in the global south might be a governance-divide issue to be addressed. We continue to deepen our foundation to cope with emerging technologies and learn how to manage with our limited resources, yet be able to be on the supply chain. We are not alone in this. We have to be prepared to adapt to new challenges and to adopt new aspects of our strategy as well. With weak foundations in power, general infrastructure, skills, and science education, our efforts were not good enough to meet the rapid growth of access speed and quality, and the need for IPv6 and access technology upgrades. The need for access speed and service was met largely by some dominant providers, with concentration and consolidation. The economic model of the Internet, never favorable to newcomers, has not eased things for the global south. Where is the revenue to constantly maintain, improve, and develop infrastructure services? The fast-tracking of things for immediate results creates an ecosystem that is unable to address present challenges or cope with the future. The absence of a stimulative and adaptive framework for rapidly evolving technology tends to leave innovation dormant. What can we do? We can review the frameworks and make policies to enable innovation and creation, and not just regulate usage. We have to build up science education, and optimize the use of data centers, exchange points, and other existing infrastructure. Lots of effort and resources have been put into these, and it is prudent to preserve the investment. We should optimize knowledge transfer and capacity building through strong fundamentals and intergenerational mentorship and coaching. The digital divide is tough. Therefore, in addition to all ongoing efforts, we welcome increased attention on it. We are encouraged by technical cooperation opportunities on global governance and the digital divide. South-South cooperation and collaboration, leveraging the WIC’s multi-stakeholder network to join in dealing with the digital divides of the global South, is a useful addition. Maturing AI technology threatens to widen the digital divide the most, given the associated high cost of infrastructure, high power requirements, and technical skills needed to be on the supply side. Hence, the attempt in this forum to harness AI to address the digital divide is insightful; it might prevent AI from generating new divides and bring real meaning to AI for good and AI for digital unity. Thank you very much for your attention.


Zhang Hui: Thank you, Dr. Quaynor. Now let’s give the floor to our youth leader. We welcome Mr. Chern Choong Thum, Special Functional Officer at the Ministry of Communications of Malaysia and also a 2024 global youth leader. He supports Malaysia’s digital strategy and regional innovation programs. Welcome.


Chern Choong Thum: Good morning from Malaysia. It is a great honour to be here, not only as a representative of the youth, but also one from Southeast Asia, to speak about an issue affecting not just economies, but also the very heart of our societies: the digital divide. In Malaysia, we say muafakat membawa berkat, bersekutu bertambah mutu, reflecting our belief that unity brings great things. In our ultra-connected world, this spirit has never been more important. Yes, the rapid advancement of digital technology has brought incredible opportunities, but it has also widened gaps. In the global South, communities are still being left behind due to unequal access, high costs and also limited digital skills. As artificial intelligence, cloud services and the digital economy continue to accelerate, these gaps risk turning into chasms. The ITU Facts and Figures 2024 starkly highlight this uneven progress. While 5.5 billion people are online today, a third of the world, predominantly in the global South’s rural, low-income areas, remains disconnected. Internet use is almost universal in high-income nations, but drops to just 27% in low-income economies. Even as 5G expands, its reach in the poorest countries is a mere 4%. This digital exclusion mirrors existing social and economic inequalities, demanding urgent action. That is why our policies must be open, inclusive, accepting of one another and, most importantly, human-centric. The internet, after all, is not just a tool for commerce or entertainment. It has become a lifeline, a platform for learning, for healthcare, for livelihoods and for communities to connect and support one another. Malaysia takes this very seriously. As Chair of ASEAN in 2025, we champion inclusivity and sustainability as our theme for the year: it is not enough to just grow fast. We must grow together and sustainably. Our Kuala Lumpur Declaration, sealed this May, envisions a shared future where no one is left behind. Recognising these disparities, Malaysia has actively deployed tangible solutions. Our National Information Dissemination Centres, or NADI Centres, exemplify this commitment. With 1,069 operational nationwide, these hubs provide collective internet access and vital ICT training, bridging the gaps for rural and urban poor communities. In a significant collaboration, the Malaysian Communications and Multimedia Commission, MCMC, and Microsoft have launched the AI Teach Skills for AI-Enabled Economy programme at NADI Centres, directly equipping local communities with crucial AI skills. Our Jandela Plan further strengthens this foundation. As of December 2024, Jandela has equipped over 9 million premises with fibre optic access, boosted median mobile download speeds to 105 Mbps and also extended internet coverage in populated areas to 98.66%. These efforts ensure a more equitable quality of digital experience regardless of location. Beyond infrastructure, we champion digital literacy and skills for the AI era. Initiatives like AI Untuk Rakyat enhance emerging tech skills among Malaysians. And through the Ministry of Human Resources’ National Training Week 2025, nearly 400,000 teachers nationwide are receiving large-scale upskilling, including comprehensive AI training to prepare our generation for a future-ready education. Take AI governance, for example. Malaysia has developed the National Guidelines on AI Governance and Ethics and collaborated with ASEAN on their AI Guide. In both, we prioritise people and not just systems.
Ethical, inclusive AI is not a luxury for the global south, but a necessity for equitable development. We aim to advocate for digital governance frameworks that empower and uplift every community. Now, as a doctor myself working in public health, I have seen firsthand how the digital divide carries very real, very tangible consequences. At a time when a surgeon in Rome can perform an operation on a patient in Beijing through 5G-powered surgical robots, far too many communities still struggle to access even basic online health consultations or timely public health information. This gap is about lives. Digital exclusion deepens health inequities, cutting off access to life-saving services and vital health education. Digital inclusion is not just an economic imperative, it is also a public health priority. And while we look at macro-level solutions, we must not forget the micro. The Southeast Asian kampung spirit, which means looking out for your neighbour, remains really strong. We should embrace this globally, creating spaces where no one is left behind. Women, youth, persons with disabilities, refugees and vulnerable populations must be given platforms to be heard and to lead. As part of the Global South, Malaysia will collaborate with ASEAN, Africa, Latin America and the Pacific to co-develop tailored solutions. The Global Youth Leaders Programme organised by the World Internet Conference is an inspiring example of bringing diverse young changemakers together. We must ensure such opportunities exist for the marginalised, not as a token, but as a core part of our digital future. Let us collectively build bridges, not walls. Let us harness digital governance not as a tool of control, but as a platform for empowerment. And let us remember that the true measure of our digital progress is not in how advanced our systems are, but in how many lives we uplift along the way. Terima kasih, thank you, and may we move forward together. Thank you.


Zhang Hui: Thank you to all our distinguished speakers. Today, we have prioritised forward-looking solutions to bridge the digital divide in the Global South. This open forum reflects a growing international consensus that inclusive dialogue and global engagement are essential to building a trusted, accessible and people-centred digital future. Looking ahead, the WIC is committed to deepening our cooperation with the Global South: listening, engaging and creating pathways to digital empowerment. On behalf of the WIC, thank you once again for your participation, insights and ongoing dedication. We look forward to continuing this wide-ranging conversation and working together to build a more inclusive, human-centred digital future. See you next year. Thank you.



Li Junhua

Speech speed

103 words per minute

Speech length

544 words

Speech time

316 seconds

2.6 billion people remain offline, majority in least developed countries – Digital Divide as Opportunity Gap

Explanation

Li Junhua emphasizes that the digital divide represents gaps in opportunity rather than just access. He highlights that 2.6 billion people remain offline, with the majority living in the world’s least developed or lower-middle-income countries where the digital gap remains widest.


Evidence

Specific statistic that in 2015, 4 billion people were offline, which has improved to 2.6 billion today, showing progress but still indicating far too many remain unconnected


Major discussion point

Current State and Urgency of the Digital Divide


Topics

Development | Infrastructure


Agreed with

– Francis Gurry
– Tripti Sinha
– Nii Quaynor
– Chern Choong Thum

Agreed on

Digital divide represents urgent global challenge requiring immediate attention


Digital divide exists within and between countries affecting rural populations, women, refugees, and persons with disabilities – Inequality Within Connected Nations

Explanation

Li Junhua points out that digital inequality exists not only between countries but also within countries that are considered well-connected. He specifically identifies vulnerable groups that continue to face barriers to full digital inclusion.


Evidence

Mentions remote and rural populations, refugees, indigenous peoples, women and girls, and persons with disabilities as groups facing barriers


Major discussion point

Current State and Urgency of the Digital Divide


Topics

Development | Human rights


Bottom-up grassroots processes are foundational to global efforts, giving communities voice

Explanation

Li Junhua argues that local and regional processes are not just complementary but foundational to global digital inclusion efforts. He emphasizes that solutions must be grounded in local realities and community empowerment.


Evidence

IGF has evolved into a global ecosystem with over 176 national, regional, sub-regional, and youth IGFs active worldwide


Major discussion point

Capacity Building and Education Initiatives


Topics

Development | Sociocultural


Agreed with

– Ren Xianliang
– Qi Xiaoxia
– Nii Quaynor
– Chern Choong Thum

Agreed on

Capacity building and education are fundamental to bridging the digital divide


WSIS Plus 20 review provides opportunity to renew commitment to digital inclusion

Explanation

Li Junhua highlights the upcoming 20-year review of the World Summit on Information Society as a golden opportunity to renew global commitment to digital inclusion and meaningful access for all.


Evidence

References the progress made since WSIS Plus 10 review in 2015 when 4 billion people were offline compared to 2.6 billion today


Major discussion point

Regional and National Strategies


Topics

Development | Legal and regulatory



Francis Gurry

Speech speed

129 words per minute

Speech length

763 words

Speech time

353 seconds

Digital technology has penetrated all aspects of life making lack of access a major disadvantage – Digital Technology as Foundation of Modern Life

Explanation

Francis Gurry argues that digital technology has become fundamental to economic production, social communication, healthcare, education, and cultural exchanges. Any impairment in accessing these digital advantages creates major disadvantages for individuals and communities.


Evidence

Digital technology is responsible for cost efficiencies, quality outcomes, innovation, and competitive advantage in the economy, and enables improvements in social services, health, medicine, cultural exchanges, and educational opportunities


Major discussion point

Current State and Urgency of the Digital Divide


Topics

Development | Economic | Sociocultural


Agreed with

– Li Junhua
– Tripti Sinha
– Nii Quaynor
– Chern Choong Thum

Agreed on

Digital divide represents urgent global challenge requiring immediate attention


38% reduction in development funding next year creates crisis in addressing digital divide – Development Funding Crisis

Explanation

Francis Gurry warns of a massive crisis in development funding, with an estimated 38% reduction next year due to changes in US foreign aid policy and European countries diverting funds to military spending. This funding crisis makes it difficult to address the digital divide when resources are most needed.


Evidence

Attributes funding reduction to change of attitude of the United States in foreign aid and diversion of funding by European countries towards military spending; notes funding is scarcely sufficient to meet debt obligations


Major discussion point

Crisis Points and Emerging Challenges


Topics

Development | Economic


Artificial intelligence arrival at unprecedented speed risks exacerbating digital divide – AI as Accelerating Factor

Explanation

Francis Gurry identifies AI as another general purpose technology that poses risks of exacerbating the digital divide due to its rapid development and the massive investments being made by leading economies. The speed of AI development combined with the funding crisis creates a perfect storm for widening digital gaps.


Evidence

Points to data centers essential for AI infrastructure being located mainly in the north, with the exception of China, and the massive amounts of money being invested in AI development by leading economies


Major discussion point

Crisis Points and Emerging Challenges


Topics

Development | Infrastructure | Economic


Agreed with

– Tripti Sinha
– Nii Quaynor
– Qi Xiaoxia

Agreed on

AI poses both opportunities and risks for exacerbating the digital divide



Tripti Sinha

Speech speed

136 words per minute

Speech length

1010 words

Speech time

442 seconds

Need for infrastructure readiness including physical cables, DNS systems, and root servers – Technical Foundation Requirements

Explanation

Tripti Sinha emphasizes that while physical connectivity is essential, it’s only part of the solution. The Internet depends on a strong technical foundation including domain name systems, IP addresses, and root service systems that may not be visible to users but are crucial for reliable, secure, and scalable Internet function.


Evidence

ICANN coordinates this layer of the Internet, maintains DNS stability and security, manages Internet’s unique identifier systems, and supports root server deployment in underserved regions


Major discussion point

Infrastructure and Technical Solutions


Topics

Infrastructure | Legal and regulatory


Agreed with

– Li Junhua
– Francis Gurry
– Nii Quaynor
– Chern Choong Thum

Agreed on

Digital divide represents urgent global challenge requiring immediate attention


AI can optimize network infrastructure and enable efficient resource allocation for unconnected markets – AI-Powered Network Solutions

Explanation

Tripti Sinha argues that AI offers significant benefits for building networks in unconnected markets through efficient resource allocation, proactive maintenance, and enhanced security. AI-powered solutions can optimize network infrastructure and intelligently assess opportunity gaps.


Evidence

AI enables efficient resource allocation, proactive maintenance, enhanced security, and can intelligently assess opportunity gaps for addressing digital divide


Major discussion point

Infrastructure and Technical Solutions


Topics

Infrastructure | Development


Agreed with

– Francis Gurry
– Nii Quaynor
– Qi Xiaoxia

Agreed on

AI poses both opportunities and risks for exacerbating the digital divide


Multi-stakeholder governance model remains relevant for keeping Internet stable and globally connected – Multi-stakeholder Model Importance

Explanation

Tripti Sinha advocates for ICANN’s multi-stakeholder model that brings together governments, private sector, civil society, and technical community. She argues this open, collaborative, and technically grounded approach has kept the Internet stable, interoperable, and global.


Evidence

ICANN was created as a multi-stakeholder organization bringing together various stakeholders, and this model has helped keep the Internet stable and interoperable


Major discussion point

Governance and International Cooperation


Topics

Legal and regulatory | Infrastructure


Agreed with

– Li Junhua
– Ren Xianliang
– Nii Quaynor

Agreed on

Multi-stakeholder governance approaches are essential but face implementation challenges


Disagreed with

– Qi Xiaoxia

Disagreed on

Governance approach – Multi-stakeholder vs State sovereignty


Risk of fragmentation from state-led approaches and new multilateral models threatening single Internet – Fragmentation Risks

Explanation

Tripti Sinha warns that an increasing number of governments are exploring state-led approaches to infrastructure and governance, with some considering new multilateral models. She argues this divergence from global technical norms threatens the Internet’s core functionality and could separate Global South countries from the global Internet.


Evidence

Notes that countries from the global South could find themselves separated from the global Internet and part of incompatible networks


Major discussion point

Governance and International Cooperation


Topics

Legal and regulatory | Infrastructure


Disagreed with

– Qi Xiaoxia

Disagreed on

Risk assessment of fragmentation vs sovereignty protection


Millions cannot engage with Internet in their own language creating participation barriers – Language Barriers

Explanation

Tripti Sinha identifies language as a significant barrier to Internet participation, noting that millions of users still cannot fully engage with the Internet in their own language or script. This creates barriers to cultural and linguistic participation in the digital world.


Major discussion point

Language and Cultural Inclusion


Topics

Sociocultural | Human rights


Universal acceptance and internationalized domain names critical for cultural and linguistic participation – Multilingual Internet Access

Explanation

Tripti Sinha explains that ICANN’s work on universal acceptance and internationalized domain names directly addresses language barriers. These initiatives ensure that domain names and email addresses in local scripts work across devices, applications, and platforms, enabling cultural and linguistic participation.


Evidence

ICANN’s initiatives ensure domain names and email addresses in local scripts work across devices, applications, and platforms, requiring technical community collaboration


Major discussion point

Language and Cultural Inclusion


Topics

Sociocultural | Infrastructure | Multilingualism



Ren Xianliang

Speech speed

128 words per minute

Speech length

324 words

Speech time

151 seconds

Focus on sustainable infrastructure operation and digital education as the biggest equalizer – Sustainable Infrastructure Focus

Explanation

Ren Xianliang emphasizes the importance of sustainable operation of infrastructure so that digital benefits can truly benefit local populations. He argues that in the digital age, education and training serve as the biggest equalizer for addressing digital divides.


Evidence

Mentions extending infrastructure to developing countries and establishing digital training centers with localized courses to open doors to digitalization


Major discussion point

Infrastructure and Technical Solutions


Topics

Development | Infrastructure | Sociocultural


Agreed with

– Li Junhua
– Qi Xiaoxia
– Nii Quaynor
– Chern Choong Thum

Agreed on

Capacity building and education are fundamental to bridging the digital divide


Need for multilateral participation framework ensuring developing countries’ participation rights – Global Governance Participation

Explanation

Ren Xianliang calls for improving global digital governance mechanisms to ensure participation rights of developing countries. He advocates for international cooperation under multilateral participation frameworks to realize the vision of building, sharing, and governing together.


Evidence

Notes that developing countries are working on key governance mechanisms such as digitalization rights and technical supervision


Major discussion point

Governance and International Cooperation


Topics

Legal and regulatory | Development


Agreed with

– Li Junhua
– Tripti Sinha
– Nii Quaynor

Agreed on

Multi-stakeholder governance approaches are essential but face implementation challenges


World Internet Conference provides platform for Global South to share digital development dividends – International Platform Creation

Explanation

Ren Xianliang positions the World Internet Conference as an international organization that provides a platform for the Global South to participate in digital development. He invites global participation in membership and cooperation to promote technology sharing and capability complementarity.


Evidence

WRC sincerely invites more enterprises, institutions and individuals from all over the world to join membership and start cooperation


Major discussion point

Regional and National Strategies


Topics

Development | Legal and regulatory



Qi Xiaoxia

Speech speed

142 words per minute

Speech length

1202 words

Speech time

506 seconds

China advocates respecting sovereignty in cyberspace and opposing cyber hegemony – Sovereignty in Cyberspace

Explanation

Qi Xiaoxia argues that all countries, regardless of size, strength, and wealth, have the right to independently choose their own path of Internet development and governance models. China advocates for respecting sovereignty in cyberspace and opposes cyber hegemony and politicization of technological issues.


Evidence

Chinese think tanks have released sovereignty in cyberspace theory and practice versions 1.0 to 4.0, providing systematic study of sovereignty application in digital processes


Major discussion point

Governance and International Cooperation


Topics

Legal and regulatory | Human rights


Disagreed with

– Tripti Sinha

Disagreed on

Risk assessment of fragmentation vs sovereignty protection


Digital capacity building through training workshops and knowledge sharing platforms essential – Digital Capacity Building

Explanation

Qi Xiaoxia emphasizes the importance of practical cooperation to enhance digital capacity for the Global South, particularly as AI and emerging technologies raise the threshold for digital development. She advocates for international cooperation in capacity-building and targeted training programs.


Evidence

China will launch five new training workshops for Latin America and Caribbean countries, and for ASEAN countries, with plans for five more workshops for the Global South


Major discussion point

Capacity Building and Education Initiatives


Topics

Development | Sociocultural


Agreed with

– Li Junhua
– Ren Xianliang
– Nii Quaynor
– Chern Choong Thum

Agreed on

Capacity building and education are fundamental to bridging the digital divide


China’s commitment to helping Global South through AI capacity building and international cooperation – China’s Global South Support

Explanation

Qi Xiaoxia outlines China’s comprehensive approach to supporting the Global South, including hosting the World Internet Conference, releasing practice cases, and implementing UN resolutions on AI capacity building. China positions itself as an advocate, promoter, and pioneer in helping bridge the digital divide.


Evidence

UN General Assembly adopted China-sponsored resolution on AI capacity building; China announced action plan with ten major actions; Chinese think tanks launched research report on global AI governance


Major discussion point

Regional and National Strategies


Topics

Development | Legal and regulatory



Nii Quaynor

Speech speed

172 words per minute

Speech length

933 words

Speech time

324 seconds

Africa shows fragile infrastructure despite improving connectivity and user growth – African Infrastructure Challenges

Explanation

Nii Quaynor describes Africa’s digital economy as being at midway user penetration with improving connectivity, but having fragile infrastructure and known technical capacity needs. Despite progress in various areas, fundamental challenges remain in building resilient digital infrastructure.


Evidence

Africa is at 4.4 domain names per thousand compared to global 45 per thousand; 10 CCTLD registries have 92% of names; only 13 ICANN accredited registrars in Africa versus over 1,000 globally


Major discussion point

Current State and Urgency of the Digital Divide


Topics

Infrastructure | Development


Agreed with

– Li Junhua
– Francis Gurry
– Tripti Sinha
– Chern Choong Thum

Agreed on

Digital divide represents urgent global challenge requiring immediate attention


Every new technology brings distinct divides and may widen existing ones – Technology Divide Pattern

Explanation

Nii Quaynor provides historical perspective showing that technology divides have a long history, with each new technology era bringing its own distinct challenges. He traces this pattern from the 1970s computer science era through the 1990s Internet arrival to current AI developments.


Evidence

In the 70s: missing human resources for computer science; 80s: scientific instrumentation and VLSI deficiencies; 90s: Internet arrival with new divides


Major discussion point

Crisis Points and Emerging Challenges


Topics

Development | Infrastructure


AI threatens digital divide most due to high infrastructure costs and technical skill requirements – AI Infrastructure Barriers

Explanation

Nii Quaynor warns that maturing AI technology poses the greatest threat to the digital divide because of the associated high costs of infrastructure, high power requirements, and technical skills needed to be on the supply side. This makes it particularly challenging for Global South countries to participate.


Major discussion point

Crisis Points and Emerging Challenges


Topics

Infrastructure | Development | Economic


Agreed with

– Francis Gurry
– Tripti Sinha
– Qi Xiaoxia

Agreed on

AI poses both opportunities and risks for exacerbating the digital divide


Multi-stakeholder approach has potency but requires good moderation and can face participation challenges in Global South – Governance Implementation Challenges

Explanation

Nii Quaynor acknowledges the effectiveness of multi-stakeholder approaches while noting their requirements and limitations. He points out that participation in global multi-stakeholder organizations can be challenging for the Global South due to resource constraints and the need for quality participants.


Evidence

Multi-stakeholder approach needs meritorious moderator to call consensus; participation in global MS organizations is voluntary or by paid staff; Global South can have challenges finding good participants


Major discussion point

Governance and International Cooperation


Topics

Legal and regulatory | Development


Agreed with

– Li Junhua
– Tripti Sinha
– Ren Xianliang

Agreed on

Multi-stakeholder governance approaches are essential but face implementation challenges


Need for intergenerational mentorship and coaching to optimize knowledge transfer – Knowledge Transfer Optimization

Explanation

Nii Quaynor emphasizes the importance of optimizing knowledge transfer and capacity building through strong fundamental education and intergenerational mentorship and coaching. He sees this as crucial for building sustainable digital capacity in the Global South.


Major discussion point

Capacity Building and Education Initiatives


Topics

Development | Sociocultural


Agreed with

– Li Junhua
– Ren Xianliang
– Qi Xiaoxia
– Chern Choong Thum

Agreed on

Capacity building and education are fundamental to bridging the digital divide



Chern Choong Thum

Speech speed

151 words per minute

Speech length

855 words

Speech time

339 seconds

5.5 billion people are online but a third of the world remains disconnected, predominantly in Global South rural areas – Uneven Global Progress

Explanation

Chern Choong Thum cites ITU Facts and Figures 2024 to highlight the stark disparity in global internet access. While internet use is almost universal in high-income nations, it drops to just 27% in low-income economies, with 5G reach being only 4% in the poorest countries.


Evidence

ITU Facts and Figures 2024 shows internet use almost universal in high-income nations but only 27% in low-income economies; 5G reach is mere 4% in poorest countries


Major discussion point

Current State and Urgency of the Digital Divide


Topics

Development | Infrastructure


Agreed with

– Li Junhua
– Francis Gurry
– Tripti Sinha
– Nii Quaynor

Agreed on

Digital divide represents urgent global challenge requiring immediate attention


Malaysia’s NADI Centers provide internet access and ICT training, with AI skills programs – Community Access Centers

Explanation

Chern Choong Thum describes Malaysia’s National Information Dissemination Centres (NADI) as a tangible solution with 1,069 operational centers nationwide providing collective internet access and vital ICT training. These centers specifically target rural and urban poor communities and include AI skills training through collaboration with Microsoft.


Evidence

1,069 NADI Centers operational nationwide; collaboration between Malaysian Communications and Multimedia Commission (MCMC) and Microsoft for AI Teach Skills programme


Major discussion point

Capacity Building and Education Initiatives


Topics

Development | Infrastructure | Sociocultural


Agreed with

– Li Junhua
– Ren Xianliang
– Qi Xiaoxia
– Nii Quaynor

Agreed on

Capacity building and education are fundamental to bridging the digital divide


Malaysia’s Jandela Plan equipped 9 million premises with fiber optic access and boosted mobile speeds – National Infrastructure Success

Explanation

Chern Choong Thum highlights Malaysia’s Jandela Plan as a successful infrastructure initiative that has equipped over 9 million premises with fiber optic access, boosted median mobile download speeds to 105 Mbps, and extended internet coverage in populated areas to 98.66% as of December 2024.


Evidence

As of December 2024, Jandela equipped over 9 million premises with fiber optic access, boosted median mobile download speeds to 105 Mbps, extended internet coverage in populated areas to 98.66%


Major discussion point

Infrastructure and Technical Solutions


Topics

Infrastructure | Development


Malaysia champions inclusivity and sustainability as ASEAN Chair with human-centric policies – ASEAN Leadership Approach

Explanation

Chern Choong Thum explains Malaysia’s leadership role as ASEAN Chair in 2025, championing inclusivity and sustainability under the theme that it is not enough to grow fast; the region must also grow together and sustainably. Malaysia advocates for human-centric policies and ethical AI governance frameworks.


Evidence

Kuala Lumpur Declaration sealed in May envisions shared future where no one is left behind; Malaysia developed National Guidelines on AI Governance and Ethics and collaborated on ASEAN AI Guide


Major discussion point

Regional and National Strategies


Topics

Legal and regulatory | Development | Human rights


Agreements

Agreement points

Digital divide represents urgent global challenge requiring immediate attention

Speakers

– Li Junhua
– Francis Gurry
– Tripti Sinha
– Nii Quaynor
– Chern Choong Thum

Arguments

2.6 billion people remain offline, majority in least developed countries – Digital Divide as Opportunity Gap


Digital technology has penetrated all aspects of life making lack of access a major disadvantage – Digital Technology as Foundation of Modern Life


Need for infrastructure readiness including physical cables, DNS systems, and root servers – Technical Foundation Requirements


Africa shows fragile infrastructure despite improving connectivity and user growth – African Infrastructure Challenges


5.5 billion people are online but a third of the world remains disconnected, predominantly in Global South rural areas – Uneven Global Progress


Summary

All speakers acknowledge the digital divide as a critical and urgent global challenge, with billions still offline, particularly in the Global South. They agree this represents not just access gaps but opportunity gaps that require immediate coordinated action.


Topics

Development | Infrastructure


AI poses both opportunities and risks for exacerbating the digital divide

Speakers

– Francis Gurry
– Tripti Sinha
– Nii Quaynor
– Qi Xiaoxia

Arguments

Artificial intelligence arrival at unprecedented speed risks exacerbating digital divide – AI as Accelerating Factor


AI can optimize network infrastructure and enable efficient resource allocation for unconnected markets – AI-Powered Network Solutions


AI threatens digital divide most due to high infrastructure costs and technical skill requirements – AI Infrastructure Barriers


Digital capacity building through training workshops and knowledge sharing platforms essential – Digital Capacity Building


Summary

Speakers agree that AI represents a double-edged sword – offering solutions for network optimization and resource allocation while simultaneously threatening to widen the digital divide due to high infrastructure costs and technical requirements.


Topics

Development | Infrastructure | Economic


Multi-stakeholder governance approaches are essential but face implementation challenges

Speakers

– Li Junhua
– Tripti Sinha
– Ren Xianliang
– Nii Quaynor

Arguments

Bottom-up grassroots processes are foundational to global efforts, giving communities voice


Multi-stakeholder governance model remains relevant for keeping Internet stable and globally connected – Multi-stakeholder Model Importance


Need for multilateral participation framework ensuring developing countries’ participation rights – Global Governance Participation


Multi-stakeholder approach has potency but requires good moderation and can face participation challenges in Global South – Governance Implementation Challenges


Summary

All speakers support multi-stakeholder governance models while acknowledging practical challenges in implementation, particularly ensuring meaningful participation from Global South countries and communities.


Topics

Legal and regulatory | Development


Capacity building and education are fundamental to bridging the digital divide

Speakers

– Li Junhua
– Ren Xianliang
– Qi Xiaoxia
– Nii Quaynor
– Chern Choong Thum

Arguments

Bottom-up grassroots processes are foundational to global efforts, giving communities voice


Focus on sustainable infrastructure operation and digital education as the biggest equalizer – Sustainable Infrastructure Focus


Digital capacity building through training workshops and knowledge sharing platforms essential – Digital Capacity Building


Need for intergenerational mentorship and coaching to optimize knowledge transfer – Knowledge Transfer Optimization


Malaysia’s NADI Centers provide internet access and ICT training, with AI skills programs – Community Access Centers


Summary

Speakers unanimously agree that capacity building, education, and skills development are crucial equalizers in addressing the digital divide, with emphasis on localized training programs and community-based approaches.


Topics

Development | Sociocultural


Similar viewpoints

Both speakers provide historical and economic context showing that technology divides are persistent challenges that require sustained resources, with current funding crises making the situation more critical.

Speakers

– Francis Gurry
– Nii Quaynor

Arguments

38% reduction in development funding next year creates crisis in addressing digital divide – Development Funding Crisis


Every new technology brings distinct divides and may widen existing ones – Technology Divide Pattern


Topics

Development | Economic


Both speakers emphasize the importance of maintaining global Internet unity while respecting national sovereignty, though from different perspectives – technical stability versus political sovereignty.

Speakers

– Tripti Sinha
– Qi Xiaoxia

Arguments

Risk of fragmentation from state-led approaches and new multilateral models threatening single Internet – Fragmentation Risks


China advocates respecting sovereignty in cyberspace and opposing cyber hegemony – Sovereignty in Cyberspace


Topics

Legal and regulatory | Infrastructure


Both speakers advocate for inclusive international platforms and human-centric approaches to digital development, emphasizing the importance of ensuring no one is left behind.

Speakers

– Ren Xianliang
– Chern Choong Thum

Arguments

World Internet Conference provides platform for Global South to share digital development dividends – International Platform Creation


Malaysia champions inclusivity and sustainability as ASEAN Chair with human-centric policies – ASEAN Leadership Approach


Topics

Development | Legal and regulatory | Human rights


Unexpected consensus

Language and cultural barriers as significant digital divide factors

Speakers

– Tripti Sinha

Arguments

Millions cannot engage with Internet in their own language creating participation barriers – Language Barriers


Universal acceptance and internationalized domain names critical for cultural and linguistic participation – Multilingual Internet Access


Explanation

While most speakers focused on infrastructure and economic barriers, Tripti Sinha uniquely highlighted language and cultural barriers as significant factors in the digital divide, representing an important but often overlooked dimension of digital inclusion.


Topics

Sociocultural | Infrastructure | Multilingualism


Digital divide as public health priority

Speakers

– Chern Choong Thum

Arguments

5.5 billion people are online but a third of the world remains disconnected, predominantly in Global South rural areas – Uneven Global Progress


Explanation

Chern Choong Thum uniquely framed the digital divide as a public health issue, noting how digital exclusion deepens health inequalities and cuts off access to life-saving services, providing a medical perspective not emphasized by other speakers.


Topics

Development | Human rights


Overall assessment

Summary

Strong consensus exists among speakers on the urgency of addressing the digital divide, the dual nature of AI as both solution and challenge, the importance of multi-stakeholder governance, and the critical role of capacity building and education.


Consensus level

High level of consensus with complementary perspectives rather than conflicting views. The agreement spans technical, policy, and implementation aspects, suggesting a mature understanding of the challenges and potential for coordinated action. The consensus implies strong foundation for international cooperation and coordinated strategies to bridge the digital divide.


Differences

Different viewpoints

Governance approach – Multi-stakeholder vs State sovereignty

Speakers

– Tripti Sinha
– Qi Xiaoxia

Arguments

Multi-stakeholder governance model remains relevant for keeping Internet stable and globally connected – Multi-stakeholder Model Importance


China advocates respecting sovereignty in cyberspace and opposing cyber hegemony – Sovereignty in Cyberspace


Summary

Tripti Sinha advocates for ICANN’s multi-stakeholder model bringing together governments, the private sector, civil society, and the technical community, while Qi Xiaoxia emphasizes state sovereignty in cyberspace and countries’ rights to independently choose their own Internet development paths and governance models.


Topics

Legal and regulatory | Infrastructure


Risk assessment of fragmentation vs sovereignty protection

Speakers

– Tripti Sinha
– Qi Xiaoxia

Arguments

Risk of fragmentation from state-led approaches and new multilateral models threatening single Internet – Fragmentation Risks


China advocates respecting sovereignty in cyberspace and opposing cyber hegemony – Sovereignty in Cyberspace


Summary

Tripti Sinha warns that state-led approaches and new multilateral models could fragment the Internet and separate Global South countries from the global Internet, while Qi Xiaoxia frames state sovereignty as protection against cyber hegemony and the politicization of technological issues.


Topics

Legal and regulatory | Infrastructure


Unexpected differences

Multi-stakeholder governance effectiveness in Global South

Speakers

– Tripti Sinha
– Nii Quaynor

Arguments

Multi-stakeholder governance model remains relevant for keeping Internet stable and globally connected – Multi-stakeholder Model Importance


Multi-stakeholder approach has potency but requires good moderation and can face participation challenges in Global South – Governance Implementation Challenges


Explanation

While both speakers support multi-stakeholder approaches, Nii Quaynor provides a more nuanced critique highlighting practical implementation challenges in the Global South, including resource constraints and participation difficulties, which somewhat contradicts Tripti Sinha’s more optimistic view of the model’s universal applicability.


Topics

Legal and regulatory | Development


Overall assessment

Summary

The main areas of disagreement center on governance approaches (multi-stakeholder vs state sovereignty), risk assessment of Internet fragmentation, and implementation strategies for addressing the digital divide.


Disagreement level

Moderate disagreement level with significant implications. While speakers largely agree on the urgency of bridging the digital divide, their fundamental differences on governance models could impact international cooperation efforts. The tension between multi-stakeholder governance and state sovereignty represents a core challenge in global Internet governance that could affect policy coordination and resource allocation for Global South development initiatives.


Takeaways

Key takeaways

The digital divide has evolved beyond infrastructure to encompass affordability, digital skills, and meaningful participation, with 2.6 billion people still offline globally


A critical crisis point exists due to a 38% reduction in development funding coinciding with AI’s rapid advancement, which risks dramatically exacerbating existing digital divides


Multi-stakeholder governance models remain essential for maintaining a unified, interoperable global Internet, but face implementation challenges in the Global South


Infrastructure development must be coupled with capacity building, digital literacy programs, and culturally inclusive solutions including multilingual Internet access


Bottom-up, community-driven approaches are foundational to bridging divides, requiring local empowerment alongside global coordination


AI presents both opportunities (network optimization, resource allocation) and threats (high infrastructure costs, technical skill requirements) for addressing digital divides


Regional cooperation and South-South collaboration are crucial, with successful models like Malaysia’s NADI Centers and China’s capacity building initiatives showing practical pathways forward


Resolutions and action items

World Internet Conference commits to deepening cooperation with Global South through continued dialogue and engagement platforms


China announced implementation of UN resolution on AI capacity building with ten major actions and five additional training workshops for Global South countries


Malaysia will leverage its 2025 ASEAN Chairmanship to champion inclusivity and sustainability themes in digital development


ICANN commits to continued support for technical resilience, multilingual access, and global connectivity in underserved regions


Call for international community to join multi-channel exchange platforms and assistance programs for Global South digital capacity building


WSIS+20 review identified as an opportunity to renew global commitment to digital inclusion and meaningful access for all


Unresolved issues

How to address the massive development funding crisis while meeting increased needs for AI-era digital infrastructure


Balancing national sovereignty in cyberspace with maintaining unified global Internet standards and interoperability


Making multi-stakeholder governance models work more effectively in Global South contexts where participation can be challenging


Preventing AI advancement from creating new forms of digital colonialism or widening existing technological gaps


Sustainable financing models for ongoing infrastructure maintenance and improvement in resource-constrained environments


Addressing the concentration and consolidation of dominant Internet providers that disadvantage newcomers and Global South participation


Suggested compromises

Respecting national sovereignty in cyberspace while maintaining global technical standards through neutral multi-stakeholder platforms


Leveraging AI technology to address digital divides (network optimization, resource allocation) while simultaneously building capacity to prevent AI from creating new divides


Combining top-down international cooperation frameworks with bottom-up community-driven solutions to ensure local relevance and global coordination


Balancing rapid technological advancement with sustainable, inclusive development that doesn’t leave communities behind


Integrating universal acceptance and internationalized domain names into national ICT strategies while maintaining global Internet interoperability


Thought provoking comments

Despite the progress, I think we are at a real crisis point in relation to the digital divide, and that crisis I think comes from two challenges. The first challenge is the crisis in development funding… And the second problem is that never has funding and development assistance been more needed than at the present time when artificial intelligence is coming online at such a speed that it is baffling to all of us.

Speaker

Francis Gurry


Reason

This comment reframes the entire discussion by identifying a critical paradox: just as AI creates unprecedented opportunities and needs for bridging the digital divide, the resources to address it are dramatically shrinking. Gurry quantifies this with the stark statistic of 38% less development funding, creating urgency around what could otherwise be seen as a gradual progress issue.


Impact

This comment fundamentally shifted the tone from optimistic progress reporting to crisis management. It influenced subsequent speakers to address practical solutions and international cooperation more urgently. The ‘crisis framing’ became a recurring theme, with later speakers like Tripti Sinha acknowledging ‘we are in a financial crisis’ and emphasizing the need for strategic coordination.


As the old adage goes, knowledge begets knowledge, wealth begets wealth, and those who possess these will only have the opportunity to obtain more. Similarly, innovation begets innovation. And for those who are not part of this opportunity ecosystem, you know, they will suffer and they will fall behind.

Speaker

Tripti Sinha


Reason

This philosophical observation introduces a systems thinking perspective that explains why the digital divide is self-perpetuating and accelerating. It moves beyond technical solutions to address the fundamental economic and social dynamics that make digital inequality a compounding problem rather than a static gap.


Impact

This comment deepened the analytical framework of the discussion, moving it from infrastructure-focused solutions to systemic inequality concerns. It provided intellectual foundation for why urgent, coordinated action is needed and influenced the conversation toward more holistic approaches that address root causes rather than just symptoms.


It appears every new technology comes with its distinct divides, and some may widen other divides… The maturing AI technology threatens the digital divide the most, given associated high cost of infrastructure, high power requirements and technical skills needed to be on the supply side.

Speaker

Nii Quaynor


Reason

This historical perspective from someone dubbed the ‘Father of the Internet in Africa’ provides crucial context by showing that digital divides are not anomalies but predictable patterns that accompany technological advancement. His ground-level experience adds authenticity to the theoretical discussions, and he warns that AI represents the most challenging divide yet.


Impact

Quaynor’s historical framing validated the crisis narrative established by earlier speakers while providing practical credibility from someone who has lived through multiple technology transitions. His comment influenced the discussion to consider AI not just as a solution tool but as a potential amplifier of existing inequalities, adding nuance to the technology-optimism expressed by other speakers.


Digital exclusion deepens health inequalities, inequities, cutting off access to life-saving services and vital health education… Digital inclusion is not just an economic imperative, it is also a public health priority.

Speaker

Chern Choong Thum


Reason

As the youth representative and a doctor, Thum brings a human-centered perspective that connects abstract digital policy to tangible life-and-death consequences. His medical background provides unique authority to discuss how digital divides translate into health disparities, making the issue more visceral and urgent.


Impact

This comment humanized the entire discussion by connecting digital access to fundamental human needs like healthcare. It broadened the conversation beyond economic development to include social justice and human rights dimensions, influencing the final framing of digital inclusion as a moral imperative rather than just a development goal.


The multi-stakeholder approach has its potency well-known, but is also known to have requirements… It is necessary to avoid capture and can sometimes result in a decision by fatigue. It also needs a meritorious moderator to call consensus in deliberations.

Speaker

Nii Quaynor


Reason

This is a rare moment of critical self-reflection about the governance model that underlies the entire forum. Quaynor acknowledges the limitations of the multi-stakeholder approach that everyone else takes for granted, introducing necessary skepticism about whether current governance structures are adequate for addressing the digital divide.


Impact

This comment introduced a meta-level critique that challenged the fundamental assumptions of the forum itself. It added complexity to discussions about governance solutions and influenced later speakers to be more specific about implementation mechanisms rather than just advocating for more multi-stakeholder cooperation.


Overall assessment

These key comments transformed what could have been a routine policy discussion into a more urgent, nuanced, and strategically focused conversation. Gurry’s crisis framing established the stakes, Sinha’s systems thinking explained the underlying dynamics, Quaynor’s historical perspective provided credibility and warnings, Thum’s health focus humanized the issues, and Quaynor’s governance critique added necessary self-reflection. Together, these interventions elevated the discussion from incremental progress reporting to strategic crisis response, while maintaining focus on practical solutions and human-centered outcomes. The comments created a progression from problem identification to systemic analysis to implementation challenges, resulting in a more sophisticated understanding of both the urgency and complexity of bridging the digital divide.


Follow-up questions

How can we develop a major international strategic plan to address the digital divide crisis caused by reduced development funding and rapid AI advancement?

Speaker

Francis Gurry


Explanation

Gurry identified a critical crisis point where development funding is decreasing by 38% while AI technology is advancing rapidly, creating an urgent need for a coordinated international response


How can we make the multi-stakeholder approach work better in the Global South, particularly addressing governance divide issues?

Speaker

Nii Quaynor


Explanation

Quaynor highlighted challenges with multi-stakeholder participation in the Global South, including difficulties finding good participants and potential decision fatigue, suggesting this governance approach needs improvement


How can we review and reform frameworks to enable innovation and creation rather than just regulate usage in rapidly evolving technology environments?

Speaker

Nii Quaynor


Explanation

Quaynor noted that the non-existence of stimulative and adaptive frameworks for rapidly evolving technology tends to leave innovations hibernating, requiring policy reform


Where is the revenue to maintain, improve, and develop infrastructure services constantly in the Global South?

Speaker

Nii Quaynor


Explanation

Quaynor raised concerns about the economic sustainability of internet infrastructure in the Global South, questioning the financial model for ongoing maintenance and development


How can AI be harnessed to address the digital divide rather than generate new divides?

Speaker

Nii Quaynor


Explanation

Given AI’s high infrastructure costs, power requirements, and the technical skills needed, there is a need to explore how AI can be leveraged for good and for digital unity rather than widening gaps


How can we ensure that universal acceptance and internationalized domain names work across all devices, applications, and platforms through technical community coordination?

Speaker

Tripti Sinha


Explanation

Sinha emphasized that language barriers prevent millions from fully engaging with the Internet, requiring coordinated technical efforts across the technology stack
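
As a side note not drawn from the session, the minimal sketch below illustrates what universal acceptance involves at the application level: an internationalized domain name (IDN) written in a local script must be converted to its ASCII-compatible “punycode” form before DNS resolution, and software across the stack has to accept both representations. The domain name and helper functions are purely hypothetical examples, and the sketch assumes the third-party Python idna library.

```python
# Illustrative sketch only (not from the session): converting an internationalized
# domain name (IDN) between the Unicode form users see and the ASCII-compatible
# form that DNS actually resolves. "münchen.example" is a hypothetical domain.

import idna  # third-party IDNA 2008 library (pip install idna)

def to_ascii(domain: str) -> str:
    """Return the ASCII-compatible (A-label) form used on the wire."""
    return idna.encode(domain).decode("ascii")

def to_unicode(domain: str) -> str:
    """Return the Unicode (U-label) form shown to users."""
    return idna.decode(domain)

if __name__ == "__main__":
    unicode_name = "münchen.example"
    ascii_name = to_ascii(unicode_name)    # -> "xn--mnchen-3ya.example"
    print(ascii_name)
    print(to_unicode(ascii_name))          # round-trips to "münchen.example"
```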


How can we prevent fragmentation at the technical level and maintain a single interoperable Internet while respecting national interests?

Speaker

Tripti Sinha


Explanation

Sinha warned about the growing risk of technical fragmentation as governments explore state-led approaches, which could separate Global South countries from the global Internet


How can we optimize knowledge transfer and capacity building through intergenerational mentorship and coaching in the Global South?

Speaker

Nii Quaynor


Explanation

Quaynor suggested this as a key strategy for addressing digital divides, but the specific mechanisms and implementation approaches need further development


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Open Forum #78 Shaping the Future with Multistakeholder Foresight

Open Forum #78 Shaping the Future with Multistakeholder Foresight

Session at a glance

Summary

This discussion focused on a strategic foresight project commissioned by the German Federal Ministry for Digital Transformation, which developed scenarios for internet governance in 2040. The session featured Philipp Schulte from the German ministry, Julia Pohler, who led the scenario development task force, and panelists Anriette Esterhuysen and Gbenga Sesan, who were interviewed as part of the process.


Julia Pohler explained that strategic foresight is not about predicting exact futures but rather developing plausible scenarios to help stakeholders prepare for uncertainties and disruptions. The German project created four distinct scenarios exploring different trajectories for internet governance over the next 15 years, ranging from continued geopolitical competition to complete internet fragmentation to over-regulation to transformation toward public goods orientation. The process involved a 15-member German task force representing diverse stakeholder groups, supplemented by interviews with international experts to bring global perspectives.


A key finding was that geopolitics and the role of states emerged as the dominant driving forces across nearly all scenarios, with actions by major powers like the US, China, and Russia being central factors. Pohler noted that geopolitical developments have already moved faster than anticipated when the scenarios were written, suggesting the reality is outpacing their projections. Significantly, none of the scenarios except one showed a bright future for multi-stakeholder internet governance, with most depicting it as either hollowed out or institutionalized to the point of losing meaning.


The panelists found the interview process valuable and intellectually stimulating, though they noted some limitations including the abstract nature of the exercise and concerns about implementation. There was discussion about whether the report would be practically useful, with consensus that while the scenarios themselves might have limited direct application, the participatory process of developing them was extremely valuable for expanding thinking and preparing stakeholders for different possibilities.


The conversation highlighted broader challenges facing internet governance, including the tension between idealistic multi-stakeholder principles and geopolitical realities, the need for more concrete and courageous discussions about desired outcomes, and suggestions for making the IGF more interactive and willing to tackle difficult questions without seeking consensus on everything.


Keypoints

## Overall Purpose/Goal


This discussion centered on a strategic foresight project commissioned by the German Ministry for Digital Transformation, which developed four scenarios for internet governance in 2040. The session aimed to present the methodology, findings, and implications of this foresight exercise while exploring how such approaches could inform future policy-making and multi-stakeholder processes.


## Major Discussion Points


– **Strategic Foresight Methodology and Process**: The panelists explained how strategic foresight works – not to predict exact futures, but to develop plausible scenarios that help stakeholders prepare for uncertainties. The German project involved a 15-member task force representing diverse communities, supplemented by expert interviews and validation workshops to create four distinct scenarios for internet governance.


– **The Dominant Role of States in Future Scenarios**: A key finding was that geopolitical factors and state actions emerged as the primary drivers across most scenarios, rather than civil society or technical community initiatives. This represented a shift from earlier foresight exercises that might have emphasized corporate actors or civil society as main drivers.


– **Crisis of Multi-stakeholder Governance**: The scenarios revealed a troubling pattern where multi-stakeholder processes were either being hollowed out, undermined by state and corporate actors, or becoming so institutionalized that they lost their bottom-up character and meaningful impact. This finding prompted reflection on whether current multi-stakeholder models are living up to their promises.


– **Need for More Concrete and Courageous Approaches**: Panelists emphasized moving beyond abstract ideals to address specific challenges like fair taxation of big tech, data extractivism, and digital barriers created by platform monopolies. They called for “braver” multi-stakeholder forums willing to tackle difficult questions without consensus.


– **Practical Applications and Future Directions**: Discussion focused on how to make foresight exercises more useful through interactive formats like scenario games, better stakeholder engagement, and clearer pathways from analysis to policy implementation. Suggestions included redesigning IGF sessions to be more participatory and innovative.


## Overall Tone


The discussion maintained a constructive but increasingly critical tone. It began with technical explanations of the foresight methodology, but evolved into more pointed critiques of current multi-stakeholder governance limitations. While panelists expressed appreciation for the German government’s initiative, they became more direct about systemic problems and the need for fundamental changes. The tone remained collaborative throughout, with participants building on each other’s observations and offering concrete suggestions for improvement, though there was an underlying urgency about addressing the challenges identified in the scenarios.


Speakers

**Speakers from the provided list:**


– **Philipp Schulte** – Senior policy officer at the Federal Ministry for Digital Transformation and State Modernization in Germany


**Julia Pohler** – Co-lead of a research group on the politics of digitalization at the Berlin Social Center, co-author of the future scenarios on internet governance, and task force lead for the strategic foresight project


**Anriette Esterhuysen** – Senior advisor for global and regional Internet governance with the Association for Progressive Communications, former MAG Chair, long-time IGF participant


– **Gbenga Sesan** – Executive Director of the Paradigm Initiative, IGF MAG leadership panel member


– **Audience** – Various audience members asking questions (roles/titles not specified for most)


**Additional speakers:**


– **Professor Roberta Haar** – Professor at Maastricht University, leading a horizon project called Remit Research, develops scenario testing workshops and games


– **Bertrand de la Chapelle** – Executive director of the Internet and Jurisdiction Policy Network


Full session report

# Strategic Foresight for Internet Governance: Scenarios for 2040 – Discussion Report


## Introduction and Context


This discussion centered on a strategic foresight project commissioned by the German Federal Ministry for Digital Transformation and State Modernization, which developed four scenarios for internet governance in 2040. The session brought together Philipp Schulte, a senior policy officer from the commissioning ministry; Julia Pohler, co-lead of a research group on politics of digitalization at the Berlin Social Center, who led the scenario development task force; and experienced internet governance practitioners Anriette Esterhuysen (senior advisor for global and regional Internet governance with the Association for Progressive Communications) and Gbenga Sesan, both of whom were interviewed as experts during the research process.


## Strategic Foresight Methodology and Process


### Understanding Strategic Foresight


Julia Pohler explained that strategic foresight differs fundamentally from prediction or forecasting. Rather than attempting to determine what will happen, the methodology develops plausible scenarios to help stakeholders prepare for uncertainties and potential disruptions. As she noted, “We’re not trying to predict the future, but we’re trying to develop scenarios that are plausible and that help us to think about what could happen and how we can prepare for it.”


The German project employed a structured participatory methodology involving a 15-member task force representing diverse stakeholder groups from German academia, business, civil society, and the technical community. This core group was supplemented by interviews with international experts to ensure global perspectives were incorporated.


### The Four Scenarios


The task force developed four distinct scenarios for internet governance in 2040:


1. **Continuation of current trends** – Characterized by ongoing geoeconomic competition between major powers


2. **Complete systemic collapse** – Featuring internet fragmentation and breakdown of current governance structures


3. **Over-regulation** – Where everything becomes heavily controlled and regulated


4. **Transformation toward public goods orientation** – A shift toward treating internet infrastructure and governance as public goods


### Expert Perspectives on the Process


Anriette Esterhuysen found the interview process intellectually stimulating, noting that “foresight exercises are valuable for the participatory process itself, allowing creative thinking beyond current constraints.” However, she expressed some limitations with the individual interview format, preferring group dynamics for such discussions.


Gbenga Sesan appreciated the intellectual challenge, describing it as valuable for helping organizations adjust their strategies as reality unfolds. He emphasized that such exercises help stakeholders think beyond immediate concerns and consider longer-term implications of current trends.


## Key Finding: The Dominance of Geopolitical Factors


### States as Primary Drivers


One of the most significant findings was the unexpected prominence of geopolitical factors and state actions across nearly all scenarios. Julia Pohler observed that “the most important factor in almost all scenarios is actually the role of states and the role of governments,” which emerged more strongly than anticipated during the scenario development process.


This finding proved particularly prescient given subsequent developments. Pohler noted that “we wrote these scenarios before President Trump took office again. And before we kind of saw this increase of geopolitical tensions… So I think today we would have gone even further in emphasizing the role of geopolitics and geoeconomics… the reality is actually moving faster than we thought it would.”


### Reframing State Involvement


The prominence of state actors prompted important discussions about their role in internet governance. Philipp Schulte used a gardening metaphor to describe the state’s role: “The state should act as a gardener, ensuring that all stakeholder groups can perform their roles effectively.” This positioned governments not as threats to multi-stakeholder governance but as essential facilitators.


Anriette Esterhuysen challenged traditional assumptions, arguing that “states have always had profound impact on governance inclusivity and should not be seen as undermining multistakeholder ideals.” She provided concrete examples from WSIS processes where government positions significantly shaped outcomes.


## Challenges to Multi-Stakeholder Governance


### Sobering Scenario Outcomes


Perhaps the most concerning finding was what the scenarios revealed about the future of multi-stakeholder governance. Julia Pohler stated bluntly: “I think in all of these scenarios we ended up writing possible futures in where multi-stakeholder processes are either being hollowed out or kind of completely undermined by corporate actors and state actors… So I would say that in all of these scenarios, somehow multi-stakeholderism and governance has outlived its promises.”


Only one of the four scenarios showed a positive future for multi-stakeholder approaches, with the others depicting processes that had either lost meaning through over-institutionalization or been systematically undermined.


### Calls for More Direct Engagement


Anriette Esterhuysen argued for more concrete and courageous discussions in multi-stakeholder forums. She provided specific examples of topics often avoided: “I want fair tax payment by big tech so that countries who need revenue to actually build a fiber optic backbone… I want data flows that are not based on an extractive sort of colonial type model… but it’s almost impossible to say those things in the context of so many multi-stakeholder fora because you don’t want to offend the private sector.”


She criticized the tendency toward a “watered down set of wedding vows” rather than meaningful policy discussions, calling for forums willing to tackle controversial issues without requiring universal agreement.


## Technology’s Complex Role


### Unfulfilled Promises


Anriette Esterhuysen reflected on technology’s role, noting that “early hopes that technology would be an equalizer between rich and poor have not fully materialized.” This observation highlighted how technological development interacts with existing power structures rather than automatically disrupting them.


### Corporate Fragmentation


Julia Pohler highlighted an often overlooked source of digital fragmentation: “Digital barriers created by big tech companies fragment online spaces as much as government regulation.” This expanded the discussion beyond traditional concerns about government restrictions to consider how platform business models contribute to digital division.


## Audience Engagement and Future Directions


### Collaborative Proposals


Professor Roberta Haar, leading a horizon project called Remit Research, proposed collaboration to develop scenario testing workshops and games based on the German project’s findings. She suggested moving beyond traditional report formats toward more interactive methodologies.


Bertrand de la Chapelle contributed observations about the limitations of multilateral systems and the upcoming discussions about the IGF’s future mandate in 2026, emphasizing the need for institutional evolution.


### Making Foresight More Interactive


Several speakers advocated for more engaging approaches to strategic foresight. Anriette Esterhuysen suggested that “the IGF needs renewal and redesign to tackle difficult questions without seeking consensus on everything,” proposing “more participative methodologies including scenario games rather than traditional panel formats.”


## Implementation Challenges and Next Steps


### Government Perspective


Philipp Schulte acknowledged the challenge of translating foresight exercises into policy action. He noted that Germany is currently in a government transition period, which affects the timeline for publishing the full report. The new ministry responsible for strategic foresight will handle publication once established.


### Process Improvements


Julia Pohler suggested that “future foresight processes should include government representatives directly in task forces for better implementation.” However, this raised questions about maintaining the independence and multi-stakeholder character of such exercises.


### Updating Scenarios


Given the rapid pace of geopolitical change, speakers acknowledged that the scenarios would benefit from updates reflecting current realities, potentially through addendums or annexes to maintain relevance.


## Practical Applications


### Beyond Traditional Formats


The discussion revealed strong interest in moving beyond conventional panel discussions toward more participatory approaches. The German scenarios could serve as the basis for interactive workshops, games, and simulation exercises that engage stakeholders more directly.


### Institutional Reform


The conversation included specific suggestions for reforming existing governance institutions, particularly the IGF. Speakers called for greater willingness to address controversial topics while maintaining inclusive character, and for adopting new methodologies that move beyond consensus-seeking on every issue.


## Conclusion


This discussion demonstrated the value of strategic foresight exercises both as planning tools and as catalysts for critical reflection on existing governance approaches. The sobering finding that multi-stakeholder governance faces significant challenges in most future scenarios prompted important conversations about reform and renewal.


The unexpected prominence of geopolitical factors across scenarios highlighted the need to better understand and engage with state actors as potential enablers rather than threats to inclusive governance. Similarly, the recognition that corporate actions contribute significantly to digital fragmentation suggested the need for more direct engagement with platform business models and big tech power.


The strong interest in more interactive and participatory methodologies, combined with calls for more courageous discussions of controversial topics, indicated potential pathways for more effective governance approaches. The challenge moving forward will be translating these insights into concrete actions while maintaining the inclusive character that defines multi-stakeholder approaches.


As Julia Pohler noted, the reality of geopolitical change is moving faster than anticipated, making such foresight exercises increasingly valuable for preparing stakeholders to navigate uncertain futures. The German project serves as both a methodological model and a wake-up call for the internet governance community to engage more seriously with the fundamental challenges facing current governance models.


Session transcript

Philipp Schulte: Hello, good morning, good afternoon everybody. Welcome to our open forum, Shaping the Future with Multi-Stakeholder Foresight. My name is Philipp Schulte, I’m a senior policy officer at the Federal Ministry for Digital Transformation and State Modernization in Germany, and I’m happy to see you all here on site and online and also on the panel and on our online panel. I will briefly explain what this session is about, that we have an online moderator with us, Lars. You can ask questions here on site and online after the first round of questions and you are very welcome to ask questions and I will also give you a lot of time for that, since I know some of the people here in the room have been involved in this exercise. I’m happy to discuss with you. So what is this session about? This session is about a project of our ministry which is called Strategic Foresight. Some of you might know that Germany has published last year the first strategy for international policy ever and there were several follow-up messages about this. There was funding for the IGF secretariat which we all really much welcome, but there was also a fellowship for international digital policy for young fellows which are also around here on the IGF and there was a process for strategic foresight we will dive into on this panel. And for that I’m very much excited to have you here on stage and online. We have here Anriett Esterhusen. I’m a senior advisor for global and regional Internet governance with the Association for Progress Communications and a former MAG Chair and around at the IGF since ever. I don’t know. Yes. Yeah, so and next to Anriette there’s Gbenga Sesan Executive Director of the Paradigm Initiative and also IGF MAG leadership panel member and Also, yeah, not new to the ecosystem here I can say and Online we have Julia Pohler. She’s co-lead at the Berlin Social Center for a research group politics of digitalization and co-author of the future scenarios we are discussing here on internet governance and in that role she has been a task force lead and developed the scenarios we we will discuss here. She’s online and hope can hear us Without further ado I will give the word to our panelists and starting with Julia online. Julia, you have been a task force lead in this experiment on strategic foresight with our ministry and Maybe you can explain to the audience which might not really be aware of this project or maybe doesn’t know what strategic foresight is it really is What was your role? What did you do with the task force members? Maybe you can say also word who was on the task force and what was the outcome?


Julia Pohler: Yes, thank you, can you hear me? Yeah, I can hear you well Perfect. Thank you very much. And I’m so sorry that I cannot be with you at the IGF But it’s my son’s birthday tomorrow, and I wouldn’t miss this not even for the IGF. I’m sorry So I’m happy to join online and I’ll be I’m happy to say a few words about the process and the methodology involved Not so much about the scenarios. We can discuss them later But maybe for those who are not familiar with strategic foresight I would like to make a few points of what strategic foresight is about and then Explain them on the example of our task force and what we did So I think what is important to keep in mind when we speak about strategic foresight in the field of Internet governance or elsewhere That strategic foresight is not about predicting the exact future and I think that’s something that we also struggle with in this process So it’s really strategic foresight is a process that helps us deal with Uncertainties by exploring kind of possible possible futures So it’s more about like thinking how we could prepare ourselves for different scenarios rather than trying to kind of guess What will actually happen in the future? So by using strategic foresight, and I think that’s the motivation also of the German Ministry to launch this process. Decision makers and stakeholders can better understand certain uncertainties that there are in the world and in which direction they might develop and then prepare for disruptions before they actually happen. That brings me to my second point. The first one is it’s not about predicting the future. The second one is it’s actually about developing future scenarios. Developing scenarios that are basically stories of plausible futures. That means that these futures that we develop in these scenarios, they don’t have to be realistic. It’s very likely that none of these stories that we develop will ever happen in that way, but they need to be plausible. So in some way they could happen if certain kind of circumstances come together. So these kind of stories that we develop or these scenarios that we develop are designed to highlight in different ways how the future might unfold and then help us understand how we can, with certain actions, kind of go in one direction or the other. So for instance, in the project that we’re discussing here, which was called Strategic Foresight Internet Governance in the year 2040 and was commissioned by the German Ministry for Digitalization and Transport. So before we actually changed the name of that ministry, we created four distinct scenarios for internet governance in the 15 years, in the next 15 years. And so these four scenarios were really kind of plausible stories in which we could explore a range of possible futures, which went from the continuation of trends that we see today, kind of growth and geoeconomic and geopolitical competition, and where this leads. And the second one was more about a complete and total systemic collapse and a fragmentation of the internet in two distinct networks. And the third one was about a regulation of the digital world to a degree that everything becomes controlled in some way. And the fourth one was about a complete transformation of the internet governance structures that we have today. in a turn away from economic competitive logic towards kind of a shared commitment in promoting public goods. 
So all of these four scenarios are possible futures and none of them will happen, but they helped us kind of understand what we see as trends and how we can deal with these trends. And also, I think what’s important to keep in mind that these scenarios do not exclude each other. Parts of them could coexist, so it could happen a part of one and a part of the other scenario, but it help us discuss what is desirable and what are risks that we want to avoid and where we kind of see opportunities and where we want to go. And for this very reason, I think strategic foresight has been a methodology that has been used by international organizations, also including a lot of UN agencies and by the European Commission, but also a lot by civil society organizations since the early 2000s to kind of inform decisions, inform actions and also inform policies. And I’m sure that Henriette and Brenna will tell us more about that because they probably have used a foresight in the past too. And that brings me to my third point, speaking about civil society and other actors. So the third part, so the first one, it’s not about predicting the future. The second one, it’s about writing possible future stories. And the third one, it’s to do this in a positive participatory way. So we followed a very structured method, but we reached this method through focused discussions with experts and stakeholders from very different backgrounds, which came together to gather insights and then kind of discuss different options and perspectives that we have on where we might go in the future with internet governance. And let me explain just in a few minutes what this meant concretely for our process. I will guide you a little bit through the process that we used to develop these scenarios for the German Ministry of Digitalization. So the entire process was mandated by the ministry, but it was coordinated by the German Agency for International Cooperation. the GESET, and they also provided the method experts who really kind of helped us through this process, all the task force members, and guided us methodologically through this discussion and how we develop the scenario. And for the task force, we were 15 members who were invited and were selected to represent the kind of diverse communities that we have in Germany, in internet governance, academia, business, civil society, and the technical community. And the goal was to develop these kind of four different scenarios for the next 15 years, basically, what will happen in the next 15 years in internet governance. So I was kind of the content lead, and that also meant that I helped drafting the scenarios, but it was really a joint process between all different members of the task force. So the task force members really contributed at every stage of the scenario creation, and we collected influential factors, we discussed what the impact of these factors may be, and then, based on the methodology, drafted these four possible futures, and also in the next step really critically accessed the possibility and constantly refined the writing of these scenarios. What’s also important is that because all members of this task force were from Germany, also they represented different stakeholder groups, still was a kind of very German or European centralized view that we had in this task force. 
So what we did is that I conducted interviews with specialists from various world regions and stakeholder groups to kind of validate these draft scenarios, bring in new ideas, and bring in also more global and diverse perspectives. So Henriette and Benna were interviewed by me for this process, so that’s also how they were involved in this. And finally, what we did after we had a good draft of the scenarios and they were validated, we had a network kind of workshop in which the different members of the task force, but also a different set of experts, were invited to participate in this workshop. So this was a kind of a virtual workshop, to discuss and also use a certain method to kind of develop ideas how these scenarios, what they mean for their own kind of actions and their own planning. And as far as I know, the scenarios are also now being used by the ministry to discuss potential options for actions in the field of Internet governance. And I leave it at that.


Philipp Schulte: Thank you, Julia. That was really helpful for us all on stage and also in the audience to better understand what you did and what the German government together with stakeholders here proposed. So one important point is that my panelists here on stage were interviewed for these scenarios. So let me turn to you, Anriette Bengar. How was it for you to be interviewed in this project? Was that something familiar to you? Was it completely new? What was your experience during the interview? What were you thinking when you were reading? I mean, we come maybe later to that. The scenarios and yeah, what was your impression?


Anriette Esterhuysen: You want me to start? I have used this methodology before and it was quite interesting a long, long time ago. It was actually in South Africa in around the late 1990s, just shortly after liberation, after the first democratic government was in place. And it was being used in the context of planning for development and inclusion and actually also participative governance. And at the time, I found it immensely frustrating. And I think I wasn’t a very productive participant at all because I found the abstraction very frustrating because I knew exactly, you know, I was much younger. I thought I knew exactly what we need to do, what the problems are. And approaching it in this kind of roundabout way seemed to me, and the facilitator was from the U.S., which frustrated me even more. And I really did not find it very helpful. But now I’m much older, much wiser, and Julia is a very, very good interviewer. And also I think, you know, having been around Internet governance for a long time, I think we have become very, what’s the word, quite boring is maybe the best word, but there are more sophisticated words. I don’t think we’re being creative or innovative enough. I don’t think we’re applying critical thinking enough in how we are evolving Internet governance. So I actually found it very exciting and very interesting and enjoyed the process. I think abstraction is still an issue, and maybe we can talk about that a little bit more later. Yes, I found it really, it was a sort of stream of consciousness approach, but guided by Julia to focus on the plausible, but also not trying to think of what will actually happen, and then playing with those trajectories, but of course with the knowledge of the world that we are living in and working in. So I found it very useful, and I was very impressed actually that Germany had done this. I think my only sort of one, I would have liked to be part of a focus group or a group at some point. I think I found it, I would have found it more interesting in some ways to have a group dynamic. And then I think my only other question about it as well is the way in which you treat multi-stakeholder in how you are approaching the future of Internet governance. And I think in that sense, the study itself, I think, perhaps did not unpack or deconstruct what multi-stakeholder means. I think I would have actually possibly found it more valuable if it was scenarios of governance effective, accountable, whatever governance. Somehow I felt that the focus on multi-stakeholder became a little bit one-dimensional. You know, civil society, business, government, technical, which I think is actually one of the weaknesses in our entire ecosystem.


Gbenga Sesan: Well, it wasn’t my first, but talking to Julia was also very interesting. I do interviews a lot, ranging from research interviews, where people are hoping that you are supporting a thesis, to media interviews, where people are hoping they can pigeonhole you into a position. So it was very helpful that there was no target outcome you could sense. And I think it was very helpful to be able to think on your feet, well, maybe on your seat, to think while you’re having the conversation. I’d done this in 2007, a while ago, as part of the Desmond Tutu Leadership Fellowship, where we were trying to create scenarios for the future of Africa. It was an interesting process, because for us at the time, it was like a compromise. There were people who felt things were going to go this way, and there were people who felt things were going to go the other way. The optimists, the pessimists, and maybe a small group in between. So doing futures, possible futures, was sort of a compromise. Like everybody felt heard, and everybody felt that they saw the future in this. And what I also found interesting in this, like Anriette said, it was good that it was a government. Typically, you would have this kind of project by civil society, thinking of the future. But it was good to know it was a government. And one of the things, I don’t know if Julia remembers this, but one of the things that I was very keen on was implementation. So that as you continue to implement, you’re able to look at the scenarios and adjust. Because one of the beautiful things about possible futures is that it won’t happen exactly the same way. But when something happens close to the scenario that you’ve discussed, then you have an opportunity to either align or to run away from certain things. And I’m glad to hear that that is happening now. So for me, it was fun. There was no exact destination. There was no hidden agenda. So it was a better conversation that allowed me to think and to speak to the issues as I saw them and what I thought could happen. The other involvement I had with scenario planning, and I think that Anriette was also involved with this, was I think in 2008, was it Elon University? I don’t remember the name, talking about predicting the future. And I remember one of the things I said about the future at that time was that we will begin to find confusion between work and play. And so sometime during COVID, they sent me an email and said, oh, wow, what you said is what is happening now. And I was like, no, I did not predict the pandemic. I was just a bit scared that that was going to be something that would happen in the future. So I think it was very helpful, because when that moment came, I felt prepared because you had thought about it. And I think this is one of the beautiful things about creating possible futures.


Philipp Schulte: Yeah, thank you so much. That was super rich already, and it highlighted some of our ideas, but also some of the challenges within the process, and the outcomes also highlighted some of the challenges of the current environment, the current community. And yeah, thank you so much for your thoughts here. Julia, do you want to react directly? Otherwise I would ask you: what are your key takeaways, or the defining events, in these scenarios? What do you consider to be the defining characteristics of the multistakeholder model of internet governance, which, as was just pointed out, came across as a bit one-dimensional? I think that’s what you said. And maybe actually it’s fun that you said that after reading the reports, because that was actually our finding before starting this process. So you might have now a better basis for discussion, that’s at least our hope, but I don’t know. You are heavily thinking about the multistakeholder process and have published about it. What is your opinion?


Julia Pohler: Yeah, thanks. Well, I also really enjoyed it. Usually, as a researcher, I work in academia, and we don’t usually think about potential futures, right? We look at the past and the present. So for me, it was also a very interesting kind of exercise. And I’ve done this three times now, also in internet governance before, in different contexts. And I thought it was very interesting. Also, I really enjoyed the interviews, because they really broadened my perspective, and I learned a lot from them. But when we got to the stage where we really wrote the scenarios, and I looked at them with some distance after a while, what strikes me most is that the most important factor in almost all scenarios, maybe a little bit less in the last one, which is about a complete transformation, but in the first three, the key driving factor is actually the role of states and the role of governments. So in each scenario, the actions of states, in particular of important states, the US, China, but also Russia, and then, of course, we are from the EU, so we also looked at the EU, but the actions of particular states, including emerging powers, and the relationship between states and governments, were really the key defining factors. So geopolitics, and also geoeconomics in some way, were the main drivers for transformations in these scenarios. They are the main drivers of the future and the main key factors for the futures that we imagined. And, that is to say, we actually started writing them, and we wrote these scenarios, before President Trump took office again, and before we saw this increase of geopolitical tensions and economic competition that followed his taking office again. So I think today we would have gone even further in emphasizing the role of geopolitics and geoeconomics in these scenarios and written them even more around these kinds of tensions that we see. In some way I would even say that the actual geopolitical developments have already overtaken the scenarios that we wrote only six or eight months ago. So reality is actually moving faster than we thought it would. And it’s my assumption that had we written these scenarios 10 years ago, we would have given a much less prominent role to states and to governments and to the relationship between states. And I don’t only think so, I’m actually sure, because I did such a foresight process in 2013, 2014, 2015, and there the key drivers were actually the corporate actors and civil society. So it has changed, and I think this is also a finding about how we see the world: states and geopolitical and geoeconomic actions have become more important again. As for multi-stakeholderism, I have to share Anriette’s observation, and looking back at these scenarios with some distance, what is striking and maybe even frightening is that in none of these scenarios, except maybe for the last one, which is very different, is there a bright future for multi-stakeholderism and internet governance. So in all of these scenarios we ended up writing possible futures in which multi-stakeholder processes are either being hollowed out or completely undermined by corporate actors and state actors.
To some degree the commitment to multi-stakeholderism in internet governance remains, at least at the discursive level, but we wrote scenarios where this commitment is either only lip service, or where multi-stakeholder processes are being so institutionalized and professionalized, and becoming so predictable, that they actually lose their meaningfulness, they lose their bottom-up character and the possibility to also include voices that might diverge from the mainstream perspectives in these processes. So I would say that in all of these scenarios, somehow multi-stakeholder internet governance has outlived its promises. And I think that, since we ended up writing them not really with that intention in mind, but this is how we ended up seeing the future, that should give us some reflection on what we are doing and how we can maybe also transform our current model to make it more meaningful.


Philipp Schulte: I see a lot of nodding here. Do you want to react directly?


Gbenga Sesan: I mean, it’s, you know, I started nodding when you were talking about how fast the geopolitics have played out. And I remember part of our conversation at the time, we talked a bit about it, but I don’t think that anyone could have predicted that things would move this fast. In fact, you know, by November, there were people who were doing scenarios, I mean, within organizations, we had to do some planning at Paradigm Initiative in November, you know, but there wasn’t that sense. And I think this is the relationship between insight and scenario planning. And this is why adjusting as you go is critical. So there are things that you sort of plan for and you dial them to a level seven, and they get to a level nine, and you have to tell yourself, listen, we can’t, I mean, it would be insanity for you to then take the actions you planned for level seven when you are at a level nine. I would say for multi-stakeholderism right now, not only is it not living up to some of the lofty, you know, definitions and branding it had, it’s also being threatened for that very reason: there are people who are then saying, yes, we’ve talked about the ideal multi-stakeholderism where everyone is an equal partner, of course we know not everyone is equal around the table, it hasn’t worked, so let’s try this other less perfect but pragmatic model. And that itself is a challenge. Two things for me: one is, yes, we must adjust, but we must also never lose that opportunity to dream of, to wish for, a better scenario. It won’t be perfect. We have to adjust, we have to be realistic, but we shouldn’t move from optimism straight to pessimism. We should maintain a healthy dose of realism and say that some things may not be working now, but it is still possible to get things to become better.


Anriette Esterhuysen: Let me comment on what you said emerged about the role of states. I think absolutely, and that’s not a surprise to me. In fact, what is a surprise to me is that there’s still reluctance to talk about enhanced cooperation in this space, which is, you know, one of those words not to be named, words not to say. Because the reality is that how states engage or do not engage with one another has profound impact on how inclusive governance is, how strong civil society can be, to what extent human rights are respected or democratic institutions are able to grow and play their role. So much as we like this, I don’t know, there’s this kind of fairytale notion of multi-stakeholder governance as this alternative dimension of perfect governance. I mean, I see it as a way of arriving at more accountable, inclusive, effective governance. And states are a big part of that. I think what the multi-stakeholder approach gives us is a way of really putting on the table that states cannot do this on their own. And if they do it on their own, they’re probably not going to do it very well. But that doesn’t mean that states do not still have quite a profound role. And I think the other thing that multi-stakeholder also gives us, or the way in which the IGF and internet governance has evolved, is the fact that it’s a diverse ecosystem. Internet governance has many types of decision-making processes, types of development and standard-making processes. Some of them might be led by governments, some of them could be completely technical community-driven, and some might be more society-driven or private-sector-driven. I think what the multi-stakeholder approach gives us is the constant reminder that we need to connect these with one another and that they need to overlap and engage with one another. But it doesn’t mean that there’s this new sort of amorphous multi-stakeholder ideal which has to operate across the board. So I do think it’s interesting that the role of states is important, and I don’t think we should feel that that undermines the ideals that we are striving for in this space, which is to have inclusive and participative governance that achieves good results, public interest results.


Philipp Schulte: Yeah, I guess I couldn’t agree more here. I mean, to the digital world, or to the internet, the state was maybe a foreign player for a long time. And now, I mean, I share the observation here that the state, and also the ministries, show up more and more. They show up more to the IGF, but they show up more to ICANN, they show up more even to the IETF right now. And they get involved, and I think that might be a natural process, since the state was reluctant to show up compared to other political areas or fields of politics. So that might be, I don’t know, it can be healing, because as you said, the state is still important and can play a role. And this also triggered a bit our thinking about what our role is and what a good role for us would be. And when I think about it, I think the responsibility of the state is more like this: there’s a garden of multi-stakeholders, and maybe the responsibility of the state is to make sure that all flowers, all the different stakeholder groups, can perform in the role they want to perform and perform best. And maybe these scenarios can help the different groups; that was one idea behind it. So this leads me to my next question. I mean, the report is not published yet, but it will be published at some time. Is that something you could use in your daily work?


Gbenga Sesan: Absolutely. Not just because I contributed to it, but one of the things I was very keen to see was how all the ideas would come together to define what the scenarios would be. I think, well, at the risk of giving you more work, I think it needs to be updated very quickly with some new realities, maybe like an addendum or an annex or something like that. And I believe that that’s something that we will do, most likely. Anyone who picks up that report and looks at the scenarios will be able to situate those scenarios in our current reality. So some of the geopolitics that we talked about at the time, it wasn’t as deep as some of the experiences we have right now. So I can imagine that it will be at least a starter for conversations. But absolutely, I think this is something that will be useful, not just the content of it, but also the principle behind it. The principle of creating possible futures and adjusting your strategy as you continue to see what has emerged and how close it is to the possible futures that you predicted. One key role was the role of the state, but another key role is the role of technology. And so, Anriette, you worked a lot on connectivity and worked with a lot of technologies here also in this area.


Philipp Schulte: What’s your assessment of the role of technology in these reports or also in real life? And what can we learn from all the technologies, how they were implemented, how they were introduced for new technologies? And where do you see the dangers and opportunities?


Anriette Esterhuysen: You’re taking me away from scenarios and foresight now to reality, you know, in the present. I mean, I think that one of the things that we need to do, and I think one of the strengths of the report, is that it does allow us to think of technology both as a force that has impact on its own, as well as a sector that interacts with geopolitical conflicts, with different forms of societal change and organization. I mean, you were also going to ask me at one point, when I was so actively involved in trying to build Internet connectivity in Africa in the sort of 80s, 90s, early 2000s, what our hopes were. And I think this is also a shift from WSIS to WSIS Plus 20. There was very much a belief, naive, obviously, that access to technology, and particularly access to communications technology, would be an equalizer. That it would be an equalizer between rich and poor, the centre and the periphery, men, women, non-binary people; that it would be this set of tools and processes that creates engagement and cooperation. And of course, it didn’t quite pan out that way, but that is still part of what technology gives us. So I think the hard part about foresight, but also the interesting part, is to look at how this complex way in which individuals and societies engage with technology and are changed by technology will play out in different scenarios. And maybe that’s also one of the reasons why the role of states emerged as important, because I think when faced with unpredictability, there is also, I guess, a tendency to look at who are the institutions, in this context of unpredictability and insecurity, that have the capacity and the responsibility to make sure things don’t go wrong. And I guess that’s also naive, because we also know that both corporations and states are unpredictable themselves. So I’m not giving you a good answer here. So I’m going to actually answer the question you asked, Gbenga, which is: is this useful in my work? Not particularly, although I think it could be. Is the report useful? I’m not sure if the report will be useful. I think the exercise is enormously useful. I think participatory processes like this are very valuable to the people that are part of them. So I think to make the report useful, you’d have to find a way of using it in a context where people are actually able to discuss and think about it and engage with those scenarios. And then I think it could be very useful, because I think we do need to think more creatively. And I’m just going to give one example. We probably don’t have much time, but this year, for those of you who don’t know, you’ve probably all heard so much about the World Summit on the Information Society, by the way, the action line on enabling environment, that’s what governments are supposed to do, create an enabling environment. But when the WSIS was reviewed by the Commission on Science and Technology for Development, which is a UN body, part of ECOSOC, it was shortly after the US government had taken a position of not wanting to support the sustainable development goals or use the concepts of development and sustainability. It was also shortly after the US had pronounced that gender is biological and that there are just two sexes. And so these featured in the negotiations around the WSIS, where people were talking about: have we got digital inclusion? Is there security? Are we achieving development goals?
And it was really, I was there as a civil society participant, and to see the European states in particular, shell-shocked, because it was so difficult for them to operate in this context, when a long-time partner in the Internet governance and World Summit process, the US, was moving outside or taking on a different position. My first thought during that entire week was, I wish these governments had all done some foresight work. And maybe if they had, they’d actually be able to take advantage of this shift, be creative, form new alliances. And I think that’s why, certainly for diplomats, I think certainly for governments, anyone who is involved in negotiation in a geopolitical or even in a multi-stakeholder context, I think it’s a very, very useful technique to use.


Philipp Schulte: You gave us a lot of homework here. Speaking about time, we still have some time, and I’m happy to take questions from the audience. So if you have already prepared a question, please line up here. We have a mic here. Otherwise, we are also able to take online questions, and we are more than happy if you join the discussion. Otherwise, I will pick up on another point you mentioned, the level of abstraction, like, the reports are abstract. Oh, we have one already. Yeah, please introduce yourself and…


Audience: Yes, good morning. My name is Professor Roberta Haar, I’m at Maastricht University, and I was on a panel on day zero. I’m also leading a Horizon project, Remit Research, and I encourage you all to look at it. And I’m very excited about the work that you’re doing, because part of what we’re trying to develop is something called scenario testing workshops. And in those workshops, we’ve also developed games, and we developed them with the Joint Research Centre at the EU Commission. They also have this scenario exploration system. You’re shaking your head, so I guess you’re aware of it. And so we have taken their system, used the data from our research, and developed scenario games. And we’ve already played the first one, which was on military AI, at Erasmus University in Rotterdam, and we had extremely good results. And we did exactly what Anriette said: we took the data and we brought it to people to play and discuss. And we had four scenarios that we have been developing with our different data. And we still have four workshops to go. We will have one in Rome in April, so April of next year obviously, one in Helsinki in September, and then we’ll also have a summit in Brussels. And I know Anriette is nodding along because she’s on our supervisory board; she also pointed me here to come today. So my first question is: is the report accessible? And you already answered that question. But my next one is: can we also adapt your data, and maybe collaborate in taking your data to the next step, into a game, so that we can integrate it and then indeed have policy ideas, invite policy stakeholders to our games and play it? So I’m hoping that I’ve noted all your details down. I want to write to Julia, and I’m hoping that we can maybe have some collaborative work there. Is that something that you find interesting? So thank you.


Philipp Schulte: I guess I have to take the question first, but I’m happy to step in. So on the report, yes, it’s true, it’s not published yet. As you might know, we are in a government transition period in Germany, and we set up a new ministry. But this new ministry will also be responsible for strategic foresight, so that’s a lucky coincidence in this case, and so we are really optimistic that we can proceed at some level with this report, and also with the methodology for sure, and also with the work we have done. However, one idea behind it was that this report is not only for the government but also for all stakeholders, so I’m happy to reach out to all involved in the project. And I know that some civil society organizations in Germany, for example Wikimedia, are already taking the work and trying to work with the reports and with the methodology. So I’m happy to connect you. Julia, do you want to add anything?


Julia Pohler: No, you just mentioned what I would like to mention. I mean, I know about the Remit project, I went to one of your conferences last year discussing multilateralism and multi-stakeholderism, and I would be happy to also be involved. I think we would have to go through the ministry to confirm this, since they probably have some kind of control over the material we produce, but I think it would be very helpful to take this further and develop it into a game, which would be fun. And I think that also connects to what I wanted to say about stakeholder engagement during this exercise, maybe I can come to that later, because it is also a challenge to keep people involved in these kinds of exercises. And I think bringing it to another format could also be very helpful in the learning process for us all, and show how we can do this differently as well, to maybe make it more fun for everybody involved, make the output more meaningful, and take the output to something that can then be used elsewhere. Yeah, Bertrand?


Audience: Good morning, my name is Bertrand de la Chapelle, I’m the executive director of the Internet and Jurisdiction Policy Network. Two things. One, first of all, congratulations to the German government for having undertaken this, because I think it’s a perfect place and venue for discussing also the meta level of how our institutions are going. The tool of scenarios or foresight is definitely a good one. I am extremely frustrated that you cannot present this, because it would have been perfect to build the session around it. So I’m waiting impatiently for the release of the scenarios of the foresight report. The second thing is precisely about these kinds of exercises. They are extremely important. We know the limitations of those things. You know the benefit of engaging the people. It’s mostly the process of developing those things that is the most interesting, because it allows people to express what they see as the trends, what they see as the drivers, positive or negative. However, there are things that are always extremely difficult to anticipate in those environments. Call them the black swans or the unexpected events. For those of us who are old enough, we can remember that when the World Wide Web emerged, everybody was talking about America Online and the domination of America Online, and how the future of electronic communications was going to be those mammoth companies or the telcos. And then something happened on the side. I want to keep faith in the fact that the multistakeholder spirit, not the model, because there is no such thing as a multistakeholder model, but the multistakeholder spirit, not only will be alive, but that it will ultimately permeate everything, because the reality is, today, because of those geopolitical tensions, we are seeing more than ever that governments together cannot solve those problems. I want to highlight, and I’ve said that in other sessions, that in the 20 years since the WSIS there hasn’t been one single agreement among all governments on digital issues, except a cybercrime convention sponsored by one of the countries that is the most present behind cybercrime. That’s the ultimate irony of the limits of the multilateral system, which has to be preserved, don’t get me wrong, the states are absolutely fundamental, but our inability so far to bring the different actors around the table in environments like the IGF and other venues is one of the reasons why we’re struggling to address those problems. At this juncture, with this exercise about scenarios, we need to also think a little bit more about what we want, not only what the trends are. And to finish, the WSIS plus 20 process at the moment is entirely focused on producing another resolution in December. There’s one thing that it should say and set the stage for, which is: what is going to be the future of the IGF? When and where do we discuss, in 2026, the evolution of the mandate and the evolution of the structure of the IGF? And as you’re discussing scenarios, thinking about the institutional arrangements internationally is a core follow-up, I think, for what you’ve been trying to achieve.


Philipp Schulte: So I couldn’t detect a real question there, but it was certainly thought-provoking.


Gbenga Sesan: Thanks a lot for that, and I underlined want here, when you were saying we should keep in mind what we want, and that’s something I was speaking to earlier when I said there is reality, there is history, there is data, but there is also the desire that we have, and we may be faced with challenges, but we need to come to the table with an ideal scenario that we want. What do we want? Because the challenge is, if you’re frustrated by history, historical data, if you’re frustrated by some scenarios that paint a bleak future, then there’s no point. We might as well just throw up our hands and say, let’s sit down and watch the TV, but if there is something that we want, what this brings to mind for me is if you’re running or sailing or flying against the wind, you could either submit to the direction of the wind, which then means you will go anywhere the wind takes you, or you could drive against the wind. My mathematics interest comes into play here. You think of the velocity to use, you think of the direct angle of inclination, so that worst-case scenario, you will not be pushed away and you will end up where you want to go. I think it’s really important that we know what we want, and knowing what we want has to come from everyone on the table. It cannot be what the government wants. It cannot be what only one stakeholder wants. Of course, we all come with what we want. We have conversations. In some cases, we’ll have consensus, and we will come together and agree on some things, but it is absolutely important, for the want of another phrase, to just keep dreaming.


Anriette Esterhuysen: I’ve known Gbenga since he was very young, and sometimes he makes me feel very old and sometimes not, but today you make me feel very old. Because I think that, of course, we have to dream, but it’s not just about dreaming. It’s about concrete things. What is the WSIS all about? It’s about a people-centred, development-oriented, human rights-oriented information society, where people can use technology to improve their lives. To me, that’s more important than having an IGF, frankly, but I believe we need the IGF to get there, and I do agree with Bertrand that we have to renew the IGF. And I think that’s actually an interesting point about the foresight exercise as well. All of those scenarios, as Julia has said, depict a not very positive picture of multi-stakeholder governance, which I think we should interpret as a real indicator that we need the IGF, and we need forums like the IGF. For me, the important thing, though, is that it is an IGF which allows the wind in and doesn’t close all the windows so that we can sit in our sort of safe, comfortable, multi-stakeholder space. Because I think, Gbenga, the reality is we don’t all want the same thing, and we’re not always going to have consensus. That doesn’t mean that we shouldn’t be in active, open conversation with one another. So for me, an IGF that actually allows us to tackle the big issues.


Julia Pohler: …to do this kind of process, or even take what we did now and move it forward, and see why this matters also to the members who were tasked, with me, to write this report, and show them the real, clear benefits. And I think one of the ideas on how to make this more meaningful would be to actually see what, for example, the German government, which mandated this process, is now doing with the ideas we developed, and how these ideas actually help people within the ministry, people within the government, to figure out what they want, as you just said, or figure out what they don’t want, and how this impacts what the government should be doing and should not be doing. And I think that’s the kind of opaque part for us, and this is also meant as a criticism of our task force itself, because we had difficulties picturing where we were going. It’s not meant as a criticism of the process itself, because I also know how it was organized, we had a government change in between, and the funding was also meant to engage stakeholders with each other. But I think what would be really helpful is to actually have people from the ministry in the task force next time you do this, or when you take this to the next step, because it would be extremely interesting to actually have people talking to each other about where this leads us and where we would want to go. And I think it would also help the task force members better understand what their contributions are leading to, and it would help the government that is mandating this process better understand where these ideas are coming from, what kind of competing visions are behind them, what kind of competing perspectives, or even compatible perspectives, are behind them. So I think this is one of the ways, I would say, to take this forward or make it better in the next round.


Philipp Schulte: Yeah, those are valid points, and it’s good to hear that, because we were a bit reluctant to be on this task force ourselves. We didn’t want it to be a government-steered process, with scenarios written by stakeholders and then used as lip service. So we deliberately did not get into the scenario writing ourselves. I mean, and I agree that it was a lot of work, and this was also a reason why the stakeholder group was mainly people from Germany or Europe, because otherwise it would have been even harder to bring them all together in Berlin two to three times and to work on the scenarios. So there were some restrictions, as I said, but our hope is that with the new government and with the new responsibilities in the ministry we can learn from this process and take it to the next level. But coming back to my original question about implementation and alternatives, what is your…


Gbenga Sesan: So I can go back to this word want, and of course I agree with you on starting, at times, from what we don’t want. In fact, in itself, knowing what you don’t want is like knowing what you want: I want not to have what I don’t want. And we had this conversation on the leadership panel, you know, we were inaugurated in August 2022 and we had a lot of conversations, and it almost always ended with: we don’t want this, we don’t want that. Many of you, I hope, have seen the Internet We Want paper that the leadership panel put out. That was the idea behind it. We have to at some point define certain things. There are certain things that we agree on. We don’t all agree on everything, but there are certain things that people will not feel too strongly against, and we could start with that. The Internet We Want paper talks about certain things that I’m sure some people will read and say, hey, rights online, maybe we don’t want that, but at least it is out there as something that certain stakeholders and the majority of people desire, you know, to have. So I think it’s absolutely important. Yes, optimism, yes, dreaming, but also putting down in clear terms what we want. Because at times what that does is, when you go into situations and you see reality, you can then say: this is the reality, this is what I want, and your task, your action, is then creating a pathway between where you are at and where you want to be. If I just wanted to face reality, I would definitely resign from my job right now. I mean, I work on a continent talking about digital rights and inclusion, where every other conversation I have with governments in the region is about clampdowns, or about explaining away, you know, clampdowns that they have carried out. But it helps that we know that this is the desired destination, this is where we’re at, and this is the tough work we have to do to get from point A, which is where we’re at, to point B, where we’d like to be.


Anriette Esterhuysen: You know, I think that sometimes we say what we want, and particularly when we try and say it in a multi-stakeholder way, it sounds like some kind of watered-down set of wedding vows or whatever, I can’t think of a good analogy. I mean, I want fair tax payment by big tech, so that countries who need revenue can actually build a fibre-optic backbone, so that there’s feasible, reasonable internet for institutions, for universities in a country, to be able to have some access to resources. I want data flows that are not based on an extractive, sort of colonial-type model, you know. I also want competition in the private sector, and I want local private-sector operators in developing countries. There are lots of things I want that I think will create an enabling, open and inclusive internet, but it’s almost impossible to say those things in the context of so many multi-stakeholder fora, because you don’t want to offend the private sector, you don’t want to offend governments that shut down the internet, you know, you don’t want to talk about the great firewall of this or that country. And I think we have to be willing to use this sort of multi-stakeholder modality with a little bit more courage. I think it will help us get there, but I do have a concrete suggestion for the IGF, because I think this methodology is so powerful. I think one of the things that makes multi-stakeholder fora, or the IGF as it has evolved, maybe also more difficult, is that it is now much more about institutions than about individuals. I mean, if Philipp had come to an IGF 15 years ago, he might have just been there as an individual, rather than as a representative of the German government. Now, there are pros and cons both ways, but I think if you could maybe collaborate with Roberta and her team, and come up with a game that at the next IGF we play not in rooms like this, where we sit here and talk and you all sit there and listen, but where we actually engage with one another in an interactive way, and everyone participates in thinking about foresight and changes. And there’s no reason why you can’t do that in a room with 500 people, actually. There are methodologies that allow that. So, I’d like to see a redesigned IGF, a redesigned and braver IGF, redesigned in terms of making it much more participative and innovative in the methodologies we use for our sessions, and braver in the sense of being more willing to actually ask difficult questions around which there’s not going to be consensus.


Philipp Schulte: Absolutely. I think that’s a really good proposal, to have not only workshops and lightning talks, but also games. That might be a really good new session format for the next IGF. Are there any other questions in the audience or online? I’m happy to take them now. Otherwise, I would invite my panelists to give their final remarks. And partly you have answered them already, but you might summarize it and make it a bit more precise, so I can write it down. So, you articulated wishes for the IGF, but you might also articulate wishes for the German government or other governments, if we were now to get funding for another process. What are your three main wishes? What should be the outcome? And would you support us? Gbenga?


Gbenga Sesan: You want me to start this time? Why don’t we let Julia start?


Philipp Schulte: As you want.


Julia Pohler: You want me to start?


Philipp Schulte: Yes, please.


Julia Pohler: Okay. That’s a tough question when I have to think about what the German government should be doing. I was actually thinking about what I would like to have this kind of exercise on. But yeah, as Anriette just said, maybe we have to be a bit more courageous in tackling the elephants in the room. So one of the things I would like to see a foresight exercise on is the practices of big technology companies in creating digital barriers and closed ecosystems. Because I think there’s a lot of talk recently about the potential fragmentation of our digital space due to governments and government regulation and digital sovereignty, and a lot of fear related to this. Much of it is coming out of a particular idea that we need a certain kind of digital space that is even free from governments. I think we had this discussion just now. But I would like to see more attention being paid to how the dominant business models of our current platform economy fragment our online spaces and lead to many of the phenomena that Anriette also just mentioned. So I think that would be one of the issues that should be tackled. Whether the German government is in a position to do this, I don’t know. But maybe we have to be courageous.


Anriette Esterhuysen: I support what Julia just said, and because the role of states emerged as so important in the exercise, I think it would be interesting to have some activity that looks at the role of states, but in a more creative way, not just looking at, you know, digital services, digital markets. I mean, often I feel governments feel that in their toolbox there’s basically repression and regulation. In fact, governments have a huge toolbox that they can use, they can do so many good and engaging things. And exactly, as Philipp is saying, this kind of thing, but maybe to use this in the IGF context, perhaps to work with other governments on what it really is that governments can do to help us enable what the multistakeholder ideal represents, which is inclusion, accountability and creativity. In other words, instead of governments always being kind of the silent partner, or sometimes the problematic partner, in this multistakeholder journey.


J

Julia Pohler

Speech speed

175 words per minute

Speech length

2908 words

Speech time

991 seconds

Strategic foresight creates plausible future scenarios rather than predictions to help prepare for uncertainties

Explanation

Julia explains that strategic foresight is not about predicting the exact future but rather a process that helps deal with uncertainties by exploring possible futures. It’s about thinking how to prepare for different scenarios rather than trying to guess what will actually happen.


Evidence

The German task force created four distinct scenarios for internet governance in 2040, ranging from continued geoeconomic competition to complete systemic collapse and fragmentation


Major discussion point

Strategic Foresight Methodology and Process


Topics

Legal and regulatory


Agreed with

– Anriette Esterhuysen
– Gbenga Sesan

Agreed on

Strategic foresight methodology is valuable for participatory processes and creative thinking


Disagreed with

– Anriette Esterhuysen

Disagreed on

Level of abstraction in foresight methodology and its practical utility


The German task force developed four distinct scenarios for internet governance in 2040 through structured participatory discussions

Explanation

Julia describes how 15 task force members from diverse German communities (academia, business, civil society, technical community) worked together to develop scenarios. The process involved collecting influential factors, discussing impacts, and drafting four possible futures for the next 15 years.


Evidence

The four scenarios covered: continuation of geoeconomic competition trends, complete systemic collapse and internet fragmentation, total regulation and control of the digital world, and complete transformation away from economic competitive logic toward public goods


Major discussion point

Strategic Foresight Methodology and Process


Topics

Legal and regulatory


Geopolitics and state actions emerged as the primary driving factors across most scenarios, more than anticipated

Explanation

Julia notes that the most important factor in almost all scenarios was the role of states and governments, particularly major powers like the US, China, Russia, and the EU. Geopolitics and geoeconomics were the main drivers for transformations in the scenarios they developed.


Evidence

The scenarios were written before President Trump took office again and before increased geopolitical tensions, yet state actions still emerged as key factors. Julia believes they would have emphasized geopolitics even more if writing today


Major discussion point

Role of States in Internet Governance


Topics

Legal and regulatory


Agreed with

– Anriette Esterhuysen
– Gbenga Sesan

Agreed on

Geopolitical developments and state actions are increasingly important drivers in internet governance


Current geopolitical developments are moving faster than the scenarios predicted, with increased state involvement

Explanation

Julia observes that actual geopolitical developments have already overtaken the scenarios written only 6-8 months ago, with reality moving faster than anticipated. She contrasts this with a 2013-15 foresight process where corporate actors and civil society were the key drivers instead of states.


Evidence

The scenarios were developed before recent geopolitical tensions escalated, and Julia notes that 10 years ago in a similar process, corporate actors and civil society were the main focus rather than states


Major discussion point

Role of States in Internet Governance


Topics

Legal and regulatory


Agreed with

– Anriette Esterhuysen
– Gbenga Sesan

Agreed on

Geopolitical developments and state actions are increasingly important drivers in internet governance


All scenarios except one showed a bleak future for multistakeholderism, with processes being hollowed out or institutionalized beyond meaning

Explanation

Julia explains that in none of the scenarios except the complete transformation one was there a bright future for multistakeholder governance. The scenarios depicted futures where multistakeholder processes are either undermined by corporate and state actors or become so institutionalized that they lose their bottom-up character and meaningful participation.


Evidence

In the scenarios, commitment to multistakeholderism either becomes lip service or processes become so predictable and professionalized that they lose the ability to include divergent voices from mainstream perspectives


Major discussion point

Future of Multistakeholder Governance


Topics

Legal and regulatory


Agreed with

– Anriette Esterhuysen
– Gbenga Sesan

Agreed on

Current multistakeholder governance faces significant challenges and needs renewal


Future foresight processes should include government representatives directly in task forces for better implementation

Explanation

Julia suggests that having people from the ministry directly in the task force would be extremely helpful for future exercises. This would allow better dialogue about where the scenarios lead and help both task force members understand their contributions and help government understand the competing perspectives behind the ideas.


Evidence

Julia notes the current process had difficulties with task force members picturing where they were going, and there was an opaque part regarding how the government would use the developed ideas


Major discussion point

Strategic Foresight Methodology and Process


Topics

Legal and regulatory


Agreed with

– Anriette Esterhuysen
– Audience

Agreed on

Future processes should be more interactive and participatory


Disagreed with

– Philipp Schulte

Disagreed on

Government participation in foresight task forces


Digital barriers created by big tech companies fragment online spaces as much as government regulation

Explanation

Julia argues that there should be more attention paid to how dominant business models of the current platform economy fragment online spaces. She suggests this creates barriers similar to those created by government regulation, challenging the idea that digital spaces need to be free from all government involvement.


Evidence

Julia notes there’s much discussion about potential fragmentation due to governments and digital sovereignty, but less attention to how big tech business models create similar fragmentation effects


Major discussion point

Technology’s Impact and Implementation


Topics

Economic | Legal and regulatory


Future exercises should examine big tech business models and government enabling roles more courageously

Explanation

Julia suggests that future foresight exercises should more courageously tackle issues like the practices of big technology companies in creating digital barriers and closed ecosystems. She acknowledges uncertainty about whether the German government is positioned to do this but emphasizes the need for courage in addressing these topics.


Major discussion point

Practical Applications and Future Directions


Topics

Economic | Legal and regulatory


A

Anriette Esterhuysen

Speech speed

155 words per minute

Speech length

2510 words

Speech time

969 seconds

Foresight exercises are valuable for the participatory process itself, allowing creative thinking beyond current constraints

Explanation

Anriette explains that while she initially found foresight methodology frustrating in the 1990s, she now sees its value for encouraging creative and innovative thinking in internet governance. She believes the abstraction helps move beyond current boring approaches and enables critical thinking about governance evolution.


Evidence

Anriette contrasts her earlier frustrating experience with foresight in post-apartheid South Africa with her current appreciation, noting that internet governance has become ‘boring’ and needs more creativity


Major discussion point

Strategic Foresight Methodology and Process


Topics

Legal and regulatory


Agreed with

– Julia Pohler
– Gbenga Sesan

Agreed on

Strategic foresight methodology is valuable for participatory processes and creative thinking


Disagreed with

– Julia Pohler

Disagreed on

Level of abstraction in foresight methodology and its practical utility


States have always had profound impact on governance inclusivity and should not be seen as undermining multistakeholder ideals

Explanation

Anriette argues that how states engage with one another has profound impact on inclusive governance, civil society strength, human rights respect, and democratic institutions. She sees the multistakeholder approach as a way to remind that states cannot govern alone, but this doesn’t diminish states’ important role.


Evidence

She points to the reluctance to discuss ‘enhanced cooperation’ and notes that multistakeholder governance should connect diverse decision-making processes rather than create a uniform alternative dimension


Major discussion point

Role of States in Internet Governance


Topics

Legal and regulatory | Human rights


Agreed with

– Julia Pohler
– Gbenga Sesan

Agreed on

Geopolitical developments and state actions are increasingly important drivers in internet governance


The multistakeholder approach should focus on connecting diverse decision-making processes rather than creating uniform governance

Explanation

Anriette explains that internet governance is a diverse ecosystem with many types of decision-making processes, some led by governments, others by technical communities or private sector. The multistakeholder approach should remind us to connect these processes and ensure they overlap and engage with one another.


Evidence

She contrasts this with the ‘fairytale notion’ of multistakeholder governance as a perfect alternative dimension, arguing instead for practical connection of existing diverse governance processes


Major discussion point

Future of Multistakeholder Governance


Topics

Legal and regulatory


Early hopes that technology would be an equalizer between rich and poor have not fully materialized

Explanation

Anriette reflects on the naive belief from the WSIS era that access to communications technology would be an equalizer between rich and poor, center and periphery, and different genders. While technology still provides some of these benefits, it didn’t pan out as expected.


Evidence

She references her work building internet connectivity in Africa in the 1980s-2000s and the shift from WSIS to WSIS Plus 20, noting the belief that technology would create engagement and cooperation


Major discussion point

Technology’s Impact and Implementation


Topics

Development | Human rights


Technology interacts complexly with geopolitical conflicts and societal changes rather than operating independently

Explanation

Anriette emphasizes that technology should be viewed both as a force with its own impact and as a sector that interacts with geopolitical conflicts and different forms of societal change. The challenge is understanding how individuals and societies engage with and are changed by technology in different scenarios.


Evidence

She notes that when faced with unpredictability, there’s a tendency to look for institutions with capacity to prevent things from going wrong, but corporations and states are also unpredictable


Major discussion point

Technology’s Impact and Implementation


Topics

Legal and regulatory | Sociocultural


The IGF needs renewal and redesign to tackle difficult questions without seeking consensus on everything

Explanation

Anriette calls for a redesigned and braver IGF that is more participative and innovative in its methodologies, and more willing to ask difficult questions where there won’t be consensus. She argues that the reality is stakeholders don’t all want the same thing and won’t always have consensus.


Evidence

She suggests using scenario games with 500 people in interactive formats rather than traditional panel discussions, and notes the need for an IGF that ‘allows the wind in’ rather than staying in a safe multistakeholder space


Major discussion point

Future of Multistakeholder Governance


Topics

Legal and regulatory


Agreed with

– Julia Pohler
– Gbenga Sesan

Agreed on

Current multistakeholder governance faces significant challenges and needs renewal


Disagreed with

– Gbenga Sesan

Disagreed on

Approach to defining stakeholder goals – dreaming vs. concrete specificity


The IGF should adopt more participative methodologies including scenario games rather than traditional panel formats

Explanation

Anriette proposes collaborating to create games for the next IGF that would engage participants interactively rather than having traditional sessions where panelists talk and audiences listen. She suggests there are methodologies that allow participative engagement even with 500 people.


Evidence

She contrasts current IGF format where people sit and talk while others listen with proposed interactive methodologies that would allow everyone to participate in foresight and change discussions


Major discussion point

Practical Applications and Future Directions


Topics

Legal and regulatory


Agreed with

– Julia Pohler
– Audience

Agreed on

Future processes should be more interactive and participatory


G

Gbenga Sesan

Speech speed

163 words per minute

Speech length

1762 words

Speech time

646 seconds

The methodology helps organizations adjust strategies as reality unfolds compared to predicted scenarios

Explanation

Gbenga explains that one of the beautiful things about possible futures is that when something happens close to a discussed scenario, organizations have the opportunity to either align with or move away from certain developments. He emphasizes the importance of adjusting strategies as scenarios unfold rather than rigidly following original plans.


Evidence

He gives the example of planning for ‘level seven’ scenarios but needing to adjust when reality reaches ‘level nine,’ noting it would be insanity to take actions planned for level seven when at level nine


Major discussion point

Strategic Foresight Methodology and Process


Topics

Legal and regulatory


Agreed with

– Julia Pohler
– Anriette Esterhuysen

Agreed on

Strategic foresight methodology is valuable for participatory processes and creative thinking


Multistakeholder governance faces threats from those seeking more pragmatic but less inclusive models

Explanation

Gbenga notes that because multistakeholder governance hasn’t lived up to its ideals of equal partnership, some people are advocating for less perfect but more pragmatic models. He sees this as a challenge because while adjustment is necessary, there’s a risk of moving from optimism straight to pessimism without maintaining realistic hope.


Evidence

He acknowledges that not everyone is equal around the multistakeholder table and that the ideal hasn’t worked perfectly, but argues for maintaining a healthy dose of realism while believing things can become better


Major discussion point

Future of Multistakeholder Governance


Topics

Legal and regulatory


Agreed with

– Julia Pohler
– Anriette Esterhuysen

Agreed on

Current multistakeholder governance faces significant challenges and needs renewal


The scenarios can serve as conversation starters but need updating with current geopolitical realities

Explanation

Gbenga believes the report will be useful as a starter for conversations and appreciates both its content and underlying principles. However, he suggests it needs quick updating with new realities, possibly through an addendum, because geopolitical developments have moved faster than anticipated in the scenarios.


Evidence

He notes that geopolitics discussed in the scenarios wasn’t as deep as current experiences, and emphasizes the principle of creating possible futures and adjusting strategies as developments emerge


Major discussion point

Practical Applications and Future Directions


Topics

Legal and regulatory


Stakeholders must maintain optimism and define what they want while being realistic about challenges

Explanation

Gbenga emphasizes the importance of coming to the table with desired scenarios and ideals, even when faced with frustrating realities. He uses the analogy of sailing against the wind – you can either submit to the wind’s direction or calculate the right approach to reach your desired destination despite opposition.


Evidence

He references his work on digital rights in Africa where conversations with governments often involve clampdowns, but having a clear desired destination helps create pathways from current reality to goals. He also mentions the ‘Internet We Want’ paper from the leadership panel


Major discussion point

Practical Applications and Future Directions


Topics

Human rights | Legal and regulatory


Disagreed with

– Anriette Esterhuysen

Disagreed on

Approach to defining stakeholder goals – dreaming vs. concrete specificity


A

Audience

Speech speed

152 words per minute

Speech length

890 words

Speech time

351 seconds

Reports should be made accessible and used in interactive formats like games for broader stakeholder engagement

Explanation

Professor Roberta Haar from Maastricht University proposes collaboration to adapt the scenario data into games for policy stakeholder engagement. She describes their successful experience with scenario testing workshops and games developed with the EU Commission’s Joint Research Center, suggesting this approach could make the German scenarios more interactive and useful.


Evidence

She provides examples of their scenario games on military AI played at Erasmus University with extremely good results, and mentions upcoming workshops in Rome, Helsinki, and Brussels


Major discussion point

Practical Applications and Future Directions


Topics

Legal and regulatory


Agreed with

– Julia Pohler
– Anriette Esterhuysen

Agreed on

Future processes should be more interactive and participatory


Governments need enhanced cooperation mechanisms as they cannot solve digital issues alone

Explanation

Bertrand de la Chapelle argues that geopolitical tensions demonstrate that governments, even acting together, cannot solve digital problems, noting the irony that the only agreement reached among all governments on digital issues in the 20 years since WSIS was a cybercrime convention sponsored by a country heavily involved in cybercrime. He emphasizes the need for multistakeholder environments to bring different actors together.


Evidence

He points to the lack of government agreements on digital issues except the cybercrime convention, and notes the limitations of the multilateral system while emphasizing the fundamental importance of preserving states’ role


Major discussion point

Role of States in Internet Governance


Topics

Legal and regulatory | Cybersecurity


P

Philipp Schulte

Speech speed

150 words per minute

Speech length

1668 words

Speech time

663 seconds

States should act as enablers ensuring all stakeholder groups can perform their roles effectively

Explanation

Philipp suggests that the state's responsibility in the multistakeholder environment is like tending a garden: ensuring that each stakeholder group can take on the role it wants to play and performs best. He sees the state's role as facilitative rather than directive in multistakeholder governance.


Evidence

He uses the metaphor of the state as a gardener in a garden of multistakeholders, responsible for creating conditions where all groups can flourish in their respective roles


Major discussion point

Role of States in Internet Governance


Topics

Legal and regulatory


Disagreed with

– Julia Pohler

Disagreed on

Government participation in foresight task forces


Agreements

Agreement points

Strategic foresight methodology is valuable for participatory processes and creative thinking

Speakers

– Julia Pohler
– Anriette Esterhuysen
– Gbenga Sesan

Arguments

Strategic foresight creates plausible future scenarios rather than predictions to help prepare for uncertainties


Foresight exercises are valuable for the participatory process itself, allowing creative thinking beyond current constraints


The methodology helps organizations adjust strategies as reality unfolds and diverges from the predicted scenarios


Summary

All speakers agree that strategic foresight is a valuable methodology that helps stakeholders think creatively about possible futures and prepare for uncertainties, with the process itself being as important as the outcomes


Topics

Legal and regulatory


Geopolitical developments and state actions are increasingly important drivers in internet governance

Speakers

– Julia Pohler
– Anriette Esterhuysen
– Gbenga Sesan

Arguments

Geopolitics and state actions emerged as the primary driving factors across most scenarios, more than anticipated


States have always had profound impact on governance inclusivity and should not be seen as undermining multistakeholder ideals


Current geopolitical developments are moving faster than the scenarios predicted, with increased state involvement


Summary

There is consensus that states and geopolitical factors have become more prominent in internet governance than previously anticipated, with this trend accelerating faster than expected


Topics

Legal and regulatory


Current multistakeholder governance faces significant challenges and needs renewal

Speakers

– Julia Pohler
– Anriette Esterhuysen
– Gbenga Sesan

Arguments

All scenarios except one showed a bleak future for multistakeholderism, with processes being hollowed out or institutionalized beyond meaning


The IGF needs renewal and redesign to tackle difficult questions without seeking consensus on everything


Multistakeholder governance faces threats from those seeking more pragmatic but less inclusive models


Summary

All speakers acknowledge that multistakeholder governance is facing serious challenges and requires significant reform to remain relevant and effective


Topics

Legal and regulatory


Future processes should be more interactive and participatory

Speakers

– Julia Pohler
– Anriette Esterhuysen
– Audience

Arguments

Future foresight processes should include government representatives directly in task forces for better implementation


The IGF should adopt more participative methodologies including scenario games rather than traditional panel formats


Reports should be made accessible and used in interactive formats like games for broader stakeholder engagement


Summary

There is agreement that future governance processes and forums should move beyond traditional formats to more interactive, participatory approaches that engage all stakeholders more meaningfully


Topics

Legal and regulatory


Similar viewpoints

Both speakers believe that future governance discussions need to be more courageous in addressing difficult topics, including the role of big tech companies and challenging existing assumptions about multistakeholder processes

Speakers

– Julia Pohler
– Anriette Esterhuysen

Arguments

Future exercises should examine big tech business models and government enabling roles more courageously


The IGF needs renewal and redesign to tackle difficult questions without seeking consensus on everything


Topics

Legal and regulatory | Economic


Both speakers acknowledge that early optimistic visions about technology’s impact haven’t fully materialized, but emphasize the importance of maintaining hope and clear goals while being realistic about current challenges

Speakers

– Anriette Esterhuysen
– Gbenga Sesan

Arguments

Early hopes that technology would be an equalizer between rich and poor have not fully materialized


Stakeholders must maintain optimism and define what they want while being realistic about challenges


Topics

Development | Human rights


Both speakers view the role of states and multistakeholder governance as facilitative and connecting, rather than controlling or replacing existing governance mechanisms

Speakers

– Anriette Esterhuysen
– Philipp Schulte

Arguments

The multistakeholder approach should focus on connecting diverse decision-making processes rather than creating uniform governance


States should act as enablers ensuring all stakeholder groups can perform their roles effectively


Topics

Legal and regulatory


Unexpected consensus

States playing a more prominent role in internet governance is not necessarily negative

Speakers

– Julia Pohler
– Anriette Esterhuysen
– Philipp Schulte

Arguments

Geopolitics and state actions emerged as the primary driving factors across most scenarios, more than anticipated


States have always had profound impact on governance inclusivity and should not be seen as undermining multistakeholder ideals


States should act as enablers ensuring all stakeholder groups can perform their roles effectively


Explanation

Despite the traditional internet governance community’s wariness of state involvement, there was unexpected consensus that increased state engagement could be positive if states act as enablers rather than controllers, and that their involvement was perhaps inevitable and necessary


Topics

Legal and regulatory


The need for more courageous and direct discussions in multistakeholder forums

Speakers

– Julia Pohler
– Anriette Esterhuysen
– Gbenga Sesan

Arguments

Future exercises should examine big tech business models and government enabling roles more courageously


The IGF needs renewal and redesign to tackle difficult questions without seeking consensus on everything


Stakeholders must maintain optimism and define what they want while being realistic about challenges


Explanation

There was unexpected consensus that the multistakeholder community needs to move away from seeking consensus on everything and instead engage in more direct, potentially confrontational discussions about difficult issues like big tech power and government overreach


Topics

Legal and regulatory | Economic


Overall assessment

Summary

The speakers demonstrated strong consensus on several key issues: the value of strategic foresight methodology, the increasing importance of geopolitical factors in internet governance, the need for multistakeholder governance reform, and the importance of more participatory processes. There was also unexpected agreement that increased state involvement isn’t necessarily negative if properly channeled, and that the community needs more courageous discussions about difficult topics.


Consensus level

High level of consensus with constructive disagreement mainly on implementation details rather than fundamental principles. This suggests the internet governance community is ready for significant reforms and new approaches, with broad agreement on the direction of needed changes. The consensus around the need for renewal and more direct engagement indicates potential for meaningful evolution of governance processes.


Differences

Different viewpoints

Level of abstraction in foresight methodology and its practical utility

Speakers

– Anriette Esterhuysen
– Julia Pohler

Arguments

Foresight exercises are valuable for the participatory process itself, allowing creative thinking beyond current constraints


Strategic foresight creates plausible future scenarios rather than predictions to help prepare for uncertainties


Summary

Anriette acknowledges that abstraction is still an issue with foresight methodology and questions whether the report itself will be particularly useful, emphasizing that the exercise process is more valuable than the output. Julia focuses more on the methodology’s value in creating plausible scenarios for preparation, suggesting the reports can be practically useful for decision-making.


Topics

Legal and regulatory


Approach to defining stakeholder goals – dreaming vs. concrete specificity

Speakers

– Gbenga Sesan
– Anriette Esterhuysen

Arguments

Stakeholders must maintain optimism and define what they want while being realistic about challenges


The IGF needs renewal and redesign to tackle difficult questions without seeking consensus on everything


Summary

Gbenga emphasizes the importance of maintaining optimism and 'dreaming' about desired outcomes even when facing challenges, while Anriette argues that it is not just about dreaming but about concrete demands, and criticizes multistakeholder fora for often producing watered-down statements to avoid offending stakeholders. She wants more specific, potentially controversial positions.


Topics

Legal and regulatory | Human rights


Government participation in foresight task forces

Speakers

– Julia Pohler
– Philipp Schulte

Arguments

Future foresight processes should include government representatives directly in task forces for better implementation


States should act as enablers ensuring all stakeholder groups can perform their roles effectively


Summary

Julia advocates for direct government participation in task forces to improve dialogue and implementation, while Philipp expresses reluctance about government involvement, explaining that the government deliberately stayed off the task force to avoid a government-steered process that would treat the stakeholder scenarios as lip service.


Topics

Legal and regulatory


Unexpected differences

Value and utility of the foresight report output versus process

Speakers

– Anriette Esterhuysen
– Julia Pohler
– Gbenga Sesan

Arguments

Foresight exercises are valuable for the participatory process itself, allowing creative thinking beyond current constraints


Strategic foresight creates plausible future scenarios rather than predictions to help prepare for uncertainties


The scenarios can serve as conversation starters but need updating with current geopolitical realities


Explanation

Unexpectedly, the panelists who participated in the foresight exercise disagreed on its practical utility. Anriette, despite finding the process valuable, questioned whether the report itself would be useful and emphasized that participatory processes like this are mainly valuable to participants. Gbenga was more optimistic about the report’s utility as conversation starters, while Julia focused on the methodology’s value. This disagreement is unexpected because all three were involved in the same process but came away with different assessments of its practical value.


Topics

Legal and regulatory


Overall assessment

Summary

The main areas of disagreement centered on methodological approaches to foresight exercises, the balance between idealism and pragmatism in multistakeholder governance, and the appropriate level of government involvement in stakeholder processes. Despite participating in the same foresight exercise, speakers had different views on its practical utility and implementation.


Disagreement level

The level of disagreement was moderate and constructive rather than fundamental. Speakers shared common concerns about the future of multistakeholder governance and the increasing role of states, but differed on approaches and solutions. The disagreements reflect different perspectives on how to strengthen and evolve internet governance rather than fundamental opposition to shared goals. This suggests a healthy debate within the community about methods and strategies rather than irreconcilable differences on core principles.


Takeaways

Key takeaways

Strategic foresight is a valuable methodology for exploring plausible futures rather than making predictions, helping stakeholders prepare for uncertainties and disruptions


Geopolitics and state actions have emerged as the primary driving factors in internet governance scenarios, with current developments moving faster than anticipated


All developed scenarios except one showed a bleak future for multistakeholder governance, with processes being either hollowed out or over-institutionalized


The multistakeholder approach should focus on connecting diverse decision-making processes rather than creating uniform governance structures


States play a crucial enabling role in internet governance and should not be viewed as undermining multistakeholder ideals


The participatory process of developing scenarios is often more valuable than the final report itself


Technology’s role as an equalizer has not materialized as hoped, and big tech business models create digital barriers that fragment online spaces


The IGF needs renewal and redesign to become more participative, innovative, and willing to tackle difficult questions without requiring consensus


Resolutions and action items

The German government will proceed with publishing the strategic foresight report under the new ministry responsible for strategic foresight


Collaboration proposed between the German foresight project and the Remit Research project to develop scenario testing workshops and games


Future foresight processes should include government representatives directly in task forces for better implementation and understanding


The IGF should consider adopting new session formats including scenario games rather than traditional panel discussions


The scenarios need updating with current geopolitical realities through addendums or annexes


Stakeholders should work together to define ‘what we want’ in concrete terms rather than abstract ideals


Unresolved issues

How to make foresight reports more accessible and useful for daily work beyond the participatory process


The challenge of maintaining stakeholder engagement throughout lengthy foresight exercises


How to balance the need for government involvement with maintaining genuine multistakeholder processes


The tension between being realistic about current challenges while maintaining optimism for desired outcomes


How to address the role of big technology companies in fragmenting digital spaces


The future mandate and structure of the IGF, particularly regarding the 2026 discussions


How to make multistakeholder forums more willing to tackle controversial issues without losing participants


Suggested compromises

Governments should act as enablers ensuring all stakeholder groups can perform their roles effectively, rather than dominating processes


Multistakeholder governance should embrace diverse decision-making processes that overlap and engage with each other rather than seeking uniform approaches


Future scenario exercises should balance German/European perspectives with global and diverse viewpoints through targeted interviews and validation


The IGF should become ‘braver’ in asking difficult questions while maintaining its inclusive character


Foresight exercises should examine both government regulation and big tech business models as sources of digital fragmentation


Strategic foresight should be used as an ongoing adjustment tool rather than a one-time prediction exercise


Thought provoking comments

I think my only sort of one, I would have liked to be part of a focus group or a group at some point. I think I found it, I would have found it more interesting in some ways to have a group dynamic. And then I think my only other question about it as well is the way in which you treat multi-stakeholder in how you are approaching the future of Internet governance. And I think in that sense, the study itself, I think, perhaps did not unpack or deconstruct what multi-stakeholder means.

Speaker

Anriette Esterhuysen


Reason

This comment was insightful because it identified a fundamental methodological limitation and conceptual weakness in the foresight exercise. Esterhuysen pointed out that the study treated ‘multi-stakeholder’ as a one-dimensional concept without deconstructing its complexity, which is crucial given that multi-stakeholderism is central to internet governance discussions.


Impact

This critique shifted the conversation toward examining the limitations of current multi-stakeholder approaches and sparked deeper reflection on whether the focus should be on ‘multi-stakeholder governance’ or simply ‘effective, accountable governance.’ It also influenced Julia’s later admission that multi-stakeholderism appeared to have a bleak future in most scenarios.


So I think when we got to the stage where we really wrote the scenarios, and I looked at them with some distance after a while, I think what strikes me most is that the most important factor in almost all scenarios… is actually the role of states and the role of governments… And we wrote these scenarios before President Trump took office again. And before we kind of saw this increase of geopolitical tensions… So I think today we would have gone even further in emphasizing the role of geopolitics and geoeconomics… the reality is actually moving faster than we thought it would.

Speaker

Julia Pohler


Reason

This observation was particularly thought-provoking because it revealed how rapidly geopolitical realities were outpacing even recent foresight exercises. It highlighted the dominance of state actors over other stakeholders in shaping internet governance futures, which challenges traditional multi-stakeholder ideals.


Impact

This comment fundamentally reframed the discussion around the central role of states in internet governance, leading other panelists to acknowledge this reality rather than resist it. It sparked a conversation about how to work with, rather than around, state power in multi-stakeholder processes.


I think in all of these scenarios we ended up writing possible futures in where multi-stakeholder processes are either being hollowed out or kind of completely undermined by corporate actors and state actors… So I would say that in all of these scenarios, somehow multi-stakeholderism and governance has outlived its promises.

Speaker

Julia Pohler


Reason

This was a stark and honest assessment that challenged the fundamental assumptions of the internet governance community. The admission that their scenarios showed multi-stakeholderism failing across different futures was a sobering reality check for the field.


Impact

This comment created a turning point in the discussion, moving from abstract scenario planning to concrete concerns about the viability of current governance models. It prompted other speakers to defend and redefine multi-stakeholderism, leading to more nuanced discussions about what effective governance actually means.


So much as we like this, I don’t know, there’s this kind of fairytale notion of multi-stakeholder governance as this alternative dimension of perfect governance. I mean, I see it as a way, a way of arriving at more accountable, inclusive, effective governance. And states are a big part of that.

Speaker

Anriette Esterhuysen


Reason

This comment was insightful because it reframed multi-stakeholder governance from an idealistic end goal to a pragmatic methodology. It challenged the community’s tendency to romanticize multi-stakeholderism while acknowledging the legitimate and necessary role of states.


Impact

This reframing helped move the conversation away from defending an idealized model toward discussing practical approaches to inclusive governance. It provided a more mature perspective that influenced subsequent discussions about how different stakeholders can work together effectively.


I think we do need to think more creatively. And I’m just going to give one example… to see the European states in particular, shell-shocked, because it was so difficult for them to operate in this context, when a long-time partner in the Internet governance and World Summit process, the US, was moving outside or taking on a different position. My first thought during that entire week was, I wish these governments had all done some foresight work.

Speaker

Anriette Esterhuysen


Reason

This concrete example powerfully illustrated the practical value of foresight exercises. By describing how unprepared governments were for geopolitical shifts, it demonstrated why scenario planning is essential for effective governance and diplomacy.


Impact

This example shifted the discussion from abstract methodology to concrete applications, helping participants understand the real-world value of foresight work. It reinforced the argument for more widespread adoption of these techniques in government and international relations.


I think sometimes we say what we want and particularly when we try and say it in a multi-stakeholder way, you know, it sounds like some kind of sort of watered down set of wedding vows… I want fair tax payment by big tech so that countries who need revenue to actually build a fiber optic backbone… I want data flows that are not based on an extractive sort of colonial type model… but it’s almost impossible to say those things in the context of so many multi-stakeholder fora because you don’t want to offend the private sector.

Speaker

Anriette Esterhuysen


Reason

This comment was particularly provocative because it exposed the tendency of multi-stakeholder processes to avoid difficult topics in favor of consensus-building, resulting in bland, ineffective outcomes. The specific examples made abstract governance discussions concrete and political.


Impact

This critique sparked a broader conversation about the need for ‘braver’ multi-stakeholder processes that can tackle controversial issues. It influenced the final recommendations about redesigning the IGF to be more participatory and willing to address difficult questions without requiring consensus.


Overall assessment

These key comments fundamentally shifted the discussion from a celebratory presentation of a foresight exercise to a critical examination of the current state and future viability of internet governance models. The conversation evolved through several phases: initial methodological critiques led to acknowledgment of the dominant role of states, which prompted honest assessment of multi-stakeholderism’s limitations, ultimately resulting in calls for more pragmatic, brave, and creative approaches to governance. The panelists’ willingness to challenge sacred assumptions about multi-stakeholder governance created space for more mature and realistic discussions about how to achieve effective, inclusive governance in a rapidly changing geopolitical landscape. The discussion demonstrated how foresight exercises can serve not just as planning tools, but as catalysts for fundamental reconsideration of existing approaches and assumptions.


Follow-up questions

How can the scenarios be updated to reflect rapidly changing geopolitical realities, particularly after recent political developments?

Speaker

Gbenga Sesan


Explanation

The scenarios were developed before recent geopolitical changes and may need updating as reality is moving faster than anticipated, requiring an addendum or annex to maintain relevance


How can multi-stakeholder processes be transformed to make them more meaningful and avoid being hollowed out or institutionalized to the point of losing their bottom-up character?

Speaker

Julia Pohler


Explanation

All scenarios except one showed a bleak future for multi-stakeholderism, suggesting current models may be outliving their promises and need transformation


How can strategic foresight exercises be made more participatory and engaging, potentially through gaming methodologies?

Speaker

Professor Roberta Haar


Explanation

There’s interest in adapting the scenario data into interactive games and collaborative workshops to make policy discussions more engaging and meaningful


What concrete actions should the German government take based on the scenarios developed, and how can this be made more transparent to stakeholders?

Speaker

Julia Pohler


Explanation

There’s a need for clearer understanding of how the scenarios will be implemented and what specific policy actions will result from the foresight exercise


How can government representatives be better integrated into future foresight task forces to improve mutual understanding?

Speaker

Julia Pohler


Explanation

Having government officials directly participate in scenario development could help both stakeholders understand the impact of their contributions and help governments understand different perspectives


What is the future mandate and structure of the IGF, and when will this be discussed?

Speaker

Bertrand de la Chapelle


Explanation

The evolution of IGF’s institutional arrangements needs to be addressed, particularly in the context of WSIS+20 discussions and the need for more effective multi-stakeholder governance


How can the IGF be redesigned to be more participative, innovative, and willing to tackle difficult questions without consensus?

Speaker

Anriette Esterhuysen


Explanation

Current IGF formats may be too institutionalized and risk-averse, requiring new methodologies and greater courage to address contentious issues effectively


What role should governments play in creating enabling environments for multi-stakeholder governance beyond just regulation and repression?

Speaker

Anriette Esterhuysen


Explanation

Governments have broader toolkits available and could be more creative partners in enabling inclusive, accountable, and creative governance processes


How do dominant technology companies’ business models fragment digital spaces and create barriers, and what are the implications for internet governance?

Speaker

Julia Pohler


Explanation

There’s insufficient attention to how platform economy business models contribute to digital fragmentation, compared to focus on government-driven fragmentation


How can foresight methodologies better account for ‘black swan’ events and unexpected developments that are difficult to anticipate?

Speaker

Bertrand de la Chapelle


Explanation

Historical examples show that major technological and social changes often come from unexpected directions, challenging the predictive capacity of scenario planning


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.