Parliamentary Session 5 Parliamentary Exchange Enhancing Digital Policy Practices

Session at a glance

Summary

This discussion focused on parliamentary approaches to regulating harmful online content while balancing digital safety with freedom of expression and human rights. Parliamentarians from Pakistan, Argentina, Nepal, Bulgaria, and South Africa shared their experiences with digital legislation and the challenges of creating effective regulatory frameworks.


Anusha Rahman Ahmad Khan from Pakistan highlighted the urgent need for social media platforms to respond more quickly to content removal requests, particularly regarding AI-generated harassment content targeting women and girls. She emphasized that delayed responses can have devastating consequences, including suicide, and called for platforms to be more culturally sensitive. Franco Metaza from Argentina’s Mercosur Parliament discussed harmful content including “fatphobia” that promotes eating disorders among young girls, and shared how disinformation led to an assassination attempt on a political leader, demonstrating the real-world dangers of fake news.


Yogesh Bhattarai from Nepal stressed the importance of regulating rather than controlling digital platforms while strengthening democratic institutions and maintaining constitutional protections for freedom of expression. Tsvetelina Penkova from Bulgaria outlined the European Union’s comprehensive legislative approach, including the Digital Services Act, Digital Markets Act, and GDPR, emphasizing human-centric digital transformation and the challenges of enforcement across 27 member states.


Ashley Sauls from South Africa discussed how disinformation about his country affected international relations and highlighted the need for balanced approaches that don’t infringe on privacy and human rights. The discussion also addressed youth engagement in policymaking, cybercrime legislation challenges, and the role of private sector companies in content moderation and capacity building. Participants emphasized the need for multi-stakeholder approaches, international cooperation, and the recognition of digital rights as a potential fourth generation of human rights.


Key points

## Major Discussion Points:


– **Platform accountability and content moderation challenges**: Multiple speakers highlighted the struggle with social media platforms’ responsiveness to government requests for harmful content removal, particularly regarding gender-based violence, harassment, and culturally sensitive content. Pakistan’s experience showed platforms treating regulatory requests as optional rather than legally binding.


– **Balancing freedom of expression with protection from harm**: Parliamentarians emphasized the need to protect vulnerable groups (especially women, children, and minorities) from online harassment, disinformation, and harmful content while preserving democratic freedoms and human rights. This tension between regulation and liberty was a central theme across different regions.


– **Legislative frameworks and enforcement challenges**: Speakers shared experiences with cybercrime laws, digital services acts, and content regulation, noting that having laws is insufficient without proper enforcement mechanisms and capacity. The EU’s comprehensive approach (DSA, DMA, GDPR, AI Act) was contrasted with implementation challenges in smaller countries.


– **Youth engagement and digital literacy**: The discussion emphasized involving young people in policymaking processes and the critical need for digital literacy programs to help users identify misinformation, develop critical thinking skills, and navigate online spaces safely.


– **Multi-stakeholder cooperation and capacity building**: Speakers called for enhanced collaboration between governments, civil society, private sector, and international organizations, with particular emphasis on the need for capacity building for parliamentarians and public officials to understand emerging technologies like AI.


## Overall Purpose:


The discussion aimed to facilitate knowledge sharing among parliamentarians from different regions about their experiences with digital governance, content regulation, and creating safer online environments while maintaining democratic principles and human rights protections.


## Overall Tone:


The discussion maintained a collaborative and constructive tone throughout, with speakers sharing both challenges and solutions from their respective contexts. While there were moments of criticism directed at tech platforms and concerns about enforcement gaps, the overall atmosphere remained professional and solution-oriented. The tone became slightly more technical and urgent when discussing specific harms (suicide, harassment, disinformation affecting democracy) but concluded on a forward-looking note emphasizing cooperation and shared responsibility.


Speakers

**Speakers from the provided list:**


– **Anusha Rahman Ahmad Khan** – Former Minister for Technology (Pakistan), worked on cybercrime legislation including the Prevention of Electronic Crimes Act 2016


– **Sorina Teleanu** – Session moderator/chair


– **Franco Metaza** – Parliamentarian from Mercosur (regional parliament of South America covering Brazil, Argentina, Uruguay, Paraguay, and Bolivia)


– **Yogesh Bhattarai** – Member of Parliament representing the Federal Democratic Republic of Nepal


– **Tsvetelina Penkova** – Member of the European Parliament representing Bulgaria


– **Ashley Sauls** – South African parliamentarian


– **Raoul Danniel Abellar Manuel** – Member of the Philippine House of Representatives


– **Bibek Silwal** – Advocate for youth in policy from Nepal


– **Olga Reis** – Private sector representative from Google, covers AI opportunity agenda for emerging markets


– **Anne McCormick** – Private sector representative from Ernst & Young (EY)


– **Amy Mitchell** – Representative from the Center for News, Technology and Innovation (United States)


– **Audience member** – Honorable representative from the Democratic Republic of the Congo, Kinshasa


**Additional speakers:**


None identified beyond the provided speaker list.


Full session report

# Parliamentary Approaches to Digital Governance: Balancing Online Safety with Democratic Freedoms


## Executive Summary


This comprehensive discussion brought together parliamentarians from across the globe to examine the complex challenges of regulating harmful online content whilst preserving fundamental democratic principles. The session, moderated by Sorina Teleanu, featured representatives from Pakistan, Argentina, Nepal, Bulgaria, South Africa, and the Philippines, alongside private sector voices and civil society advocates. The dialogue revealed both shared concerns and divergent approaches to digital governance, with particular emphasis on protecting vulnerable populations, enhancing platform accountability, and fostering international cooperation.


## Key Themes and Regional Perspectives


### Platform Accountability and Cultural Sensitivity


The discussion opened with a powerful intervention from Anusha Rahman Ahmad Khan, Pakistan’s former Minister for Technology, who articulated a fundamental challenge facing governments worldwide. She emphasised that the core issue is not a geopolitical struggle between East and West, but rather “a fight between revenue generation entities versus a revenue curbing request.” This economic framing of platform behaviour resonated throughout the session, highlighting how social media companies prioritise profit over cultural sensitivity and public safety.


Khan explained that “every single post on the social media platform is a revenue generating mechanism,” which creates inherent conflicts with content moderation requests. She shared disturbing examples of platforms’ inadequate responses to government requests for content removal, particularly regarding AI-generated harassment targeting women and girls. She noted that delayed responses can have devastating consequences, including suicide, and called for platforms to demonstrate greater cultural awareness in their content moderation decisions.


Khan also highlighted Pakistan’s innovative approach to AI governance, including an AI-powered Senate chatbot project. She referenced Pakistan’s Prevention of Electronic Crimes Act, which was developed over two years starting in 2014 and enacted in 2016. Her frustration was palpable as she declared: “We are now tired of waiting and I would urge and request all the other parliamentarians to come together to make a joint strategy where we can collectively speak to the social media platforms.”


Franco Metaza from Argentina’s Mercosur Parliament (the regional parliament of South America comprising Brazil, Argentina, Uruguay, Paraguay, and Bolivia, with 100 parliamentarians) reinforced these concerns with specific examples of harmful content. Speaking in Spanish, as he had announced, he detailed how platforms promote “racism, xenophobia, homophobia, explicit violence, banalization of the use of drugs, and fatphobia.” He provided particularly disturbing examples of “fatphobia” affecting young girls, noting that 13-14-year-old girls in Brazil are seeking aesthetic surgeries due to harmful content promoting unrealistic body standards.


Metaza also shared how disinformation led to an assassination attempt on a political leader, demonstrating the real-world dangers of inadequately moderated content. He offered a compelling metaphor, comparing unregulated social media consumption to “going at full speed with a vehicle without knowing what is in front of us or without having a traffic light,” which helped frame regulation as a safety necessity rather than freedom restriction.


### Legislative Frameworks and Implementation Challenges


The discussion revealed significant variation in legislative approaches across different regions. Tsvetelina Penkova from Bulgaria outlined the European Union’s comprehensive strategy, including the Digital Services Act (DSA), Digital Markets Act (DMA), GDPR, the European Democracy Action Plan, Media Freedom Act, and the emerging AI Act. Despite criticism of the AI Act, she defended it as “probably the best one which protects people” while ensuring “innovation and growth” alongside “protecting citizens’ rights.” She emphasised the EU’s commitment to human-centric digital transformation whilst acknowledging the substantial challenges of enforcement across 27 member states with different legal traditions and capacities.


In contrast, Yogesh Bhattarai from Nepal advocated for a more collaborative approach, arguing that “digital platforms should be regulated, not controlled.” He stressed the importance of cooperation and collaboration rather than strict governmental control, whilst ensuring constitutional compliance with freedom of speech guarantees. Nepal’s approach involves engaging youth through national and internet governance forums in legislative processes. Bhattarai noted Nepal’s linguistic diversity, with “125 languages and 125 castes,” which adds complexity to content moderation challenges.


Ashley Sauls from South Africa provided multilingual greetings and highlighted his country’s multi-faceted legislative response, including the Protection of Personal Information Act, Cybercrimes Act, and Films and Publications Act. He emphasised the importance of multi-stakeholder approaches and warned against policies that might infringe on privacy and human rights. Sauls also introduced the concerning concept of “digital apartheid”, highlighting how AI training can perpetuate historical biases and discrimination.


Sauls shared a powerful example of how disinformation about “white minority genocide in South Africa” affected US government decisions and led to the cancellation of a rugby match between Atlanta Secondary School and Liff Burra Grammar School due to safety concerns. He quoted the Minister of Sport’s philosophy that “a child in sport is a child out of court,” emphasising sport’s role in social cohesion. He also noted South Africa’s recent transition from majority government to a “government of national unity.”


The Philippines’ experience, as shared by Raoul Danniel Abellar Manuel from the House of Representatives, provided a cautionary tale about legislative overreach. He criticised the country’s Cybercrime Prevention Act of 2012, particularly its cyber libel provisions that have been misused against journalists and teachers, demonstrating how well-intentioned legislation can be weaponised against legitimate expression.


### International Cooperation and Human Rights Framework


A particularly thought-provoking intervention came from the representative from the Democratic Republic of Congo, who proposed that the United Nations should recognise digital rights as a fourth generation of human rights. Speaking in French, he argued that such recognition would provide a common framework for national legislation, similar to existing human rights generations, and could lead to constitutional incorporation in many countries.


This proposal received immediate support from Tsvetelina Penkova, who acknowledged it could resolve many enforcement challenges currently faced by individual nations. The suggestion elevated the discussion from practical policy implementation to fundamental questions about the nature of rights in the digital age.


### Youth Engagement and Digital Literacy


A significant portion of the discussion focused on the critical role of young people in digital policymaking. Franco Metaza argued that “youth participation should be transversal across all policy-making rather than segregated into youth-only discussions,” advocating for integrated rather than separate consultation processes.


Bibek Silwal, an advocate for youth in policy from Nepal, emphasised that young people serve as “positive catalysts in policy implementation” and should be involved from initial policymaking through public outreach. He highlighted the importance of digital literacy programmes and critical thinking skills development to help users identify misinformation.


Tsvetelina Penkova noted that young people understand the economic implications of poor regulation, recognising that “the digital economy will shrink without proper regulation.” This insight challenged assumptions about youth attitudes towards digital governance, suggesting greater sophistication in their policy preferences than often assumed.


### Private Sector Responsibility and AI Governance


The session included notable contributions from private sector representatives, creating moments of both tension and unexpected consensus. Olga Reis from Google presented current content moderation efforts, citing statistics about video removal from YouTube, with “55% removed before being watched, 27% removed with less than 10 views.” She also mentioned Google’s AI Campus programme, which has already trained “500,000 officials” in AI literacy.


Anne McCormick from Ernst & Young highlighted the private sector’s need for clarity on AI liability frameworks as adoption spreads beyond large technology companies to smaller economic actors. She emphasised the importance of independent oversight and transparency mechanisms throughout the AI lifecycle.


The discussion revealed interesting tensions in platform relationships. Franco Metaza both praised YouTube Kids as a successful model of controlled digital ecosystems for children whilst simultaneously criticising Google for allowing defamatory content in search results. He specifically cited how searching for Cristina Fernández de Kirchner showed “ladrona de la nación argentina” (thief of the Argentine nation) in Google’s knowledge panel, demonstrating inconsistent standards across different Google services.


## Areas of Consensus and Disagreement


### Strong Agreements


The discussion revealed remarkable consensus on several fundamental principles. All speakers agreed that social media platforms need to take greater responsibility for content moderation and harm prevention. There was universal acknowledgement that protecting vulnerable populations, particularly children and women, must be a priority in digital governance frameworks.


Youth engagement emerged as another area of strong agreement, with all speakers supporting meaningful integration of young people into policymaking processes. Similarly, there was consensus on the necessity of multi-stakeholder approaches involving government, civil society, private sector, and international organisations.


### Key Disagreements


Despite broad agreement on principles, significant disagreements emerged regarding implementation approaches. The most notable tension concerned decision-making authority for content removal, with Anusha Rahman Ahmad Khan advocating for stronger government authority whilst Tsvetelina Penkova insisted on judicial oversight to prevent government overreach.


Speakers also disagreed on the appropriate level of regulation, with Yogesh Bhattarai emphasising light-touch regulation focusing on cooperation, whilst Franco Metaza supported stronger parliamentary regulation comparable to traffic laws. These disagreements reflect deeper tensions between national sovereignty and international coordination.


## Unresolved Challenges and Future Directions


### Implementation and Enforcement


The discussion highlighted persistent challenges in translating legislative frameworks into effective enforcement. Multiple speakers noted that having laws is insufficient without proper implementation mechanisms and capacity. Cultural sensitivity in content moderation remains particularly challenging, with platforms making uniform decisions without considering local contexts that could have severe consequences for users.


### Capacity Building and Education


Several speakers emphasised the critical need for capacity building amongst parliamentarians and public officials to understand emerging technologies like AI. Digital literacy emerged as equally important for general populations, with speakers calling for educational campaigns to help users identify misinformation and develop critical thinking skills.


### Economic and Social Justice Considerations


Ashley Sauls’s introduction of the “digital apartheid” concept highlighted how AI systems can perpetuate historical injustices and create new forms of discrimination. This concern extends beyond technical bias to fundamental questions about who benefits from digital transformation and who bears its costs.


## Recommendations and Action Items


### Immediate Actions


Parliamentarians agreed on the need to create joint strategies for collectively addressing social media platforms, recognising that individual national approaches lack sufficient leverage against global technology companies. Educational initiatives emerged as a priority, with speakers calling for campaigns to teach young people to identify fake news and develop critical thinking skills.


### Medium-term Developments


The proposal for UN recognition of digital rights as a fourth generation of human rights represents a significant medium-term objective that could provide clearer frameworks for national legislation. Platform accountability mechanisms need strengthening, with the YouTube Kids model suggested as a template for broader child protection measures.


### Long-term Structural Changes


The discussion pointed towards the need for coordinated international frameworks rather than individual national approaches. The integration of digital rights into constitutional frameworks could provide more robust protection against governmental overreach whilst ensuring consistent protection standards.


## Conclusion


This parliamentary discussion revealed both the urgency and complexity of digital governance challenges facing democracies worldwide. The session’s most valuable contribution was its reframing of digital governance from technical issues to fundamental questions about power, economics, and human rights in the digital age. Anusha Rahman Ahmad Khan’s characterisation of the struggle as one between “revenue generation entities versus revenue curbing requests” identified core tensions that must be addressed.


The proposal for digital rights as a fourth generation of human rights offers a potential framework for achieving balance between competing interests, but implementation will require unprecedented levels of international coordination. As parliamentarians continue to grapple with these challenges, the experiences shared provide valuable insights into both successful approaches and cautionary tales about legislative overreach.


The path forward requires sustained commitment to multi-stakeholder dialogue, international cooperation, and innovative approaches that can balance platform accountability with democratic freedoms whilst protecting the most vulnerable members of society.


Session transcript

Anusha Rahman Ahmad Khan: But there is this responsibility that goes with governments: to ensure that digital progress upholds human dignity, upholds democratic freedom, and gives access to everybody. We experienced in Pakistan, for example, that even when we made the law, the social media platforms continued to treat our requests as if they were two kilometers above the ground, not impacted by the law. And they decided to choose what content they were going to remove and what content they were going to keep on their platforms. So there has been, and still is, a continuous issue in our country when the regulator sends out a request to remove content. I will give you an example: this content relates to a girl, a university student, who is being harassed by somebody, and they have created content with AI or other means which looks real. By the time that content is removed, the life of that girl is gone. When I was legislating, I had dozens of examples of girls actually jumping off walls, killing themselves, committing suicide. And in my country, even an aspersion on a girl is enough to kill her. So even if they do not die physically, they are dead emotionally. We need to be very careful of the fact that the social media platforms have to be sensitive to the culture in which we are living. And this is the real challenge: how to make the social media platforms sensitive to cultures. For them, it is revenue. Every single post on the social media platform is a revenue generating mechanism. It is not a fight between East or West. It is a fight between revenue generation entities versus a revenue curbing request.
So we need and we expect that on this platform today, where the UN enters with the IGF, we can together use the technology platforms without fearing that their content is going to be harmful for our children, for our girls, for our women, for the vulnerable. Believing in technology, and being a former minister for technology, we all believe that we have to explore, and we promise and want to ensure, that we are going to use technology for shaping the future of legislation, transparency, and bringing more efficiency and effectiveness to the way parliamentarians work. In the Senate of Pakistan, for example, advancing the vision of technology adoption, the chairman of the Senate has for the first time taken a concrete step towards developing an AI-powered Senate chatbot. It is a virtual assistant designed to support lawmakers, secretariat staff and citizens with real-time access to legislative data, procedural guidance and multilingual services. This project proposes full-scale design, development and institutional deployment of the Senate chatbot, transforming it from a promising prototype into a high-impact digital parliamentary assistance tool. So we are using technology at the same time. My concern, and your question, still takes us back to this discussion table: for how long are we going to wait for the positive use and absorption of technology while it continues to be abused online by vested stakeholders, and how long will social media platforms take to listen to governments and their requests to remove objectionable content and to secure the vulnerable groups and the non-vulnerable groups equally online.
And we are now tired of waiting and I would urge and request all the other parliamentarians to come together to make a joint strategy where we can collectively speak to the social media platforms and help our vulnerable citizens in our respective countries to ensure that their offline rights are as secure online. This is what my humble request is. Thank you.


Sorina Teleanu: Thank you so much for bringing to the table so many issues. If I may ask a follow-up question, you mentioned you work on a cybercrime law. Was that law passed already in the parliament? Sorry, Ms. Rahman? If I may ask a follow-up question. No, I have to, your voice is actually echoing, yes. Yeah, you have to listen. Yeah. Let me also remove this. So, if I may ask a follow-up question. You mentioned you were working on a cybercrime law. Was it passed in the parliament? Is it approved?


Anusha Rahman Ahmad Khan: Yes, it was made in 2016. I started working on it in 2014. It took me two years of collaborative effort: bringing all the parliamentarians on board, listening to civil society, to the NGOs, to independent groups, to the media, because, as I said, anything that happens around the Internet is perceived as an attempt to curb freedom of expression, which is not the case. It is not about curbing freedom of expression. We believe in it and we uphold it. It is about protecting children, women, girls, and all vulnerable segments, and in this case, men and boys are equally vulnerable, because the dignity of a natural person is extremely important to protect. This is what we made the media understand: that we are not targeting electronic media, we are not targeting print media. We are talking about social media, which is full of misinformation, propaganda, and fake news. And we need to protect our citizens from this, because it leads to harassment and to other kinds of vulnerable situations, which need to be looked at and criminalized in our law. So, in 2016, we made the cybercrime law. It is called the Prevention of Electronic Crimes Act, and it is a consensus document of the entire 240 million people, represented in parliament by their MPs in both the National Assembly and the Senate.


Sorina Teleanu: Thank you. I’ll get back later to you with one more question, but let me give the floor to Mr. Metaza to share your experiences.


Franco Metaza: Hello. Good afternoon to everybody. I am going to speak in Spanish, which is one of the official languages of my regional parliament. I want to start by thanking the Department of Economic and Social Affairs of the United Nations, the Parliament of Norway, and the Inter-Parliamentary Union (IPU) for organizing this important parliamentary track in the framework of the global Internet Governance Forum. My parliament is the Parliament of Mercosur, the regional parliament of South America. It is made up of Brazil, Argentina, Uruguay, Paraguay, and Bolivia, which is finalizing its internal procedures to become a full member. We are 100 parliamentarians, and we are having a very, very heated debate at the moment in our region regarding these topics. To answer one of the questions that Sorina asked at the beginning of this panel: the harmful content that we see with a lot of concern in our region, we are starting to list and, in some way, to codify. We are talking about racism, xenophobia, homophobia, explicit violence, banalization of the use of drugs, and one in particular, as you asked for examples, Sorina. I am going to give the example of a project that I presented in my parliament regarding something that perhaps does not have an exact translation: we are talking about fatphobia. What we are proposing is that there is content so harmful on social media, specifically on Instagram and on TikTok, that it leads young girls, the general population too, but we are very concerned about young girls, to behaviors that result in anorexia and bulimia, in what we call eating disorders. In this sense, we are very concerned about what these images generate.
We have run tests: if you register on social media saying that you are a 13-year-old girl, the content they bombard you with is images, some real, others fake, made with artificial intelligence, of extremely skinny bodies, impossible to achieve naturally, and then advice on extreme diets, and then advice or advertisements about surgeries. In Brazil, for example, 13-14-year-old girls have started going to the doctor on their own to ask about the possibility of having aesthetic surgeries. We are entering a very, very complex situation in terms of harmful content. And as for this dichotomy between regulation and freedom of expression, I want to tell you that regulations, when they are made in parliaments where all social sectors and all political expressions are represented, will never go against freedom, because the regulations express the will of the majority. I give you an example. Cars, motor vehicles, began to appear in our societies, in the real world. We had to put a speed limit. We had to prohibit children from driving. And that was not against anyone's freedom. Well, I think that today the permanent scrolling to which we are all subjected is as harmful, or more so, than going at full speed in a vehicle without knowing what is in front of us or without having a traffic light. And finally, I want to pick up something that I heard a lot yesterday, that I heard this morning, and it seems to me that it is a concern that we all have. At least, well, here there are many stakeholders in the auditorium, but it is something that I heard especially from parliamentarians: how the other issue, disinformation and fake news, affects our democracy. Democracy that we, as parliamentarians, have the obligation to protect and take care of. Look, I am going to give you an example.
In my country, in Argentina, fake news began to circulate systematically about one of the leaders of my country, who was president twice and who is the main leader of the opposition, Cristina Fernández de Kirchner, about corruption, with many hate messages, and this went viral all over the networks. What ended up happening? A person who had consumed so many hate messages appeared at the door of her house and pulled the trigger of a gun pointed at her head. Fortunately, the gun did not fire, but look how far fake news and disinformation can go. Today, she is detained under the current president, in the framework of a great confusion of fake news and disinformation. So we have the obligation, as parliamentarians, to put a stop to it, or at least to temper what social networks are, what the Internet without governance is, as the senator said here, to protect our democracies. It is not fair to our democracies, which are the last thing we have left in this complex moment, where at any time a nuclear bomb could explode; the only thing we have left to safeguard humanity are democracies. Let us please take care of democracies. Thank you.


Sorina Teleanu: Thank you also, including for the metaphors you raised. I like the one about having rules for the use of social media, similar to having rules for driving a car on the road. We can unpack that a little later. One curiosity: within the Parliament of Mercosur, are you having these kinds of debates, and are you looking into doing something collaborative across the countries in dealing with harmful online content and creating a safer online environment? Again, not necessarily only in terms of legislation, but also looking at how to build more awareness and capacity among users themselves, so they can be better prepared to deal with harmful content. Because, I guess, not all the answers lie in passing a law and then expecting it to be applied, but also in seeing how you can prepare people to deal with these kinds of things. Any reflections?


Franco Metaza: Yes, Sorina, what you ask is important. Today, in the Parliament of Mercosur, if there is something on which we have consensus, it is that companies can do more than what they are doing. They have the budget, they have the money, and they are not making enough of an effort to put a stop to harmful content and fake news. On that we are absolutely in agreement. We believe that is where we have to go. Thank you.


Sorina Teleanu: I think I’m already seeing a common thread about more responsibility for the private sector. And I know we have some private sector in the room and I think we will want to hear from them as well, but later on that. All right, let us continue. And let’s hear from Mr. Bhattarai, please.


Yogesh Bhattarai: Thank you very much, Sorina. Excellencies, distinguished delegates, fellow parliamentarians, ladies and gentlemen, friends from the media. Good afternoon. It is a profound honor to be here today. Representing the Federal Democratic Republic of Nepal, I extend my sincere gratitude to the organizers, especially the UN and the government of Norway, for creating this space where we can share our experiences and seek collective wisdom. The topic before us, building a healthy information ecosystem, is one of the defining challenges of our era. This is not only a technical issue; it is a milestone in our ongoing democratic journey. Like many of you, we have witnessed the transformative power of the digital age. The Internet and social media have opened up avenues for expression, connected our diverse communities, and given citizens a powerful platform to engage in the civic life of our nation. It has been a remarkable force for democratization. However, this progress is accompanied by complex challenges. We grapple with the very real harms of disinformation that can tear our social fabric, and the rise of online harassment that seeks to silence vulnerable voices. These are legitimate concerns that every responsible government must address. In Nepal, we are currently in the midst of a profound national conversation about how best to strike the balance between upholding freedom of expression and protecting our citizens from harm. This debate is reflected in the legislative proposals currently under discussion, including the proposed Social Media Bill and the Information Technology Bill. These proposals stem from a genuine desire to create a safe digital environment. As a parliamentarian committed to the universal values of human rights and people-based democracy, I believe we must proceed with the utmost care. 
The Constitution of Nepal guarantees freedom of speech and expression, and Article 19 establishes the right to information and communication as a fundamental right. Parliament will not accept any law that contradicts the provisions of the Constitution. In Nepal, we have the National Information Commission and the Press Council Nepal as independent oversight agencies. I and other MPs have been participating in programs organized by civil society organizations, where there are discussions on the right to information and communication. We are concerned about the negative impact of misinformation and disinformation on society. Everyone should be aware of the possibility that they can divide society by spreading confusion about caste, religion, race, gender, and profession. Misinformation and disinformation are also having an impact on the tensions and wars taking place in different parts of the world today. This has also become a challenge for national security. I am convinced that only civil liberties, human rights, open societies, democratic competition, equal access, and citizen resilience can make the state accountable to its citizens. The changes brought about by the revolution in the digital sector should strengthen the sovereignty of the nation and the people. They should support world peace and humanity. For this, digital platforms should be regulated, not controlled. Cooperation, collaboration, and solidarity should be strengthened. I am firmly convinced that the most effective and sustainable path forward lies in the empowerment and strengthening of democratic institutions. We believe that only a healthy information ecosystem can make healthy democratic practices strong and accountable. I believe that the Internet and digital platforms can connect people’s hearts, make life easier, and bring marginalized communities into the mainstream. Let us reaffirm our collective commitment to this principle. 
Let us share not just our challenges, but our highest aspirations. I am confident that through the collaboration and the shared dedication to human rights, we can build a digital future that is not only safe and orderly, but also open, vibrant, and fundamentally free.


Sorina Teleanu: Thank you so much. I took quite a lot of notes while you were speaking, and I would like to get back to some points, maybe also later. But right now, I like how you said that we need responsible governance, and that states can and should be accountable to their own citizens. I think those are important points to keep in mind also when you work on legislation. And because you said that social media has to be regulated but not controlled, my question would be: how are you interacting with technology platforms as you work on this legislation in the country? Do you have any discussions with them, and how is that relationship going?


Yogesh Bhattarai: Yeah. Recently, the government submitted the bill on social media and digital platforms, and we have discussed it with many stakeholders from different parties and different organizations, and the government has requested suggestions from the different stakeholders. The process is ongoing, not yet concluded. So, I hope we will make a more effective law on social media and internet access, and especially on cybersecurity as well.


Sorina Teleanu: Thank you. Moving on to Ms. Penkova. Thank you.


Tsvetelina Penkova: Thank you, Sorina. I will touch upon the question about the specific legislation, because we have heard many examples, but we have the strong belief that Europe still plays the leading role when we are speaking about digital legislation. Of course, we are still at the stage of implementation, but let’s keep in mind that when we speak about EU legislation, we are representing 27 member states. So, if you allow me, I would start by emphasizing some of the key and most important EU legislations, and of course, it is not going to surprise many people in the room, as it was mentioned many times. I am starting with the Digital Services Act, which is the flagship EU legislation when we are speaking about the regulation of the digital space. The DSA is meant to tackle many of the issues and problems that were already mentioned throughout today’s discussions: protecting minors and vulnerable groups, tackling cyber violence, tackling harmful content and disinformation. So, everything, more or less, is part of the content of this very key and important legislative framework of the EU. But, of course, it cannot act on its own, so we need supporting legislation, and that is where we come to the Digital Markets Act, for instance, which complements the DSA by ensuring fair competition in the digital economy. We believe this is key: the DMA promotes greater choice for consumers and the interoperability of digital service providers. Here, I also have to mention data governance and the GDPR. Those are key legislations that reinforce individuals’ control over their data, while at the same time promoting trustworthy data sharing. So, we are speaking about protecting human rights in the digital space. 
When I mention the GDPR, I am sure this is probably one of the most popular legislations of the EU, and the most controversial one, but it also needs improvements, updates, and additions. Only last week, for instance, in the EU institutions, we finished the negotiations on the procedural rules for handling cross-border cases. So, when we are speaking about digital legislation, you have to be very pragmatic: the problems we are resolving today will be very different in tomorrow’s reality. We have to be very flexible, and that has been extremely challenging for regulators across the world, I would say, because you cannot always foresee the challenges in such a fast-growing and fast-developing segment of the economy as the digital field. I have spoken a lot about human protection and the main legislative framework that the EU provides, but allow me to mention two other key initiatives that we are working on, one of them still a work in progress, both focusing on media freedom. The European Democracy Action Plan, which is actually a plan, not necessarily a legislation, aims to strengthen media freedom while at the same time promoting pluralism. Basically, it combats disinformation in the very specific cases that we have been observing quite a lot, especially in the context of elections, for instance, and foreign interference. Those are quite significant global challenges that we see at the moment. And the Media Freedom Act, which is at the moment under negotiation, tries to protect the independence of the media, as well as transparency in media ownership and advertising. 
So, there are many, many specific examples which try to resolve the common problems that we are facing across the globe, but if you ask me to summarize the key priorities we have as Members of the European Parliament when we are working on those legislations, there are many more than four, but I have chosen to focus on four main ones for today’s session. The first: we really try to keep to a human-centric digital transformation, that is, a digital transition that protects citizens’ rights. The second: combating online hate and disinformation. As I have mentioned, probably the DSA’s main goal is to ensure that there are stronger enforcement mechanisms against cyber violence, and we can go into more detail if we need to. Third, and it was mentioned many times: digital literacy and resilience. Without tackling this issue, none of those legislations or enforcement efforts would be perceived, accepted, and successful. So, the EU strategy at the moment focuses quite significantly on digital education. And last, but not least important, of course, is children’s online safety. There are many sessions dedicated to this at the IGF. Tomorrow, we are having one with the Youth IGF, and the younger generation really has a very significant role to play in ensuring that this protection actually reaches the most vulnerable and the minors. And if you allow me the last 30 seconds: when I am speaking from the EU perspective, as I said, we have to take into account that we have 27 member states, and each one of them has a very different experience in enforcing those legislations. I am from a small member state, actually; I am from Bulgaria. So, we are still struggling, actually, to enforce and implement all those legislations that I have listed. 
But we have a very active civil society sector that, at the moment, for instance, is launching a lot of campaigns teaching the younger generation to identify fake news and to apply a bit more critical thinking. What I mean by that is, basically, to analyze the sources of the information, where the information is coming from, because we have observed that the younger generation lacks that questioning: they just see the information and accept it. I wanted to mention that example because, yes, we do face a lot of challenges, and some of those legislations are very, very complicated, but once you have state support, regional governance, and an active civil society, nothing is impossible.


Sorina Teleanu: Thank you so much for raising, I think, two important points, the first being the one on enforcement. It is one thing to have laws in place, but are authorities at the national level, as you are saying, empowered to actually put those laws into practice? I am from your neighboring country, Romania, and I am seeing the same challenges; it is not easy to put all of that in place. Excellent points also on literacy, capacity building, education, and building critical thinking in young users, but also in all of us. I think we would all benefit from a bit more critical thinking when we interact with digital technologies. If I may add one more thing that you might want to reflect on: how does the AI Act also connect to all of this when it comes to safer online environments and more transparency from private actors, for instance?


Tsvetelina Penkova: I am sure maybe some people would not agree with me, but I think the AI Act, as it stands in European legislation, is probably the best one because, again, it protects people. Of course, we have seen a lot of criticism, but what we want to do is ensure that there is innovation and growth, while the first priority is protecting citizens’ rights and ensuring that consumers have enough time to understand all the risks and challenges before a technology is put into very wide use. I think this was the protection mechanism and the way of thinking of the European Parliament when we were working on that legislation, and that is why it faced a lot of criticism.


Sorina Teleanu: Thank you. I think we can also come back to that later. All right, let’s hear from our final, but not least, speaker, Mr. Ashley Sauls, please.


Ashley Sauls: Thank you very much, Sorina. Esteemed parliamentarians, distinguished guests and fellow stakeholders in the realm of digital governance, being from a country where our constitution protects religious freedom, in line with my faith, I want to greet you in the name of our Lord and Saviour, Jesus Christ, and in my First Nation Indigenous Bushman mother tongue, at the brink of extinction, Mwenke Awunneki. It is with great honour that I address you today at this pivotal forum where we converge to deliberate on the future of digital policy practices. As we navigate the complexities of our increasingly interconnected world, South Africa stands as a testament to the transformative potential of digital technologies while simultaneously confronting unique challenges that demand thoughtful and inclusive policy frameworks. A practical example is the recent disinformation about a white minority genocide in South Africa, where an executive decision by the US government was largely made based on online information. The ripple effect of that was also a fuelled narrative about my race, classified as coloured, as a violent and gangster-ridden group. As a result, a local school, as an example, Atlanta Secondary School, was to host Liff Burra Grammar School from the UK in South Africa for a rugby match this July, but it was cancelled, and it was cancelled because parents feared what was said in the White House about our country. Our current Minister of Sport says a child in sport is a child out of court. My fellow parliamentarians from the UK, maybe you can help us convince management and parents to contribute to a different narrative through sport by ensuring that that match actually takes place. I hope somebody is listening to me about that. In South Africa, we recognise the profound impact of the digital landscape on our socio-economic fabric, with over 60% of our population accessing the internet. 
We see an unparalleled opportunity to enhance educational outcomes, stimulate economic growth and promote social inclusion. However, these opportunities are accompanied by significant obstacles, including the digital divide, cyber security threats and the need for robust regulatory frameworks that protect the rights of all citizens. Thus far, we have enacted the Protection of Personal Information Act, the Cybercrimes Act and the Film and Publications Act, which regulate these platforms. To effectively enhance our digital policy practices, we adopted a multi-stakeholder approach that engages government, civil society, the private sector and the public. This collaborative model is essential for fostering an inclusive digital economy where benefits are equitably distributed and innovation is harnessed to address local challenges. Moreover, as we advocate for robust cyber security measures, we must ensure that they do not infringe upon the rights to privacy and freedom of expression. South Africa is committed to aligning its digital policies with international standards, promoting a balanced approach that safeguards both security and fundamental human rights; as we would say in my mother tongue, goutes moet balance. By leveraging our position as a member of the African Union, we aim to encourage a continental dialogue that addresses common challenges and fosters regional cooperation in the realm of digital policy. In conclusion, the South African experience underscores the need for a proactive rather than the current reactive approach to digital governance. As we gather here today, let us reaffirm our commitment to a digital future that is inclusive, secure and respects the rights of all citizens. Together, we can craft policies that not only uplift our respective nations but also contribute to a more equitable global digital landscape, one that expresses the heart of politics and prioritises people above profits. Eyo. Thank you. Siyabonga kakhulu.


Sorina Teleanu: Thank you as well, also for highlighting what some of the previous speakers have mentioned: the fact that there can be, and there should be, a balance between protecting safety and security and ensuring the protection of human rights. We don’t have to give one up to protect the other, or the other way around. So thank you for highlighting that again. You mentioned the African Union, so my follow-up question to you would be: are there any examples of initiatives, discussions or projects being implemented, or even put in place right now, at the African Union level dealing with these issues that you might want to share with everyone? Sorry, again, I’m doing that. Apologies.


Ashley Sauls: Well, not specific programs; I don’t think it is as aggressive as one would want it to be. But I’d like to touch instead on the IGF approach. We have the South African Internet Governance Forum, and on that level, regionally and also continentally, there are a lot of programs and initiatives. And because of that approach, for the first time, I think, we as parliamentarians are participating; I think it’s the first time that we have actually joined the forum, and it is because of those engagements at that level.


Sorina Teleanu: Thank you. I’m glad to have you on board. All right, we’re kind of running out of time, and there are interventions to be made from the room. So let’s see. If you could introduce yourself, please.


Bibek Silwal: Thank you, Sorina. My name is Bibek Silwal and I’m from Nepal. I’m an advocate for youth in policy. So thank you very much to all of the members of parliament and senators; I think it was very enlightening in terms of the work that has been going on, not just across the region, but across the continents, with very different focus areas and different reasons. My question is regarding the involvement of youth in digital policymaking or, let’s say, policymaking in general. In each and every process where youth are involved, I think it amplifies the impact of the policymaking, whether in terms of implementation or in terms of the initial policymaking process. Youth are always a positive catalyst for reaching the last mile. So my question is to all of the parliaments in your regions: how have you been involving youth, and in the days to come, how do you plan to engage the youth in policymaking or, let’s say, in public outreach and awareness programs? Thank you.


Sorina Teleanu: Thank you. Let’s take the second question as well.


Raoul Danniel Abellar Manuel: Hello to fellow parliamentarians. I’m Raoul from the Philippines, a member of the Philippine House of Representatives. I’d like to ask for your ideas, or maybe concrete experiences, about combating cybercrime, because in the case of the Philippines, we’ve had the Cybercrime Prevention Act since 2012. Unfortunately, it contained a cyber libel provision that we flagged years ago. We had foreseen that it could be used by abusive or repressive leaders, and it turned out that it was actually used in recent times to go after journalists, even teachers, who simply had opinions that contradicted those of the government of the day. So right now, we are discussing a possible review of, and amendments to, our cybercrime law, and we really are very guarded with that process. So there might be thoughts coming from fellow parliamentarians. Thank you.


Sorina Teleanu: Thank you as well. Shall we take these two questions and then continue? Anything on youth engagement and experiences with cybercrime law? Anyone would like to share?


Franco Metaza: Well, regarding young people, in the case of my country, we strongly encourage the participation of young people. You can vote from the age of 16, and we are proud to have a large number of young parliamentarians in both of our chambers, including the House of Representatives. So we understand the participation of young people from a transversal point of view. Every time we make a law, when we listen to all parties, we obviously listen to young people. What we don’t like is to segregate young people, that is, young people talking among young people and solving the problems of young people while the rest of the world follows separate paths. For us, youth must be transversal to the political construction of society.


Tsvetelina Penkova: So, the young people had many key messages, but the ones I remember were, for example, that the digital economy will shrink if we don’t regulate it. The young generation understands that, and they want to be involved, they want to be asked, and they want to be part of the conversation. Children do require protection, but not in a way that limits their rights. An interesting point that they brought up, actually, was that bloggers are not restricted in terms of the content they publish, so this is something that regulators should pay attention to; they signaled that. So the young generation is very much ahead of many of the legislators when it comes to new trends, and they are an active stakeholder, so it is not a matter of whether they want to be involved; it is a matter of us going there, reaching out, and asking them. And on the comment on cybercrime, I am just going to give an example, again, of how it is dealt with in the DSA, for instance, when we are speaking about cyber violence: illegal content has to be taken down from the platforms without delay once it is detected, and this decision has to be taken by a judge, not by the government. This is one way to tackle the specific example that was given.


Yogesh Bhattarai: In Nepal, you know, Nepal is a very particular country, because we have so many languages, like 125 languages, and 125 different castes, and Nepal lies between China and India, two very big countries, so this issue is very special for us as well. And youth engagement is very important. We have a national Internet Governance Forum, and also a Youth Internet Governance Forum in Nepal, and our parliamentarians engage with them in making laws and in other legislative processes. On the second point, the cybercrime issue: we have a cybercrime law, and there is a branch within the police, a cybercrime branch, which investigates cybercrime issues and submits cases to the court, and the court then rules on the cybercrime.


Ashley Sauls: Thank you, Sorina. I think maybe I should start by being honest and transparent: in South Africa, we haven’t really had an emphasis on youth involvement as much, but with the dawn of the shift in our governmental structure nationally, that is beginning to change. Many of you will know that we had a one-party majority government since 1994, and now there is what is called the Government of National Unity, a coalition government, so there are different lines and different approaches coming together to form one government, and this has helped to make it practical. The leaders of the South African IGF, both the chairperson and the deputy chairperson, are young people, and they are here at the conference. And, like I said, this is the first time that we as parliamentarians are forming part of this, and that is because of this different approach to government, which now includes the voices of young people. And because we listen, I’m here today. So I think that’s a good step in the right direction for South Africa.


Sorina Teleanu: Thank you, everyone, for sharing the experiences. I think there were a few more points, if you would like to get back to the mic before we wrap up the session. There’s a mic next to you.


Audience: Thank you very much, Madam. I am an honourable member from the Democratic Republic of the Congo, Kinshasa. I would like to raise a somewhat more technical and really substantive question, because every time we talk about legislation and human rights, we all know that, yes, we are legislators, but not just anyone can vote on or draft a law. And it has been said here that most parliamentarians are not properly equipped. But I would like to recall the role that the United Nations has played in the recognition of human rights. When the United Nations understood the realities of the world, around 1945-1948, it spoke of civil and political rights and recognized them as the first generation of human rights. Then came the rights linked to the economy and to labour, and it created the second generation of human rights, economic and social rights, if we can call them that. Then, speaking of ecology, the United Nations recognized the challenges of the moment and created the third generation of human rights, the collective rights linked to the environment. But here it is clear that, concretely, the United Nations has not yet recognized digital rights as part of a fourth generation of human rights. Why do I say this? Because when we read our countries’ constitutions, we see that a great deal of work has been done, and many countries have taken up the work of the United Nations as fundamental rights. It is clear that if we speak of civil and political rights, we know what we are talking about. If we speak of economic rights, we know what we are talking about. 
But when it comes to digital rights, we talk about human rights, we talk about legislating, but in reality we have not yet sat down around a table to say that digital rights are indeed part of a fourth generation of human rights. If the United Nations takes that step, you will see that many countries will follow, and perhaps it will even be enshrined in our constitutions and our laws. Because, as I said earlier, legislating is good, it is an aspiration, but you need the technique to legislate. And it is clear that our contribution as parliamentarians is that we need a common element, a common instrument, as we have for the three generations of human rights, for the generation of digital rights. Thank you.


Sorina Teleanu: Thank you. We’ll get back to that. And the second point, please.


Olga Reis: Thank you so much. My name is Olga Reis and I represent the private sector here. I work at Google, and I cover the AI opportunity agenda for the region of emerging markets. First of all, thank you so much for this insightful conversation. I wanted to highlight a couple of points and then maybe react to some of the points that were raised during the panel discussion. We at Google look at technologies such as AI as truly transformative, a once-in-a-generation opportunity, especially for the region of emerging markets. But we also recognize that such technology should be developed and deployed boldly, responsibly, and together with the international community, the public sector, civil society, and our users. And one of the ways I believe this technology should be used in the context of content regulation, because there was a great deal of discussion around content regulation during the session, is that we, as the company that manages the YouTube platform, utilize AI to tackle bad content on our platform. I just pulled out some statistics, as we were talking about content moderation: in the first quarter of 2025, so January, February, and March of this year, we took down 8.6 million videos on YouTube that did not comply either with our own policies or with the respective policies of the markets where we operate. And 55% of those 8.6 million videos were removed before they were watched at all, meaning that they were uploaded by content creators but we detected them automatically before publication. A further 27% of this bad content was removed with fewer than 10 views. That just shows the scale at which we can actually use a technology like AI to tackle bad content on our platforms. 
However, I wanted to use this opportunity not just to talk about content, although I felt compelled to bring up those statistics, but about the need for companies like Google to work on capacity building for public officials and parliamentarians, including around the use of AI in their work, but also around how AI can contribute to driving economic growth, especially in emerging markets. Two programs to highlight that we as a company are running. Next week, we are actually gathering a number of regulators from the MENA region in our London office to talk for three days about regulations and the challenges around regulation, including on AI, cloud, and content issues. This is something we have been doing for many years already, and we have similar programs around the world, especially in emerging markets. The second program I wanted to highlight is actually available for everyone and specifically built for public officials. It is a program called AI Campus, which we as a company built in cooperation with Apolitical, an NGO based out of the UK. It is built specifically to upskill public officials on AI issues. We already have seven courses available on different aspects of AI, and 500,000 officials have already gone through this online training. I would encourage and invite all public officials present in this room to make use of this content, and we will make sure that we actually update and develop it, because the technology does move and evolve very quickly. Thank you so much once again.


Sorina Teleanu: Thank you. And we have one more point. Okay, a few more points, if we can be quick; otherwise I will be taken down here.


Anne McCormick: Hello, I’m Anne McCormick from Ernst & Young, EY. Also private sector, but a different perspective. It is very important not to oversimplify the private sector: small and big enterprises, innovators, but also organizations. We work with clients across very different sectors, across almost all the countries in this room, and we see there is a need for clarity on what reliable AI is. Liability gets transferred to more and more economic actors, small and big, as we adopt, embed and deploy AI. So I would urge policymakers and legislators to look at the health and dynamism of their economy and consider the different facets of the private sector, not just the large tech companies that are in the headlines, but the similar and in some cases very different needs and interests of the economic players and companies that are adopting and embedding AI and who are increasingly concerned about AI governance. We see company leaders and board members, but also investors and insurance companies, asking: how do you know what the AI you are buying, whatever its brand, can do? How do you know its limits and its risks? Are you potentially liable? How are you going to deploy it with confidence? How are you going to make sure that your employees, your own clients, and your reputation as a company are protected? So it is important not to over-regulate, but it is important that the right mechanisms exist to encourage disclosure, to encourage transparency rather than the black box, and to encourage accountability through the life cycle of an AI, with independent oversight, possibly independent assurance or assessments, so that everybody can use this extraordinary technology with confidence and we get the best out of it. There are multiple economic voices; the private sector has many different facets, and I really want to emphasize this as adoption grows. Thank you very much.


Sorina Teleanu: Thank you. And there was one more point, if we can be very fast, please.


Amy Mitchell: Thank you so much, I’ll be fast. Amy Mitchell with the Center for News, Technology and Innovation in the States. I had a quick follow-up thought and question, if you have a second to respond, on the digital information space. It was great to hear the topics of freedom of expression, of thinking about the balance between safeguarding and maintaining freedom of expression, and media freedom specifically, in the new EU initiative being passed. I am curious about the thinking on how one puts definitional language around those things. As we know, public access to information is vast today, and the range of those producing journalism is moving from the very traditional space to all kinds of different producers, in some cases citizens themselves, who can be in need of that safeguard and protection of freedom. So I would be curious, as you are developing these acts, about the thinking around the definitional language, and also about safeguards on the enforcement side, so that such measures cannot later be misused by future government structures; as we know, governments can change over time inside a country, and we need to be sure these tools do not end up being used in a way that can harm. Thank you.


Sorina Teleanu: Thank you. I’m looking at the hosts I just say it and I feel asking if we can take one more minute per speaker to try to reflect on the Points. Okay. Thank you so much Let’s try to reflect on some of these issues


Franco Metaza: [Speaking in Spanish] Well, two remarks for the Google executive. What’s your name? Olga. One good and one bad. I’m going to speak in Spanish, sorry. One good and one bad. I think the YouTube Kids experience is a very virtuous one, because it creates a distinct digital ecosystem. Social networks do not have that. It is as if, in real life, we allowed children to enter a casino: when a child enters Instagram, it is like a child entering a casino or a nightclub. The YouTube Kids experience of creating a distinct ecosystem with a completely controlled algorithm seems very successful to me, and it is something we should be able to demand from the rest of the companies. And now a criticism. I don’t know if you remember, Olga, but in August 2020, in Google’s knowledge panel in Argentina, when you searched the name of the vice president at the time, Cristina Fernández de Kirchner, it read “thief of the Argentine nation.” Thief. That appeared in the knowledge panel, and there was a lawsuit about it. Well, if this kind of thing happens to the vice president of the country, and happens with a company as big as Google, imagine how unprotected everyone further down is. That is what has shaped the common sense of Argentine society, to the point that today President Milei has dared to put her in prison while she is the main opposition leader. Thank you.


Ashley Sauls: I think, just on the AI element, from an African perspective there should also be a balance between the well-being of people and profits. My country is known for an apartheid history, and we have realized that in AI training there is still the risk of what I would call digital apartheid. This is quite a unique danger for us, and there is no attention given, especially by the private sector, to the importance of this in terms of profiling based on historic racial separation. To make it practical: I look around the room and I am probably the darkest person in the room. If AI differentiates even on that level, applying a darker profiling to someone who looks like me, and if that is repeated, then we cannot rejoice in a digital future. We should be concerned that that digital future is repeating an ugly history. Thank you.


Sorina Teleanu: Thank you as well


Tsvetelina Penkova: Just very briefly, on the remark about a fourth generation of rights and the need for a common approach: I absolutely agree, because I actually believe that this is probably going to resolve the enforcement issues. You made a very valid point that it is very difficult to enforce something which is not defined or not well understood. So, point taken.


Sorina Teleanu: Just to add quickly on the point on digital rights: it is true that we do not really have a new UN instrument dealing with them, but there are quite a few Human Rights Council resolutions, for instance, which say clearly that the same rights people have offline must also be protected online. So at least we can use that as a starting point. We have taken 15 more minutes of everyone’s time. Thank you so much to all of you for contributing and for still being in the room. We hope this has been useful as the last session of the parliamentary track. I know something will still happen in this room, so please do not leave; my colleagues will tell you a bit more. Thank you so much, and good luck with the rest of the IGF.


A

Anusha Rahman Ahmad Khan

Speech speed

141 words per minute

Speech length

902 words

Speech time

382 seconds

Social media platforms prioritize revenue over cultural sensitivity and fail to respond adequately to government requests for content removal, leading to serious harm including suicide cases

Explanation

Social media platforms treat government requests for content removal as revenue-curbing measures rather than legitimate regulatory concerns. They decide independently what content to remove or keep, showing insensitivity to local cultural contexts where even minor aspersions can have devastating consequences.


Evidence

Example of a university student being harassed with AI-generated fake content that looked real – by the time content was removed, her life was destroyed. Dozens of examples of girls committing suicide due to online harassment. In Pakistan, even an aspersion on a girl can be enough to kill her emotionally if not physically.


Major discussion point

Regulation of Social Media Platforms and Content Moderation


Topics

Human rights | Sociocultural | Legal and regulatory


Agreed with

– Franco Metaza
– Olga Reis

Agreed on

Social media platforms need to take greater responsibility for content moderation and harm prevention


Disagreed with

– Tsvetelina Penkova

Disagreed on

Decision-making authority for content removal


Pakistan’s Prevention of Electronic Crimes Act (2016) was developed through collaborative effort to protect vulnerable segments while upholding freedom of expression

Explanation

The law took two years to develop through extensive consultation with parliamentarians, civil society, NGOs, independent groups, and media. It specifically targets social media misinformation, propaganda, and fake news rather than traditional media, aiming to protect children, women, girls, and all vulnerable segments including men and boys.


Evidence

Law passed in 2016 after starting work in 2014. Described as a consensus document representing 240 million people through their MPs in both National Assembly and Senate. Focus on protecting natural person dignity and criminalizing harassment leading to vulnerable situations.


Major discussion point

Legislative Frameworks and Cybercrime Laws


Topics

Legal and regulatory | Human rights | Cybersecurity


Parliamentarians should create joint strategies to collectively address social media platforms and protect vulnerable citizens globally

Explanation

Individual government requests to social media platforms are often ignored or inadequately addressed. A collective approach by parliamentarians across countries would have more leverage to ensure that offline rights are equally protected online for both vulnerable and non-vulnerable groups.


Evidence

Personal experience as former technology minister showing that social media platforms continue to ignore government requests and treat them as if they were ‘two kilometers above the ground, not impacted by the law.’


Major discussion point

International Cooperation and Multi-stakeholder Approaches


Topics

Legal and regulatory | Human rights | Sociocultural


Agreed with

– Ashley Sauls
– Olga Reis
– Sorina Teleanu

Agreed on

Multi-stakeholder approaches are necessary for effective digital governance


F

Franco Metaza

Speech speed

155 words per minute

Speech length

1301 words

Speech time

501 seconds

Companies like Google can do more than they are currently doing to tackle harmful content, as they have the budget and resources but are not making sufficient efforts

Explanation

The Mercosur parliament has reached consensus that technology companies possess sufficient financial resources and technical capabilities to address harmful content and fake news more effectively than their current efforts demonstrate. There is agreement that these companies should increase their commitment to content moderation.


Evidence

Consensus reached within Mercosur parliament (representing Brazil, Argentina, Uruguay, Paraguay, and Bolivia with 100 parliamentarians) that companies have the budget and money but are not making enough efforts.


Major discussion point

Regulation of Social Media Platforms and Content Moderation


Topics

Legal and regulatory | Sociocultural | Economic


Agreed with

– Anusha Rahman Ahmad Khan
– Olga Reis

Agreed on

Social media platforms need to take greater responsibility for content moderation and harm prevention


Content targeting young girls promotes extreme dieting, impossible body standards, and leads to eating disorders, with 13-14 year olds seeking aesthetic surgeries

Explanation

Social media platforms, particularly Instagram and TikTok, bombard young users with harmful content promoting unrealistic body images through AI-generated or real images of extremely thin bodies. This content includes advice for extreme diets and advertisements for surgeries, leading to serious eating disorders among young girls.


Evidence

Tests conducted by registering as a 13-year-old girl on social media showed bombardment with images of extremely skinny bodies (some AI-generated), extreme diet advice, and surgery advertisements. In Brazil, 13-14 year old girls have started consulting doctors independently about aesthetic surgeries.


Major discussion point

Harmful Content and Its Impact on Vulnerable Groups


Topics

Human rights | Sociocultural | Cybersecurity


Disinformation can lead to real-world violence, as seen with assassination attempts on political leaders fueled by fake news and hate messages

Explanation

Systematic circulation of fake news and hate messages on social networks can escalate to physical violence. The spread of disinformation creates an environment where individuals consume so much hateful content that they may act violently against targeted individuals.


Evidence

Example from Argentina where fake news about corruption and hate messages against Cristina Fernández de Kirchner went viral on networks. A person who consumed these hate messages appeared at her house and shot at her head – fortunately the bullet did not fire. She has since been arrested by the current president amid confusion of fake news and disinformation.


Major discussion point

Harmful Content and Its Impact on Vulnerable Groups


Topics

Cybersecurity | Human rights | Sociocultural


Regulation through democratic parliaments representing all social and political expressions will never go against freedom, similar to traffic laws for vehicles

Explanation

When regulations are created in parliaments where all social statements and political expressions are represented, they express the will of the majority and therefore cannot be against freedom. Just as society created speed limits and age restrictions for driving when cars were introduced, similar reasonable regulations are needed for social media.


Evidence

Analogy provided: when motor vehicles appeared in society, speed limits were established and children were prohibited from driving – this was not against anyone’s freedom. Comparison made that permanent scrolling on social media is as harmful or more harmful than driving at full speed without knowing what lies ahead.


Major discussion point

Balancing Freedom of Expression with Safety and Protection


Topics

Legal and regulatory | Human rights | Sociocultural


Disagreed with

– Yogesh Bhattarai

Disagreed on

Level of regulation needed for digital platforms


Youth participation should be transversal across all policy-making rather than segregated into youth-only discussions

Explanation

Rather than creating separate spaces where young people only discuss among themselves and solve youth-specific problems, young people should be integrated across all political construction in society. This transversal approach ensures youth perspectives are included in all policy areas rather than being isolated.


Evidence

Argentina allows voting from age 16 and has a large number of young parliamentarians in both chambers. Every time they make a law and listen to all parties, they include young people in the consultation process.


Major discussion point

Youth Engagement in Digital Policymaking


Topics

Human rights | Legal and regulatory | Sociocultural


Agreed with

– Tsvetelina Penkova
– Yogesh Bhattarai
– Ashley Sauls
– Bibek Silwal

Agreed on

Youth engagement in digital policymaking should be meaningful and integrated


Disagreed with

– Bibek Silwal

Disagreed on

Approach to youth engagement in policymaking


YouTube Kids provides a successful model of controlled digital ecosystem for children that other platforms should emulate

Explanation

YouTube Kids creates a separate digital ecosystem with a completely controlled algorithm specifically designed for children, unlike other social media platforms that allow children into adult-oriented spaces. This approach should be demanded from other companies as it provides appropriate protection for minors.


Evidence

Comparison made that allowing children on regular Instagram is like allowing a child to enter a casino or nightclub, while YouTube Kids provides a virtuous experience with a controlled algorithm creating a distinct ecosystem for children.


Major discussion point

Private Sector Responsibility and AI Governance


Topics

Human rights | Sociocultural | Cybersecurity


O

Olga Reis

Speech speed

141 words per minute

Speech length

592 words

Speech time

250 seconds

AI technology is being used effectively for content moderation, with 8.6 million videos removed from YouTube in Q1 2025, 55% before being viewed

Explanation

Google utilizes AI technology to automatically detect and remove content that violates policies before it can cause harm. The majority of problematic content is identified and removed before users can view it, demonstrating the effectiveness of AI-powered content moderation systems.


Evidence

In Q1 2025 (January-March), 8.6 million videos were removed from YouTube for policy violations. 55% were removed before being watched at all, and an additional 27% were removed with less than 10 views, showing early detection capabilities.


Major discussion point

Regulation of Social Media Platforms and Content Moderation


Topics

Cybersecurity | Legal and regulatory | Sociocultural


Agreed with

– Anusha Rahman Ahmad Khan
– Franco Metaza

Agreed on

Social media platforms need to take greater responsibility for content moderation and harm prevention


AI development should be bold, responsible, and collaborative with international community, public sector, and civil society

Explanation

AI represents a transformative once-in-a-generation opportunity, especially for emerging markets, but must be developed and deployed through collaboration with multiple stakeholders. This approach ensures that the technology benefits society while addressing potential risks and concerns.


Evidence

Google’s approach to AI development emphasizes working together with international community, public sector, civil society, and users. Specific mention of AI as transformative technology with particular opportunities for emerging markets.


Major discussion point

Private Sector Responsibility and AI Governance


Topics

Development | Legal and regulatory | Economic


Agreed with

– Anusha Rahman Ahmad Khan
– Ashley Sauls
– Sorina Teleanu

Agreed on

Multi-stakeholder approaches are necessary for effective digital governance


Y

Yogesh Bhattarai

Speech speed

108 words per minute

Speech length

831 words

Speech time

461 seconds

Digital platforms should be regulated but not controlled, requiring cooperation and collaboration rather than strict control

Explanation

Nepal’s approach emphasizes that digital platforms need regulatory frameworks that provide guidance and boundaries without imposing excessive control that could stifle innovation or freedom. The focus should be on collaborative governance that strengthens democratic institutions while ensuring platforms serve public interests.


Evidence

Nepal’s Constitution guarantees freedom of speech and expression, with Article 19 establishing right to information and communication as fundamental rights. Parliament will not accept any law that contradicts constitutional provisions. Nepal has National Information Commission and Press Council Nepal as independent oversight agencies.


Major discussion point

Regulation of Social Media Platforms and Content Moderation


Topics

Legal and regulatory | Human rights | Infrastructure


Agreed with

– Tsvetelina Penkova
– Ashley Sauls
– Sorina Teleanu

Agreed on

Balance between freedom of expression and safety protection is essential and achievable


Disagreed with

– Franco Metaza

Disagreed on

Level of regulation needed for digital platforms


Nepal is currently discussing Social Media Bill and Information Technology Bill while ensuring constitutional compliance with freedom of speech

Explanation

Nepal is developing legislation to address digital challenges while maintaining strict adherence to constitutional protections for freedom of expression and communication rights. The legislative process involves extensive stakeholder consultation to ensure balanced approaches that protect both safety and rights.


Evidence

Social Media Bill and Information Technology Bill currently under discussion. Government has requested suggestions from different stakeholders. Nepal has National Information Commission and Press Council Nepal as independent oversight agencies. MPs participate in programs organized by Civil Society Organizations discussing right to information and communication.


Major discussion point

Legislative Frameworks and Cybercrime Laws


Topics

Legal and regulatory | Human rights | Sociocultural


Constitutional guarantees of freedom of speech must be upheld while addressing legitimate concerns about harmful content

Explanation

Nepal’s legislative approach prioritizes constitutional protections for freedom of speech and expression while acknowledging the need to address misinformation, disinformation, and content that can divide society along caste, religious, racial, and gender lines. The challenge is creating effective governance without compromising fundamental rights.


Evidence

Constitution of Nepal guarantees freedom of speech and expression with Article 19 establishing right to information and communication as fundamental rights. Concerns identified about misinformation and disinformation spreading confusion about caste, religious, racism, gender, and professions, potentially dividing society and impacting national security.


Major discussion point

Balancing Freedom of Expression with Safety and Protection


Topics

Human rights | Legal and regulatory | Sociocultural


Nepal engages youth through national and youth internet governance forums in legislative processes

Explanation

Nepal has established both national and youth-specific internet governance forums that provide platforms for young people to participate in policy discussions and legislative processes. Parliamentarians actively engage with these forums to ensure youth perspectives are incorporated into law-making.


Evidence

Nepal has a national internet governance forum and a youth internet governance forum. Parliamentarians are engaged with these forums to make laws and other legislative processes. Nepal is described as having 125 languages and 125 different castes, making youth engagement particularly important for inclusive policy-making.


Major discussion point

Youth Engagement in Digital Policymaking


Topics

Human rights | Legal and regulatory | Development


Agreed with

– Franco Metaza
– Tsvetelina Penkova
– Ashley Sauls
– Bibek Silwal

Agreed on

Youth engagement in digital policymaking should be meaningful and integrated


T

Tsvetelina Penkova

Speech speed

137 words per minute

Speech length

1437 words

Speech time

628 seconds

The Digital Services Act serves as flagship EU legislation tackling harmful content, disinformation, and protecting minors and vulnerable groups

Explanation

The DSA represents the EU’s comprehensive approach to regulating digital spaces, addressing multiple challenges including protection of minors and vulnerable groups, cyber violence, harmful content, and disinformation. It serves as the cornerstone of EU digital regulation covering 27 member states.


Evidence

DSA described as flagship EU legislation representing 27 member states. Specifically tackles protecting minors and vulnerable groups, cyber violence, harmful content and disinformation – issues mentioned throughout the day’s discussions.


Major discussion point

Regulation of Social Media Platforms and Content Moderation


Topics

Legal and regulatory | Human rights | Sociocultural


EU has comprehensive digital legislation including DSA, DMA, GDPR, and AI Act working together to create protective frameworks

Explanation

The EU has developed an interconnected system of digital laws where each piece of legislation complements others to create comprehensive protection. The Digital Markets Act ensures fair competition, GDPR protects data rights, and the AI Act provides safeguards for AI development, all working alongside the DSA.


Evidence

Digital Markets Act promotes greater choice for consumers and interoperability. GDPR reinforces individuals’ control over their data while promoting trustworthy data sharing. AI Act described as protecting people and ensuring enough time for consumers to understand risks before wide technology deployment. European Democracy Action Plan and Media Freedom Act address media freedom and pluralism.


Major discussion point

Legislative Frameworks and Cybercrime Laws


Topics

Legal and regulatory | Human rights | Economic


Digital transition must be human-centric while protecting citizens’ rights, requiring balance between innovation and protection

Explanation

The EU’s approach prioritizes human-centric digital transformation that protects citizens’ rights while allowing for innovation and growth. This involves ensuring that technological advancement serves human needs rather than compromising fundamental rights and protections.


Evidence

Four key priorities identified: human-centric digital transformation, combating online hate and disinformation, digital literacy and resilience, and children online safety. EU strategy focuses significantly on digital education as essential for successful legislation and enforcement.


Major discussion point

Balancing Freedom of Expression with Safety and Protection


Topics

Human rights | Development | Legal and regulatory


Agreed with

– Yogesh Bhattarai
– Ashley Sauls
– Sorina Teleanu

Agreed on

Balance between freedom of expression and safety protection is essential and achievable


Young people understand that the digital economy will shrink without proper regulation and want to be actively involved in policy conversations

Explanation

Youth recognize that lack of appropriate regulation will harm the digital economy and actively seek participation in policy-making processes. They bring valuable insights about new trends and are ahead of many legislators in understanding digital developments, making their involvement essential rather than optional.


Evidence

Key messages from youth consultations included that the digital economy will shrink without regulation, children require protection without limiting rights, and bloggers are not restricted in content publishing. Young generation is described as being ahead of legislators on new trends and wanting to be part of conversations.


Major discussion point

Youth Engagement in Digital Policymaking


Topics

Economic | Human rights | Development


Agreed with

– Franco Metaza
– Yogesh Bhattarai
– Ashley Sauls
– Bibek Silwal

Agreed on

Youth engagement in digital policymaking should be meaningful and integrated


Cyber violence decisions should be made by judges rather than governments to prevent abuse of regulatory power

Explanation

To prevent government overreach and protect against potential abuse of cybercrime laws, the EU framework requires that decisions about illegal content removal be made by judicial authorities rather than government officials. This provides an important check on executive power and protects democratic freedoms.


Evidence

In the DSA framework, illegal content must be taken down without delay once detected, but this decision has to be taken by a judge, not by the government. This addresses concerns about cybercrime laws being misused against journalists and opposition voices.


Major discussion point

Balancing Freedom of Expression with Safety and Protection


Topics

Legal and regulatory | Human rights | Cybersecurity


Disagreed with

– Anusha Rahman Ahmad Khan

Disagreed on

Decision-making authority for content removal


A

Ashley Sauls

Speech speed

151 words per minute

Speech length

1066 words

Speech time

423 seconds

South Africa has enacted multiple acts including the Protection of Personal Information Act, the Cyber Crimes Act, and the Films and Publications Act

Explanation

South Africa has developed a comprehensive legal framework to address digital governance challenges through multiple pieces of legislation that regulate different aspects of digital activity. These laws work together to provide protection for personal information, address cybercrime, and regulate digital publications.


Evidence

Specific mention of the Protection of Personal Information Act, the Cyber Crimes Act, and the Films and Publications Act as enacted legislation regulating digital platforms and activities in South Africa.


Major discussion point

Legislative Frameworks and Cybercrime Laws


Topics

Legal and regulatory | Human rights | Cybersecurity


Disinformation about South Africa led to international consequences, including cancelled educational exchanges and reinforced negative stereotypes

Explanation

False information spread online about South Africa, including claims about white minority genocide and stereotypes about racial groups, influenced international decisions and relationships. This demonstrates how disinformation can have real-world diplomatic and social consequences beyond national borders.


Evidence

US executive decision was largely based on online disinformation about white minority genocide in South Africa. This fueled narratives about the ‘coloured’ racial group as violent and gangster-ridden. A rugby match between Atlantis Secondary School and Loughborough Grammar School from the UK was cancelled because parents feared for safety based on what was said in the White House about South Africa.


Major discussion point

Harmful Content and Its Impact on Vulnerable Groups


Topics

Sociocultural | Human rights | Cybersecurity


Multi-stakeholder approach engaging government, civil society, private sector, and public is essential for inclusive digital economy

Explanation

South Africa recognizes that effective digital governance requires collaboration between all sectors of society rather than top-down government regulation alone. This collaborative model ensures that benefits of digital transformation are equitably distributed and innovation addresses local challenges.


Evidence

South Africa has adopted a multi-stakeholder approach engaging government, civil society, private sector, and public. This collaborative model is described as essential for fostering inclusive digital economy where benefits are equitably distributed and innovation addresses local challenges.


Major discussion point

International Cooperation and Multi-stakeholder Approaches


Topics

Development | Economic | Legal and regulatory


Agreed with

– Anusha Rahman Ahmad Khan
– Olga Reis
– Sorina Teleanu

Agreed on

Multi-stakeholder approaches are necessary for effective digital governance


Policies must safeguard both security and fundamental human rights without infringing on privacy and freedom of expression

Explanation

South Africa’s approach to digital governance emphasizes that security measures and human rights protection are not mutually exclusive. The country is committed to creating policies that enhance cybersecurity while maintaining strong protections for privacy and freedom of expression.


Evidence

South Africa is committed to aligning digital policies with international standards, promoting balanced approach that safeguards both security and fundamental human rights. Emphasis on proactive rather than reactive approach to digital governance that is inclusive, secure and respects rights of all citizens.


Major discussion point

Balancing Freedom of Expression with Safety and Protection


Topics

Human rights | Legal and regulatory | Cybersecurity


Agreed with

– Yogesh Bhattarai
– Tsvetelina Penkova
– Sorina Teleanu

Agreed on

Balance between freedom of expression and safety protection is essential and achievable


South Africa is beginning to emphasize youth involvement more with the new government of national unity structure

Explanation

While South Africa previously had limited youth involvement in digital policy, the shift from single-party majority government to a coalition government of national unity has created opportunities for different approaches that include youth voices. This change is already showing practical results in internet governance leadership.


Evidence

South Africa had a one-party majority government since 1994 and now has a government of national unity (a coalition government), with different approaches coming together. Both the chairperson and deputy chairperson of the South African IGF are young people present at the conference. This is the first time parliamentarians are participating in the IGF because of this different approach that includes youth voices.


Major discussion point

Youth Engagement in Digital Policymaking


Topics

Human rights | Development | Legal and regulatory


Agreed with

– Franco Metaza
– Tsvetelina Penkova
– Yogesh Bhattarai
– Bibek Silwal

Agreed on

Youth engagement in digital policymaking should be meaningful and integrated


There should be balance between people’s well-being and profits, with attention to preventing digital apartheid and racial profiling

Explanation

From an African perspective, AI development must consider the risk of perpetuating historical discrimination through digital means. South Africa’s apartheid history makes it particularly sensitive to the possibility that AI training could embed racial biases that create new forms of digital discrimination.


Evidence

South Africa’s apartheid history creates unique concerns about AI training carrying risks of digital apartheid. An example was given of AI potentially profiling people based on historic racial separation, with concern that darker-skinned individuals might face discriminatory profiling. The warning was that if AI repeats this ugly history, we cannot rejoice in a digital future.


Major discussion point

Private Sector Responsibility and AI Governance


Topics

Human rights | Development | Sociocultural


R

Raoul Danniel Abellar Manuel

Speech speed

163 words per minute

Speech length

139 words

Speech time

51 seconds

The Philippines’ Cybercrime Prevention Act (2012) contains problematic cyber libel provisions that have been misused against journalists and teachers

Explanation

The Philippines’ cybercrime law includes cyber libel provisions that were anticipated to be problematic and have indeed been abused by repressive leaders to target journalists and teachers who express opinions contradicting the government. This demonstrates how cybercrime laws can be weaponized against legitimate expression.


Evidence

The Cybercrime Prevention Act was passed in 2012 with a cyber libel provision that was flagged years ago as potentially problematic. It has been used by abusive or repressive leaders to go after journalists and even teachers who held opinions contradicting the government. The Philippines is now discussing a possible review and amendments to the law.


Major discussion point

Legislative Frameworks and Cybercrime Laws


Topics

Legal and regulatory | Human rights | Cybersecurity


B

Bibek Silwal

Speech speed

222 words per minute

Speech length

185 words

Speech time

50 seconds

Youth serve as positive catalysts in policy implementation and should be involved from initial policymaking through public outreach

Explanation

Youth involvement amplifies the impact of policymaking processes, whether in initial policy development or implementation phases. Young people serve as effective bridges to reach end-mile communities and enhance the overall effectiveness of policy initiatives through their engagement and outreach capabilities.


Evidence

Every process where youth are involved amplifies the impact of policymaking, whether in implementation or initial policymaking process. Youth are described as positive catalysts for reaching out to the end mile and enhancing policy effectiveness.


Major discussion point

Youth Engagement in Digital Policymaking


Topics

Development | Human rights | Legal and regulatory


Agreed with

– Franco Metaza
– Tsvetelina Penkova
– Yogesh Bhattarai
– Ashley Sauls

Agreed on

Youth engagement in digital policymaking should be meaningful and integrated


Disagreed with

– Franco Metaza

Disagreed on

Approach to youth engagement in policymaking


A

Audience

Speech speed

158 words per minute

Speech length

442 words

Speech time

166 seconds

UN should recognize digital rights as the fourth generation of human rights to provide common framework for legislation

Explanation

The UN has historically recognized three generations of human rights (civil and political; economic and social; collective environmental rights) which have been incorporated into national constitutions. Digital rights should be formally recognized as a fourth generation to provide legislators with common technical frameworks and instruments for creating effective digital legislation.


Evidence

UN recognized civil and political rights as first generation (1945-1948), economic and social rights as second generation, and collective environmental rights as third generation. Many countries have incorporated these into their constitutions based on UN frameworks. Digital rights lack this formal recognition, making it difficult for legislators to have common technical instruments for lawmaking.


Major discussion point

International Cooperation and Multi-stakeholder Approaches


Topics

Human rights | Legal and regulatory | Development


A

Anne McCormick

Speech speed

141 words per minute

Speech length

318 words

Speech time

135 seconds

Private sector needs clarity on reliable AI and liability frameworks as AI adoption spreads across different economic actors

Explanation

As AI technology becomes embedded across various sectors and company sizes, there is growing concern among business leaders, board members, investors, and insurance companies about AI governance, liability, and risk management. Small and large enterprises alike need clear frameworks to deploy AI with confidence while managing potential risks to their operations and reputation.


Evidence

Ernst & Young works with clients across different sectors and countries, observing need for clarity on reliable AI. Company leaders, board members, investors and insurance companies are asking about AI limits, risks, and potential liability. Concerns about deploying AI while maintaining employee safety, client relationships, and company reputation.


Major discussion point

Private Sector Responsibility and AI Governance


Topics

Economic | Legal and regulatory | Development


Independent oversight and transparency mechanisms are needed to ensure accountability throughout AI lifecycle

Explanation

Rather than over-regulation or under-regulation, there should be appropriate mechanisms that encourage disclosure, transparency, and accountability for AI systems throughout their development and deployment lifecycle. This includes independent oversight and assessment to ensure all economic actors can use AI technology with confidence.


Evidence

Emphasis on not over-regulating or under-regulating, but on having the right mechanisms to encourage disclosure and transparency rather than black-box approaches. Need for independent oversight, possibly independent assurance or assessments, so everyone can use AI technology with confidence and get the best outcomes.


Major discussion point

Private Sector Responsibility and AI Governance


Topics

Legal and regulatory | Economic | Development


A

Amy Mitchell

Speech speed

173 words per minute

Speech length

223 words

Speech time

77 seconds

S

Sorina Teleanu

Speech speed

224 words per minute

Speech length

1102 words

Speech time

294 seconds

Enforcement of digital laws is challenging and requires empowered national authorities to put legislation into practice

Explanation

Having laws in place is only the first step – the real challenge lies in ensuring that national authorities have the capacity and resources to actually implement and enforce these digital regulations effectively. This is a common challenge across countries, including in the EU region.


Evidence

Reference to challenges in Romania and Bulgaria where it’s not easy to put digital legislation into practice, despite having comprehensive EU frameworks like DSA, DMA, and GDPR in place.


Major discussion point

Legislative Frameworks and Cybercrime Laws


Topics

Legal and regulatory | Development


Critical thinking and digital literacy education should benefit all users, not just young people

Explanation

While there is significant focus on educating young users about digital technologies, all users across age groups would benefit from improved critical thinking skills when interacting with digital platforms and content. Digital literacy should be a universal priority.


Evidence

Acknowledgment of excellent points on literacy, capacity building, and education for building critical thinking in young users, with extension that ‘we would all benefit from a bit more critical thinking when we interact with digital technologies.’


Major discussion point

Youth Engagement in Digital Policymaking


Topics

Development | Human rights | Sociocultural


Safety and security protection can coexist with human rights protection without requiring trade-offs

Explanation

There is a false dichotomy in thinking that protecting safety and security requires giving up human rights protections, or vice versa. Effective digital governance should achieve both objectives simultaneously through balanced approaches.


Evidence

Highlighting speaker comments about finding balance between protecting safety and security while ensuring protection of human rights, emphasizing ‘we don’t have to give one up to protect the other and the other way around.’


Major discussion point

Balancing Freedom of Expression with Safety and Protection


Topics

Human rights | Legal and regulatory | Cybersecurity


Agreed with

– Yogesh Bhattarai
– Tsvetelina Penkova
– Ashley Sauls

Agreed on

Balance between freedom of expression and safety protection is essential and achievable


Technology platforms should engage in discussions with legislators during the law-making process

Explanation

As countries develop digital legislation, it’s important to have dialogue and engagement with technology platforms to ensure that regulations are practical and effective. This interaction helps create more informed and implementable laws.


Evidence

Question posed to speakers about ‘how are you interacting with technology platforms as you’re working on this legislation in the country? Do you have any discussions with them, how is the relation being?’


Major discussion point

International Cooperation and Multi-stakeholder Approaches


Topics

Legal and regulatory | Economic


Agreed with

– Anusha Rahman Ahmad Khan
– Ashley Sauls
– Olga Reis

Agreed on

Multi-stakeholder approaches are necessary for effective digital governance


Agreements

Agreement points

Social media platforms need to take greater responsibility for content moderation and harm prevention

Speakers

– Anusha Rahman Ahmad Khan
– Franco Metaza
– Olga Reis

Arguments

Social media platforms prioritize revenue over cultural sensitivity and fail to respond adequately to government requests for content removal, leading to serious harm including suicide cases


Companies like Google can do more than they are currently doing to tackle harmful content, as they have the budget and resources but are not making sufficient efforts


AI technology is being used effectively for content moderation, with 8.6 million videos removed from YouTube in Q1 2025, 55% before being viewed


Summary

All speakers agree that social media platforms have both the capability and responsibility to do more in addressing harmful content, though they approach it from different perspectives – regulatory demands, parliamentary consensus, and industry acknowledgment of current efforts.


Topics

Legal and regulatory | Cybersecurity | Sociocultural


Balance between freedom of expression and safety protection is essential and achievable

Speakers

– Yogesh Bhattarai
– Tsvetelina Penkova
– Ashley Sauls
– Sorina Teleanu

Arguments

Digital platforms should be regulated but not controlled, requiring cooperation and collaboration rather than strict control


Digital transition must be human-centric while protecting citizens’ rights, requiring balance between innovation and protection


Policies must safeguard both security and fundamental human rights without infringing on privacy and freedom of expression


Safety and security protection can coexist with human rights protection without requiring trade-offs


Summary

Speakers consistently emphasize that protecting safety and security does not require sacrificing freedom of expression or human rights, and that balanced regulatory approaches can achieve both objectives simultaneously.


Topics

Human rights | Legal and regulatory | Cybersecurity


Youth engagement in digital policymaking should be meaningful and integrated

Speakers

– Franco Metaza
– Tsvetelina Penkova
– Yogesh Bhattarai
– Ashley Sauls
– Bibek Silwal

Arguments

Youth participation should be transversal across all policy-making rather than segregated into youth-only discussions


Young people understand that the digital economy will shrink without proper regulation and want to be actively involved in policy conversations


Nepal engages youth through national and youth internet governance forums in legislative processes


South Africa is beginning to emphasize youth involvement more with the new government of national unity structure


Youth serve as positive catalysts in policy implementation and should be involved from initial policymaking through public outreach


Summary

All speakers agree that youth should be meaningfully integrated into digital policymaking processes rather than marginalized, recognizing their unique insights and catalytic role in policy implementation.


Topics

Human rights | Development | Legal and regulatory


Multi-stakeholder approaches are necessary for effective digital governance

Speakers

– Anusha Rahman Ahmad Khan
– Ashley Sauls
– Olga Reis
– Sorina Teleanu

Arguments

Parliamentarians should create joint strategies to collectively address social media platforms and protect vulnerable citizens globally


Multi-stakeholder approach engaging government, civil society, private sector, and public is essential for inclusive digital economy


AI development should be bold, responsible, and collaborative with international community, public sector, and civil society


Technology platforms should engage in discussions with legislators during the law-making process


Summary

Speakers consistently advocate for collaborative approaches involving multiple stakeholders rather than unilateral action by any single actor, recognizing the complexity of digital governance challenges.


Topics

Legal and regulatory | Development | Economic


Similar viewpoints

Both speakers from developing countries (Pakistan and Argentina) share concerns about social media platforms’ inadequate response to harmful content that leads to real-world violence and harm, particularly affecting vulnerable populations.

Speakers

– Anusha Rahman Ahmad Khan
– Franco Metaza

Arguments

Social media platforms prioritize revenue over cultural sensitivity and fail to respond adequately to government requests for content removal, leading to serious harm including suicide cases


Disinformation can lead to real-world violence, as seen with assassination attempts on political leaders fueled by fake news and hate messages


Topics

Cybersecurity | Human rights | Sociocultural


Both speakers recognize the risk of cybercrime laws being abused by governments and emphasize the need for judicial oversight to prevent misuse against legitimate expression and press freedom.

Speakers

– Tsvetelina Penkova
– Raoul Danniel Abellar Manuel

Arguments

Cyber violence decisions should be made by judges rather than governments to prevent abuse of regulatory power


The Philippines’ Cybercrime Prevention Act (2012) contains problematic cyber libel provisions that have been misused against journalists and teachers


Topics

Legal and regulatory | Human rights | Cybersecurity


Both speakers emphasize the need for responsible AI development that considers broader societal impacts beyond profit motives, with attention to equity and clear governance frameworks.

Speakers

– Ashley Sauls
– Anne McCormick

Arguments

There should be balance between people’s well-being and profits, with attention to preventing digital apartheid and racial profiling


Private sector needs clarity on reliable AI and liability frameworks as AI adoption spreads across different economic actors


Topics

Human rights | Economic | Development


Unexpected consensus

Private sector acknowledgment of need for greater responsibility

Speakers

– Franco Metaza
– Olga Reis

Arguments

Companies like Google can do more than they are currently doing to tackle harmful content, as they have the budget and resources but are not making sufficient efforts


AI technology is being used effectively for content moderation, with 8.6 million videos removed from YouTube in Q1 2025, 55% before being viewed


Explanation

It’s unexpected to see a Google representative (Olga Reis) essentially agreeing with parliamentary criticism by acknowledging current efforts while implicitly accepting that more can be done, rather than defending current practices as sufficient.


Topics

Legal and regulatory | Cybersecurity | Economic


Recognition of digital rights as fundamental human rights requiring formal framework

Speakers

– Audience
– Tsvetelina Penkova

Arguments

UN should recognize digital rights as the fourth generation of human rights to provide common framework for legislation


Digital transition must be human-centric while protecting citizens’ rights, requiring balance between innovation and protection


Explanation

The consensus between a civil society representative calling for formal UN recognition of digital rights and an EU parliamentarian’s human-centric approach suggests growing recognition that digital rights need formal international frameworks similar to other human rights generations.


Topics

Human rights | Legal and regulatory | Development


Overall assessment

Summary

The discussion revealed strong consensus on several key areas: the need for greater platform responsibility, balanced approaches to regulation that protect both safety and freedom, meaningful youth engagement, and multi-stakeholder governance. Speakers consistently emphasized human-centric approaches to digital governance.


Consensus level

High level of consensus on fundamental principles, with differences mainly in implementation approaches rather than core objectives. This suggests potential for collaborative international action on digital governance frameworks, particularly around platform accountability, youth engagement, and balanced regulatory approaches that protect both safety and human rights.


Differences

Different viewpoints

Approach to youth engagement in policymaking

Speakers

– Franco Metaza
– Bibek Silwal

Arguments

Youth participation should be transversal across all policy-making rather than segregated into youth-only discussions


Youth serve as positive catalysts in policy implementation and should be involved from initial policymaking through public outreach


Summary

Franco Metaza argues against segregating youth into separate discussions, preferring transversal integration across all policy areas. Bibek Silwal advocates for dedicated youth involvement and specialized engagement processes.


Topics

Human rights | Legal and regulatory | Development


Level of regulation needed for digital platforms

Speakers

– Yogesh Bhattarai
– Franco Metaza

Arguments

Digital platforms should be regulated but not controlled, requiring cooperation and collaboration rather than strict control


Regulation through democratic parliaments representing all social and political expressions will never go against freedom, similar to traffic laws for vehicles


Summary

Bhattarai emphasizes light-touch regulation focusing on cooperation, while Metaza supports stronger parliamentary regulation comparing it to necessary traffic laws.


Topics

Legal and regulatory | Human rights


Decision-making authority for content removal

Speakers

– Anusha Rahman Ahmad Khan
– Tsvetelina Penkova

Arguments

Social media platforms prioritize revenue over cultural sensitivity and fail to respond adequately to government requests for content removal, leading to serious harm including suicide cases


Cyber violence decisions should be made by judges rather than governments to prevent abuse of regulatory power


Summary

Khan advocates for stronger government authority over content removal decisions, while Penkova insists judicial oversight is necessary to prevent government overreach.


Topics

Legal and regulatory | Human rights | Cybersecurity


Unexpected differences

Private sector engagement and criticism

Speakers

– Franco Metaza
– Olga Reis

Arguments

YouTube Kids provides a successful model of controlled digital ecosystem for children that other platforms should emulate


AI technology is being used effectively for content moderation, with 8.6 million videos removed from YouTube in Q1 2025, 55% before being viewed


Explanation

Unexpectedly, Metaza praised Google’s YouTube Kids as a virtuous model while simultaneously criticizing Google for allowing defamatory content about Argentine political leaders in search results. This shows the complex relationship between acknowledging good practices and holding companies accountable for failures.


Topics

Legal and regulatory | Sociocultural | Human rights


Constitutional vs. practical approaches to digital rights

Speakers

– Yogesh Bhattarai
– Audience

Arguments

Constitutional guarantees of freedom of speech must be upheld while addressing legitimate concerns about harmful content


UN should recognize digital rights as the fourth generation of human rights to provide common framework for legislation


Explanation

While both support strong digital rights protection, they disagree on whether existing constitutional frameworks are sufficient (Bhattarai) or whether new international frameworks are needed (Audiance). This represents a fundamental disagreement about legal foundations for digital governance.


Topics

Human rights | Legal and regulatory | Development


Overall assessment

Summary

The main areas of disagreement center on regulatory approaches (light-touch cooperation vs. stronger parliamentary control), decision-making authority (government vs. judicial oversight), youth engagement methods (integrated vs. specialized), and legal frameworks (existing constitutional vs. new international instruments).


Disagreement level

Moderate disagreement level with significant implications. While speakers share common goals of protecting vulnerable groups and balancing rights with safety, their different approaches could lead to incompatible policy frameworks. The disagreements reflect deeper tensions between national sovereignty and international coordination, government authority and judicial independence, and regulatory approaches across different legal and cultural contexts.


Partial agreements


Takeaways

Key takeaways

Social media platforms prioritize revenue over cultural sensitivity and public safety, often failing to respond adequately to government requests for harmful content removal


Effective content moderation requires a balance between protecting vulnerable groups and preserving freedom of expression, with decisions ideally made by judicial rather than governmental authorities


Legislative frameworks must be developed through multi-stakeholder collaboration including parliamentarians, civil society, NGOs, and media to ensure comprehensive protection while upholding democratic values


Youth engagement in digital policymaking should be transversal across all policy areas rather than segregated, as young people understand digital challenges and want active involvement in solutions


International cooperation and joint parliamentary strategies are essential for addressing global digital challenges that transcend national boundaries


AI technology shows promise for automated content moderation but requires responsible development with attention to preventing digital discrimination and ensuring transparency


Digital rights may need formal recognition as a fourth generation of human rights to provide a common international framework for legislation


Capacity building for parliamentarians and public officials is crucial for effective digital governance and understanding emerging technologies


Resolutions and action items

Parliamentarians should create joint strategies to collectively address social media platforms and protect vulnerable citizens globally


UN should consider recognizing digital rights as the fourth generation of human rights to provide common legislative framework


Private sector should increase efforts and investment in tackling harmful content despite having adequate resources


Governments should engage with technology platforms during legislation development processes


Educational institutions and civil society should launch campaigns teaching youth to identify fake news and develop critical thinking skills


Capacity building programs for public officials should be expanded, including Google’s AI Campus training program


YouTube Kids model of controlled digital ecosystem should be adopted by other social media platforms for child protection


Unresolved issues

How to make social media platforms more culturally sensitive and responsive to local government requests for content removal


Enforcement challenges at national levels for implementing comprehensive digital legislation frameworks


Definitional language around media freedom and journalism in the digital age as content creators diversify beyond traditional media


Prevention of legislative abuse by future governments that might use cybercrime laws to suppress opposition voices


Addressing digital apartheid and racial profiling risks in AI training and deployment


Balancing innovation and economic growth with necessary protective regulations


Establishing liability frameworks for AI adoption across different economic sectors beyond large tech companies


Creating effective mechanisms for cross-border enforcement of digital rights and content moderation


Suggested compromises

Digital platforms should be regulated but not controlled, emphasizing cooperation and collaboration over strict governmental control


Cybercrime legislation should focus on protecting vulnerable groups while ensuring judicial rather than governmental oversight of content decisions


AI development should proceed boldly but responsibly through collaboration between private sector, government, civil society and international community


Content moderation should combine automated AI systems with human oversight to balance efficiency with cultural sensitivity


Legislative frameworks should align with international standards while addressing local cultural and social contexts


Private sector should engage in capacity building and transparency initiatives while maintaining innovation and competitive dynamics


Thought provoking comments

It’s not a fight between East or the West. It’s a fight between revenue generation entities versus a revenue curbing request.

Speaker

Anusha Rahman Ahmad Khan


Reason

This comment reframes the entire debate about content moderation from a geopolitical or cultural clash to an economic one. It cuts through diplomatic language to identify the core tension: platforms prioritize profit over cultural sensitivity and user safety. This insight is particularly powerful because it moves beyond abstract discussions of rights to concrete economic incentives.


Impact

This comment established a recurring theme throughout the discussion about private sector responsibility. Multiple subsequent speakers referenced the need for platforms to do more, and it influenced Franco Metaza’s later assertion that ‘companies can do more than what they are doing’ and his criticism of revenue-driven content decisions.


I think that today the permanent scrolling that we are all subjected to is as much or more harmful as going at full speed with a vehicle without knowing what is in front of us or without having a traffic light.

Speaker

Franco Metaza


Reason

This metaphor brilliantly captures the unregulated nature of social media consumption and its potential dangers. By comparing social media scrolling to reckless driving, it makes the abstract concept of digital harm tangible and relatable, while also providing a framework for understanding why regulation isn’t about restricting freedom but ensuring safety.


Impact

This metaphor was specifically noted by the moderator and became a reference point for discussing the legitimacy of digital regulation. It helped shift the conversation from whether regulation is needed to how it should be implemented, making the case that just as we accept traffic rules for safety, we should accept digital rules.


Digital platforms should be regulated, not controlled. Cooperation, collaboration, and solidarity should be strengthened.

Speaker

Yogesh Bhattarai


Reason

This distinction between regulation and control is crucial in the digital governance debate. It acknowledges the need for oversight while respecting democratic principles and avoiding authoritarian overreach. The comment provides a nuanced middle ground between laissez-faire and heavy-handed government intervention.


Impact

This comment influenced the moderator’s follow-up questions about how countries interact with tech platforms and helped establish a framework for discussing responsible governance approaches. It contributed to the overall theme of finding balance between protection and freedom.


We are now tired of waiting and I would urge and request all the other parliamentarians to come together to make a joint strategy where we can collectively speak to the social media platforms.

Speaker

Anusha Rahman Ahmad Khan


Reason

This call for collective action represents a shift from individual national approaches to coordinated international pressure on tech platforms. It recognizes that platforms operate globally while governments act locally, creating an inherent power imbalance that can only be addressed through cooperation.


Impact

This comment sparked discussion about regional cooperation, with Franco Metaza confirming consensus in Mercosur about platform responsibility, and influenced later questions about African Union initiatives. It helped establish the theme of multilateral approaches to digital governance.


If the United Nations takes this step, you will notice that many countries will follow, and perhaps it will even be enshrined in our constitutions and in our laws… we need a common element, a common instrument, as we have with the three generations of human rights and with a generation of digital rights.

Speaker

Representative from the Democratic Republic of Congo


Reason

This intervention fundamentally challenges the current human rights framework by proposing digital rights as a fourth generation of human rights. It’s intellectually rigorous, drawing on the historical evolution of rights recognition, and identifies a systemic gap in how we conceptualize digital governance within established human rights frameworks.


Impact

This comment prompted immediate agreement from Tsvetelina Penkova, who acknowledged it would resolve enforcement issues. It elevated the discussion from practical policy implementation to fundamental questions about the nature of rights in the digital age, representing one of the most conceptually ambitious contributions to the session.


There should be a balance also around the importance of the well-being of people and profits… we’ve realized that in the AI training that there is still the presence of a risk of what I would call digital apartheid.

Speaker

Ashley Sauls


Reason

This comment introduces the concept of ‘digital apartheid’ and connects AI bias to historical injustices, making the discussion more concrete and urgent. It challenges the tech industry’s narrative of progress by highlighting how AI systems can perpetuate and amplify existing inequalities, particularly affecting marginalized communities.


Impact

This was one of the final substantive comments and served as a powerful counterpoint to the earlier private sector presentation about AI benefits. It grounded the abstract discussion of AI governance in lived experience and historical context, emphasizing that technological advancement without equity considerations can reproduce historical injustices.


Overall assessment

These key comments fundamentally shaped the discussion by moving it beyond surface-level policy debates to deeper structural questions. The session evolved from individual country experiences to systemic analysis of power dynamics between governments and platforms, the need for international cooperation, and the fundamental question of how to conceptualize rights in the digital age. The economic framing of platform behavior, the traffic regulation metaphor, and the digital apartheid concept provided concrete ways to understand abstract policy challenges. The call for collective action and the proposal for a fourth generation of human rights elevated the discussion to consider both practical coordination mechanisms and foundational legal frameworks. Together, these comments created a progression from problem identification to systemic analysis to potential solutions, while maintaining focus on protecting vulnerable populations and democratic values.


Follow-up questions

How long will social media platforms continue to take to listen to governments and their requests to remove objectionable content and secure vulnerable groups online?

Speaker

Anusha Rahman Ahmad Khan


Explanation

This addresses the ongoing challenge of platform responsiveness to government content removal requests, particularly for protecting vulnerable populations like women and children from harassment and harmful content.


How can social media platforms be made more sensitive to different cultural contexts when making content moderation decisions?

Speaker

Anusha Rahman Ahmad Khan


Explanation

This highlights the need for culturally aware content moderation, as platforms currently make uniform decisions without considering local cultural sensitivities, which can have severe consequences for users.


How can parliamentarians develop a joint strategy to collectively speak to social media platforms about protecting vulnerable citizens?

Speaker

Anusha Rahman Ahmad Khan


Explanation

This suggests the need for coordinated international parliamentary action to increase leverage when dealing with global technology platforms on content moderation issues.


Are there collaborative efforts across Mercosur countries to deal with harmful online content through awareness and capacity building, not just legislation?

Speaker

Sorina Teleanu


Explanation

This explores whether regional parliamentary bodies are taking comprehensive approaches beyond just legal frameworks to address online harms through user education and preparedness.


How are countries interacting with technology platforms while working on digital legislation?

Speaker

Sorina Teleanu


Explanation

This addresses the important process question of stakeholder engagement and dialogue between governments and platforms during the legislative development process.


How is the AI Act connecting to creating safer online environments and increasing transparency from private actors?

Speaker

Sorina Teleanu


Explanation

This explores the intersection between AI regulation and content safety, particularly regarding platform transparency obligations and automated content moderation systems.


Are there examples of African Union-level initiatives dealing with digital governance and online safety issues?

Speaker

Sorina Teleanu


Explanation

This seeks to understand regional cooperation mechanisms in Africa for addressing digital policy challenges at a continental level.


How can youth be more effectively involved in digital policymaking processes across different regions?

Speaker

Bibek Silwal


Explanation

This addresses the need for meaningful youth participation in policy development, recognizing that young people are both primary users of digital technologies and key stakeholders in implementation.


What are concrete experiences and best practices for combating cybercrime while avoiding abuse of cybercrime laws by repressive governments?

Speaker

Raoul Danniel Abellar Manuel


Explanation

This addresses the critical balance between effective cybercrime legislation and preventing authoritarian misuse of such laws to suppress dissent and free expression.


Should the UN recognize digital rights as a fourth generation of human rights to provide clearer framework for national legislation?

Speaker

Representative from Democratic Republic of Congo


Explanation

This proposes a systematic approach to digital rights recognition that could provide clearer guidance for national constitutional and legal frameworks worldwide.


How should definitional language around journalism and media freedom be developed in digital legislation to account for diverse content producers?

Speaker

Amy Mitchell


Explanation

This addresses the challenge of defining protected journalistic activity in an era where content production has expanded beyond traditional media to include citizen journalists and diverse digital creators.


How can enforcement mechanisms be designed to prevent future governments from misusing digital legislation for authoritarian purposes?

Speaker

Amy Mitchell


Explanation

This addresses the need for robust institutional safeguards that can withstand changes in government and prevent the weaponization of digital laws against civil society.


How can AI training and deployment address risks of digital apartheid and historical bias, particularly affecting marginalized communities?

Speaker

Ashley Sauls


Explanation

This highlights the critical need to address how AI systems may perpetuate or amplify existing social inequalities and discrimination, particularly in post-apartheid contexts.


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Parliamentary Session 3 Click with Care Protecting Vulnerable Groups Online


Session at a glance

Summary

This discussion focused on protecting vulnerable groups online, bringing together parliamentarians, regulators, and advocacy experts from various countries to examine legislative and regulatory responses to digital harm. The panel explored how marginalized communities, particularly in the Global South, face unique online safety challenges that existing frameworks often fail to address adequately.


Neema Iyer from Uganda highlighted research showing that one in three African women experience online violence, often leading them to delete their digital presence due to lack of awareness about reporting mechanisms and fear of not being heard by authorities. She emphasized how intersecting inequalities, language barriers, and the normalization of abuse create complex challenges that narrow legislative frameworks cannot fully address. Raul Manuel from the Philippines discussed recent legislative measures including extraterritorial jurisdiction for child exploitation cases and expanded anti-violence bills, while noting the economic factors that drive children into exploitation.


Malaysian Deputy Minister Teo Nie Ching outlined her country’s holistic approach combining digital inclusion, robust legal frameworks, and multi-stakeholder collaboration, but acknowledged enforcement challenges with major platforms like Meta and Google refusing to comply with licensing requirements. Nighat Dad from Pakistan described the rise of AI-generated deepfake content and highlighted disparities in platform responses between Global North and South users, noting that non-public figures receive delayed or no response to abuse reports.


Arda Gerkens from the Netherlands discussed balancing human rights with content removal powers, revealing concerning trends of hybrid threats where terrorist groups target vulnerable children through mental health channels. Sandra Maximiano from Portugal introduced behavioral economics perspectives, explaining how cognitive biases affect online decision-making and can be leveraged to promote safer behaviors through design interventions and nudges.


The discussion revealed consensus that content takedowns alone are insufficient, with panelists advocating for greater platform transparency, algorithmic accountability, proactive design measures, and coordinated international responses. The session concluded with calls for global cooperation among regulators and recognition that protecting vulnerable groups online requires addressing both technological and human factors through multi-stakeholder collaboration.


Keypoints

## Major Discussion Points:


– **Unique challenges faced by marginalized communities in the Global South**: Discussion of intersecting inequalities, digital literacy gaps, language barriers, normalization of abuse, and how existing laws are often weaponized against the very groups they’re meant to protect, particularly women and marginalized communities.


– **Legislative and regulatory responses across different jurisdictions**: Panelists shared specific examples from the Philippines, Malaysia, Netherlands, Pakistan, and Portugal, highlighting both successful measures (like extraterritorial jurisdiction for child exploitation) and enforcement challenges, particularly with major tech platforms refusing to comply with local regulations.


– **Platform accountability beyond content takedowns**: Extensive discussion on the need for platforms to be more proactive, including algorithm transparency, improved reporting mechanisms, design friction to prevent harmful content sharing, and the importance of addressing root sources rather than just reactive content removal.


– **Behavioral economics and human-centered approaches**: Introduction of how cognitive biases affect online behavior and how regulators can use behavioral insights to nudge safer online practices, along with emphasis on addressing offline social structures and community-based solutions.


– **Need for coordinated global response**: Strong consensus that individual countries lack sufficient negotiating power with tech giants, leading to calls for regional blocs (like ASEAN) and international cooperation through networks like the Global Online Safety Regulators Network (GOSRN).


## Overall Purpose:


The discussion aimed to bring together diverse stakeholders (parliamentarians, regulators, and advocacy experts) to examine how to better protect vulnerable groups online, share experiences across different jurisdictions, and develop more targeted, inclusive, and enforceable policy responses to online harms.


## Overall Tone:


The discussion maintained a collaborative and constructive tone throughout, with panelists building on each other’s insights rather than debating. There was a shared sense of urgency about the challenges, but also cautious optimism about potential solutions. The tone became increasingly focused on practical cooperation and concrete next steps toward the end, culminating in calls for international coordination and the promotion of existing collaborative networks.


Speakers

**Speakers from the provided list:**


– **Alishah Shariff** – Moderator, works at Nominet (the .UK domain name registry)


– **Neema Iyer** – Founder of Pollicy, a feminist organization based in Kampala, Uganda; works on feminist digital rights issues including online violence, gender disinformation, and AI impact on women; member of Meta’s Women’s Safety Board


– **Raoul Danniel Abellar Manuel** – Elected member of Parliament in the Philippines, representing the Youth Party; former student government and union activist


– **Teo Nie Ching** – Deputy Minister of Communication, Malaysia; appointed in December 2022; previously served as Deputy Minister of Education in 2018; mother of three


– **Nighat Dad** – Founder of Digital Rights Foundation, a woman-led organization based in Pakistan; focuses on digital rights, gender justice, online safety, and freedom of expression; serves on the Meta Oversight Board


– **Arda Gerkens** – President of ATKM, the Netherlands’ regulatory body for online terrorist content and child sexual abuse material; former member of Parliament (8 years) and senator (10 years)


– **Sandra Maximiano** – Chairwoman of ANACOM, the Portuguese National Authority for Communications; digital service coordinator; economist specialized in behavioral and experimental economics


– **Anusha Rahman Khan** – Former Minister for Information Technology and Telecommunications; enacted cyber crime law in 2016; currently chairs standing committee on information technology


– **Andrew Campling** – Runs a consultancy; trustee of the Internet Watch Foundation


– **Audience** – General audience member asking questions


**Additional speakers:**


– **John Kiariye** – From Kenya; made comments about human-centered design and community-based approaches


Full session report

# Protecting Vulnerable Groups Online: A Multi-Stakeholder Discussion on Digital Safety and Platform Accountability


## Executive Summary


This comprehensive discussion brought together parliamentarians, regulators, and digital rights advocates from across the globe to examine the complex challenges of protecting vulnerable groups in digital spaces. Moderated by Alishah Shariff from Nominet (.UK domain name registry), the panel featured diverse perspectives from Uganda, the Philippines, Malaysia, Pakistan, the Netherlands, Portugal, and Kenya, highlighting both the universal nature of online harm and the unique contextual challenges faced by different regions.


The conversation revealed that while online platforms have transformed communication and access to information, they have simultaneously created new vectors for harm that disproportionately affect marginalised communities, particularly women and children. Key challenges identified included the inadequacy of reactive content moderation, geographic inequalities in platform responses, the rise of AI-generated harmful content, and the weaponisation of protective legislation against the very groups it aims to protect. The discussion moved beyond traditional approaches to explore innovative solutions rooted in behavioural economics, community-based interventions, and coordinated international responses.


## Opening Context and Participant Introductions


The session, titled “Click With Care, Protecting Vulnerable Groups Online,” was part of the Internet Governance Forum (IGF) and featured interpretation services in English, Spanish, and French. Participants represented a diverse range of expertise and geographic perspectives:


– **Neema Iyer** from Uganda, representing Pollicy and speaking from her experience in digital rights advocacy


– **Raoul Danniel Abellar Manuel**, representing the Youth Party in the Philippine Parliament


– **Deputy Minister Teo Nie Ching** from Malaysia’s Ministry of Communications


– **Anusha Rahman Khan**, former Minister for Information Technology and Telecommunications in Pakistan (served for five years)


– **Arda Gerkens**, President of ATKM (Authority for the Prevention of Online Terrorist Content and Child Sexual Abuse Material) in the Netherlands


– **Sandra Maximiano**, an economist specialized in behavioral and experimental economics from ANACOM (Portuguese National Authority for Communications)


– **John Kiariye** from Kenya, representing community-based perspectives


## Research Findings on Online Violence Against Women and Children


### Stark Statistics from Africa


Neema Iyer opened the substantive discussion with sobering research findings that framed the entire conversation: “One in three women across Africa experience online violence, and many of them end up deleting their digital presence because they don’t have adequate support systems and they feel like they’re not going to be heard by authorities.”


This statistic illuminated a broader pattern of digital exclusion, where those who could benefit most from online participation are driven away by harassment and abuse. Iyer explained how intersecting inequalities create complex barriers to digital safety: “There are large gaps in digital literacy and access, and platforms often don’t prioritise smaller markets or local languages.”


### The Weaponisation of Protective Laws


Perhaps most troubling was Iyer’s observation about how protective legislation can be turned against its intended beneficiaries: “The laws that do exist, especially in our context, have actually been weaponised against women and marginalised groups. So many of these cybercrime laws or data protection laws have been used against women, have been used against dissenting voices, against activists, to actually punish them rather than protect them.”


This paradox challenges fundamental assumptions about the relationship between legislation and protection, suggesting that legal frameworks alone are insufficient without proper implementation and safeguards against misuse.


## Country-Specific Legislative and Regulatory Approaches


### The Philippines: Comprehensive Legal Framework


Manuel outlined several legislative initiatives demonstrating the Philippines’ comprehensive approach to online safety. The country passed Republic Act 11930, addressing online sexual abuse of children, and the House of Representatives approved an expanded anti-violence bill that “defines psychological violence through electronic devices as violence against women.”


Additionally, amendments to the Safe Spaces Act set higher standards for government officials, recognising their particular responsibility in online spaces. However, Manuel highlighted enforcement challenges: “Social media platforms initially refused to attend Philippine parliamentary hearings, claiming no obligation due to lack of physical office presence.”


The scale of internet usage in the Philippines adds urgency to these efforts: “An average Filipino spends around eight hours and 52 minutes, or roughly nine hours per day, on the internet,” Manuel noted, emphasising the significant exposure to potential online harms.


### Malaysia: Multi-Faceted National Strategy


Deputy Minister Nie Ching outlined Malaysia’s comprehensive approach, which combines legislative updates, platform regulation, and extensive public education. After 26 years, Malaysia amended its Communications and Multimedia Act, increasing penalties for child sexual abuse material and grooming.


Malaysia developed a code of conduct for social media platforms with over 8 million users and established “900 national information dissemination centres” alongside a “national internet safety campaign targeting 10,000 schools.” The campaign uses a modular approach for different age groups, recognising that safety education must be age-appropriate.


However, significant enforcement challenges remain. Nie Ching revealed that while “only X, TikTok, and Telegram have applied for licenses” under the new framework, “major platforms like Meta and Google have not applied for licenses.” This resistance led to a crucial insight: “Individual countries lack sufficient negotiation power when engaging with tech giants, requiring coordinated bloc approaches like ASEAN.”


### Pakistan: Balancing Protection and Rights


Anusha Rahman Khan, who served as Pakistan’s Minister for Information Technology and Telecommunications for five years, enacted Pakistan’s cyber crime law in 2016, introducing “28 new penalties criminalising violations of natural person dignity.”


Khan emphasised the ongoing challenges in balancing commercial interests with protection needs: “Commercial interests and revenue generation priorities conflict with civil protection needs, requiring stronger international coordination.”


### The Netherlands: Addressing Hybrid Threats


Arda Gerkens introduced the concept of hybrid threats that combine multiple forms of online harm. Her organisation has unique powers to identify and remove terrorist content and child sexual abuse material, but faces increasingly complex challenges as different forms of harmful content become intertwined.


“We see more and more hybridisation of these types of content mixed together,” Gerkens explained. “We’re finding within the online terrorist environments lots of child sex abuse material. And we find that certainly vulnerable kids at the moment are at large online… these terrorist groups or extremist groups are actually targeting vulnerable kids.”


This hybridisation represents a fundamental challenge to traditional regulatory approaches that treat different forms of harm in isolation. Gerkens noted that terrorist groups are “increasingly targeting vulnerable children through platforms discussing mental health and eating disorders for grooming and extortion.”


## Platform Accountability and Geographic Inequalities


### Inadequate Response Systems


The discussion revealed significant problems with current platform accountability mechanisms. Nie Ching highlighted practical limitations: “Built-in reporting mechanisms are ineffective, requiring even verified public figures to compile links and send to regulators for content removal.”


She provided a specific example involving “Dato Lee Chong Wei,” Malaysia’s famous badminton player, whose image was used in scam posts. Despite his verified status, removing the fraudulent content required regulatory intervention rather than effective platform mechanisms.


### Geographic Disparities in Platform Response


Nighat Dad from Pakistan’s Digital Rights Foundation, which has handled over 20,000 complaints since 2016, highlighted stark inequalities in platform responses: “Platforms respond quickly to cases involving US celebrities but delay response to cases from Global South, highlighting inequality in treatment.”


This disparity is exacerbated by recent changes in platform policies. Dad noted that Meta’s scaling back of proactive enforcement systems “shifts burden of content moderation onto users, particularly problematic in regions where reporting systems are in English only.”


### The Rise of AI-Generated Harm


Dad also highlighted an emerging threat that exemplifies how technological advancement can amplify harm: “We are seeing a rise of AI-generated deepfake content, causing reputational damage, emotional trauma, and social isolation, with some cases leading to suicide.”


This technology democratises the creation of sophisticated abuse material while making it more difficult for victims to prove the falsity of harmful content, representing a qualitative shift in the nature of online harm.


## Behavioural Economics and Human-Centred Approaches


### Understanding Cognitive Vulnerabilities


Sandra Maximiano introduced a novel analytical framework through behavioural economics. “Users are affected by cognitive biases like confirmation bias, overconfidence bias, and optimism bias that influence online behaviour and decision-making,” she explained.


Maximiano emphasised that “vulnerable groups including children and people with disabilities suffer more from these biases, requiring regulators to account for this in policy design.” This insight suggests that effective protection requires understanding not just what harms occur, but why people are susceptible to them.


The potential for both exploitation and protection through behavioural insights became clear: “AI systems can exploit cognitive biases and overlook vulnerabilities, potentially causing significant harm even without intentional exploitation.” However, the same understanding can be used positively through “better user interface design, nudging safe behaviour, and using social norms messaging.”


### Community-Based Solutions


John Kiariye from Kenya introduced a crucial human-centred perspective: “The offenders are human. The victims are humans. If we concentrate on the technology, we are losing a very big part because this young person can be trained to be a bully.”


Kiariye advocated for leveraging existing social structures: “Schools, clubs, and family units to empower victims and prevent online abuse before it occurs.” This approach recognises that online behaviour is shaped by offline social structures and that effective prevention requires community-level interventions.


## Areas of Consensus and Disagreement


### Strong Consensus on Platform Reform


Despite diverse backgrounds, speakers demonstrated remarkable consensus on the inadequacy of current platform accountability mechanisms. All agreed that transparency in content moderation processes, proactive identification of harmful sources, and addressing geographic inequalities in platform responses are essential.


### International Coordination is Essential


Government representatives from Malaysia, the Philippines, and Pakistan all acknowledged that individual nations have limited leverage against major tech platforms, leading to growing support for coordinated international or regional approaches.


### Key Disagreement: Privacy Versus Safety


The most significant disagreement emerged during audience questions about age verification technologies. Andrew Campling from the audience advocated for “privacy-preserving age estimation and verification technology should be mandated to prevent children from accessing adult platforms,” citing the statistic that “300 million children annually are victims of technology-facilitated sexual abuse.”


However, Iyer strongly opposed such measures: “I think absolutely not… it’s really giving all your data to these platforms. I think it’s a very slippery slope to a bad place… people will get around all these things anyway. So I think there are better interventions rather than taking away the last shred of our privacy.”


## International Cooperation and Future Directions


### Regional Approaches to Global Challenges


The discussion revealed growing recognition that effective platform regulation requires coordinated international action while respecting cultural differences. Nie Ching advocated for regional approaches: “Different regions need different standards that meet their cultural, historical, and religious backgrounds rather than one-size-fits-all approaches.”


Gerkens mentioned the existence of the Global Online Safety Regulators Network and invited participation, representing an attempt to share best practices across jurisdictions.


### Addressing Root Causes


Manuel introduced crucial economic dimensions: “Economic factors driving child exploitation must be addressed alongside technical measures to effectively combat child sexual abuse material.” This observation highlights how online harms often reflect offline inequalities and vulnerabilities.


## Conclusion


This comprehensive discussion revealed both the complexity of protecting vulnerable groups online and the potential for innovative, collaborative solutions. The conversation demonstrated growing sophistication in understanding online harm, moving from reactive content removal toward proactive prevention and addressing root causes.


Key insights included the recognition that online safety is fundamentally a human challenge requiring understanding of psychology, economics, and social structures alongside technical solutions. The emphasis on international coordination, cultural sensitivity, and multi-stakeholder collaboration suggests a maturing approach to online safety policy.


However, significant challenges remain, from platform resistance to enforcement difficulties to fundamental tensions between privacy and safety. Success will depend on sustained commitment to collaborative solutions that are both effective and respectful of fundamental rights and cultural differences across diverse global contexts.


Session transcript

Alishah Shariff: Good morning, everyone, and welcome to today’s session, Click With Care, Protecting Vulnerable Groups Online. I’m delighted you’re able to join us. I know there were some travel difficulties getting in this morning, so thank you for being here, and thank you also to our esteemed panelists for joining us today. My name is Alicia, and I work at Nominet, the .UK domain name registry, and I’ll be moderating today’s panel. Just a bit of housekeeping before we begin. You’ll have interpretation in your headphones in English, Spanish, and French, and when we open the floor to interventions and questions, you can ask your question by going to the microphone to my left and your right. So it’s a pleasure to chair today’s session, which brings together a diverse panel of parliamentarians, regulators, and advocacy experts to discuss a critical issue, which is how do we protect vulnerable groups online. We live in an increasingly digital world, which offers opportunities for connection, learning, and growth. But the digital world also brings with it risks and downsides, which are often felt more acutely by vulnerable groups, including children, individuals with disabilities, and others of marginalized communities, amongst others. The consequences of harm faced online can have a ripple effect into real lives, causing distress, harm, and isolation. The challenge of online harm has prompted a range of legislative and regulatory responses, as well as proactive and reactive approaches, and today’s session will enable us to better understand some of these across a range of geographies and contexts. I hope that by the end of the session, we’ll get a sense of how we can work towards a more targeted inclusive and enforceable policy response to online harms. I’ll now hand over to each member of our esteemed panel to briefly introduce themselves. So I think we’ll start with Nima.


Neema Iyer: Oh, super. Hi everyone. Good morning and thank you so much for joining us here. My name is Neema Iyer and I am the founder of Pollicy. Pollicy is a feminist organization based in Kampala, Uganda, but we work all across the continent and we are very interested in any issues related to feminist digital rights. So this could be about online violence, gender disinformation, the impact of AI on women, and any such topics. And yeah, we do a lot of research on these topics. We also work very closely with local communities and of course we do advocacy work, which is part of why we are here as well. Thank you. Over to you.


Alishah Shariff: Thank you, Neema. Next we’ll hear from Raul.


Raoul Danniel Abellar Manuel: Hello, good morning everyone. I am Raul Manuel. You can call me Raul. I am an elected member of Parliament in the Philippines, representing the Youth Party. And prior to being a part of the Youth Party and of the Philippine Parliament, I was active in the student government and the student union. That’s why I have been paying close attention to this issue of online freedoms and protections. Thank you.


Alishah Shariff: Thank you, Raul. And next we have Her Excellency, Teo Nie Ching.


Teo Nie Ching: Hello, good morning everyone. Thank you, Alishah, for the introduction. My name is Teo Nie Ching. I’m from Malaysia. I’m currently the Deputy Minister of Communication. I was appointed to this office in December 2022. However, in the year 2018, I also had the opportunity to serve in the Ministry of Education as the Deputy Minister as well. I’m currently a mother of three, so protecting children and our minors on the internet is a topic that is very, very close to my heart. And under the Ministry of Communications, we have a very important agency that is called MCMC, the Malaysian Communications and Multimedia Commission, which acts as a regulator for content moderation, platform providers, etc. Looking forward to this fruitful discussion.


Alishah Shariff: Thank you. And next we have Nighat.


Nighat Dad: Good morning everyone. My name is Nighat, and I’m a founder of Digital Rights Foundation, an organization, a woman-led organization based in Pakistan, and we are committed to advance digital rights with a particular focus on gender justice, online safety, and freedom of expression. Our work is grounded in both direct support and systemic change. We have a digital security helpline which provides legal advice, digital security assistance, and psychosocial support to victims and survivors of online abuse, and has a survivor-centered approach. And we also conduct in-depth research, build digital literacy and safety tools, and engage in policy advocacy conversations at the national, regional, and international levels.


Alishah Shariff: Thank you. And Arda, I’m glad you were able to make it, and thanks for joining.


Arda Gerkens: Thank you very much, and excuse me for being late, the train was so much delayed. So, my name is Arda Gerkens, I am the president of the regulatory body for online content, terrorist content and child sexual abuse material, ATKM, that’s the abbreviation. I used to be a member of Parliament for eight years, and a senator for ten years, so I bring some political experience too. My organization is really there to identify harmful content, terrorist content and child sexual abuse material, and is able to remove that content or have it removed, and if not, we’ll fine those who are not complying with our regulation. We’re kind of unique in the field; I think we’re the first regulator, at least as far as I know, who has that special right to dive into that content. And yeah, looking forward to the discussion today.


Alishah Shariff: Great, thank you. And we have one panelist who’s still on their way here. So when they join, we should also have Sandra Maximiano, who’s the president of ANACOM Portugal. So hopefully she’ll be able to join us shortly. So the way this will work today is we have some questions for our panelists that they will speak to, followed by a quick fire round, and then we’ll open out the floor for your interventions and questions. So without further ado, I think my first question is for Neema. So Neema, what are some of the unique online safety challenges faced by marginalized communities, particularly in the Global South, that may not be adequately addressed in existing legislative frameworks?


Neema Iyer: Thank you so much for that question. So first, I just want to start by framing some of the research that we’ve already done on this topic. So, for example, we did research with 3,000 women across Africa to understand their online experiences, and we found out that about one in three women had experienced some form of online violence, and this basically led them to deleting their online identity, because many of them were not aware of reporting mechanisms, and they also felt that if they went to any authorities they would not be listened to. A second study that we did is a social media analysis of women politicians during the Ugandan 2021 election. We wanted to see what the experience was like for women politicians, and we found that they are often targets of sexist and sexualized abuse. But more importantly, the fear of abuse in online spaces meant that many women politicians did not actually have online profiles or chose not to participate in the online sphere. And the third piece of research we did is on the impact of AI on women. So when we think of online harms, we often tend to think of social media, but more importantly we should be thinking about how AI impacts women who may be marginalized in some way, and we found out there are grave issues of under-representation and data bias. There’s algorithmic discrimination. AI makes it very possible for digital surveillance and censorship. There’s labor exploitation, and also there’s a threat to low-wage jobs, which often tend to be occupied by women. So I just wanted to frame that research first and then talk more about the question, which is: what is unique about this group? And the first one is that there are intersecting inequalities, so there are large gaps in digital literacy and digital access, for example.
And so when you are trying, both as a platform and a civil society or a government, you have to take into account the fact that there are some women who have absolutely no access, have no digital skills, and you know, this is across the spectrum. So how do you tailor interventions that can meet all these different people who exist in all these different inequalities? Then in our context, for example, in Uganda, there are about 50 languages that are spoken, in Uganda alone, not considering the whole continent. And because these are smaller countries, they don’t have a huge market share, you know, on these online platforms. They’re often not prioritized. And so how do you develop interventions? How do you make safety mechanisms when, you know, you don’t have these languages on your platform? Another one I want to talk about is the normalization of abuse, which is, you can see in real life and in online spaces, that are both cultural and a result of platform inaction. So in regular life, you go on the street, you get harassed, you go to the police, they don’t do anything. That is replicated in online platforms, where you face this harassment, you reach out for recourse on the platforms, and there is platform inaction. So basically, in that way, this kind of online abuse is normalized. And then there’s the invisibility in platform governance processes. Of course, this is an amazing venue where we can talk about these issues, but a lot of women, marginalized groups, are not in these rooms with us right now to talk about their experiences. And then lastly, I just want to talk about the fact that the laws that do exist, especially in our context, have actually been weaponized against women and marginalized groups. So many of these, you know, cybercrime laws or data protection laws, have been used against women, have been used against dissenting voices, against activists, to actually punish them rather than protect them. That’s the reality that we live in. 
So the fact is that legislative frameworks are often too narrow. They, you know, they focus on takedowns or criminalization, or they borrow from Western contexts, but they don’t really meet the lived realities of women. So for example, a law might address intimate image sharing, but it won’t, you know, it’ll ignore coordinated disinformation campaigns, for example, or it will ignore this ideological radicalization that’s happening to minors online. Or, you know, it won’t target specifically the design choices that platforms make, for example, like where, you know, they amplify violence or those kinds of things. So I think we really need to think broader about how we are legislating about online violence, and I’m really glad that this conversation is happening. So back to you.


Alishah Shariff: Thank you. And that was, I think there was so much in there from the kind of, you know, the sphere of abuse in online spaces to how different people feel and experience being marginalized, and then also how some of these kind of legislative measures and also policies can sometimes have an adverse effect, and really thinking about the context. But thinking about how we do kind of good regulation, we’ll turn now to Raul. So as a Member of Parliament, could you share recent legislative measures in the Philippines to address online exploitation of children and pending efforts to protect women, LGBTQI+, and other marginalized communities from online violence and threats?


Raoul Danniel Abellar Manuel: Yeah, thank you, Alishah. And before I proceed, I’d like to thank the IGF Secretariat for this opportunity to share our perspectives from the Parliament of the Philippines. In our case, we have been pushing for vibrant debates and discourse to ensure that protections for marginalized and vulnerable groups do not come at the expense of sacrificing our basic and human rights. The Philippines right now, just for context, ranks number three as of February 2025 in terms of the daily time spent by citizens using the internet. An average Filipino spends around eight hours and 52 minutes, or roughly nine hours per day, on the internet, which is much higher than the global average of six hours and 38 minutes. So while this time can be spent to, you know, connect with friends and family, conduct research, do homework, this also exposes vulnerable groups, including young people, to different forms of violence and threats. For example, the Philippines, unfortunately, has been a hotspot of online sexual abuse and exploitation of children, and also the distribution and production of child sexual abuse and exploitation materials. So this is a problem that we have to acknowledge so that we can take proactive measures in addressing it. Second would be electronic violence against women and their children, which we call E-VAWC for short, and third, among the major forms of violence and threats online, would be harassment based on identity and belief. So I will briefly touch upon what we have been doing in Parliament to address these. First, when it comes to online sexual abuse and exploitation of children, we recently had Republic Act 11930.
It lapsed into law on July 30, 2022, so it is quite fresh, and aside from content takedown, one major component of this is the assertion of extraterritorial jurisdiction, which means the state shall exercise jurisdiction if the offense either commenced in our country, the Philippines, or if it was committed in another country by a Filipino citizen or a permanent resident against a citizen of the Philippines. Recognizing that the problem of online sexual abuse of children can happen not just on a single occasion, but can be part of a coordinated network involving several hubs or locations, we really had to put this into law. When it comes to electronic violence against women and children, the House of Representatives, on its part, approved the expanded anti-violence bill. It defines psychological violence, including different forms, including through electronic or ICT devices. The use of those devices was defined to be part of violence against women. We did this in the House of Representatives, but since the Philippines is bicameral, we’re still waiting for the Senate to also speed up its deliberations. Now, when it comes to online harassment based on identity and belief, we approved at the committee level so far amendments to the Safe Spaces Act, which sets a higher standard on government officials who may be promoting acts of sexual harassment through digital or social media platforms, like when they have speech that tends to discriminate against those in the LGBT community. Finally, we have a pending bill in the House of Representatives, which seeks to criminalize the tagging of different groups and individuals as state enemies, subversives, or even terrorists without much basis for such labeling. Recently, the Supreme Court adopted the term red-tagging, which has been a source of harm and violence that transcends into the physical world.
That’s all for now, and I hope that this can be a source of discussions also on how we can really work together to address these online problems. Thank you.


Alishah Shariff: Thank you, Raul. I think that was really eye-opening, and there’s definitely lots happening in your legislative space, and I think it’s really nice that we have this mix where you’ve got slightly newer regulation and legislation, and we’ll also hear later on from somebody who has experience of enforcing this sort of regulation. So, moving from the Philippines to Malaysia, next I will turn to Her Excellency Teo Nie Ching. So, what is Malaysia’s core philosophy and overall strategy for protecting vulnerable groups in today’s complex digital environment, and how does Malaysia balance creating and enforcing laws and regulations with maintaining freedom of expression? Thank you.


Teo Nie Ching: Thank you, Alishah, for the questions. First of all, in Malaysia, we view online protection not just as a single action, but as a holistic ecosystem built on three core strategic thrusts. The first one is empowerment through digital inclusion and, of course, literacy. The second is protection through a robust and balanced legal framework, and the third is support through whole-of-society, multi-stakeholder collaboration. So, currently in Malaysia, our internet coverage has reached 98.7% of the populated area. So, internet coverage is, I think, pretty impressive. At the same time, we also set up more than 900 national information dissemination centres, which act as community hubs providing on-the-ground digital literacy training, especially to the seniors, to the women, to the youth, who may be more susceptible to online risks. And not only that, we also recently launched a national internet safety campaign, and our target is to actually enter 10,000 schools in Malaysia. That is our primary schools, secondary schools, and of course, we aim to enter the campuses of the universities as well, so that we can engage with the users. And this programme is not the usual public awareness campaign. We are more specific. We developed a modular approach which depends on the audience. For example, if their age is between seven and nine, then what type of content is more suitable for them, and what type of interactive activity can we design for them? So, for example, in primary school and secondary school, we will be focusing on cyberbullying and, of course, protecting their own personal information, and then for the elderly, we will share with them more about online scams, financial scams, etc. So, we believe that this is an approach whereby we need to go to the community, we need to engage them, we need to empower the community, so that we can raise their digital literacy.
And of course, I think we also need to have a legal framework to protect our people, and it is very, very important for us to strike a balance between freedom of expression and, at the same time, making sure this vulnerable group is actually protected by law. Last year, we amended our act, that is the Communications and Multimedia Act, for the first time in 26 years, whereby we have actually increased the penalty for dissemination of child sexual abuse material, CSAM, grooming, or similar communication through digital platforms, with heavier penalties when minors are involved. And then, at the same time, the amended law also grants the Commission, the Malaysian Communications and Multimedia Commission, the authority to promptly instruct the service provider to block or remove harmful content, enhancing platform accountability. At the same time, we also developed a code of conduct targeting the major social media platforms with more than 8 million users in Malaysia. Malaysia is a country with about 35 million population, so when we use the benchmark of 8 million, that is roughly about 25% of our population. We are hoping that by imposing this licensing regime, we will be able to impose this code of conduct on the service providers, but as I mentioned yesterday, I would not say this is a very successful attempt, because the licensing regime was supposed to be implemented since the 1st of January this year, but two major platforms in Malaysia, i.e. Meta and also Google, have until today yet to apply for the license. So, I think the challenge faced by Malaysia would be similar to that in many, many other countries as well. Malaysia alone, we don’t have sufficient negotiation power when we engage with tech giants like Meta and Google. So, how we can actually impose our standard on these platforms to ensure that harmful content, according to the Malaysian context, can be removed within a reasonable period of time has been quite challenging in Malaysia.
We see that even though sometimes platforms would still cooperate with MCMC to remove certain harmful content, it is always the case that the user or the scammer puts it out and then, upon the request of MCMC, the content is taken down; however, there is no permanent solution to stop all this harmful content from being put out on social media, such as online gambling, scammer posts, etc. So, I think that’s it for now, and I’m looking forward to more questions.


Alishah Shariff: Thank you. I think that was a really good overview of how you can have both legislation and a voluntary code of conduct, and some of the challenges that come with that in terms of how you are able to enforce it. And maybe towards the end you were getting to how we actually prevent some of this in the first place, because obviously the takedowns are a reactive measure and there’s a bigger challenge here around how we prevent this sort of thing in the first place. So, we’ll now move to more of a focus on digital rights and we’ll turn to Nighat. So, at the Digital Rights Foundation, you lead efforts against online harassment and advocacy for privacy and freedom of expression. You’re also serving on the Meta Oversight Board. So, what gaps in terms of digital rights do you observe between the Global South and the Global North, and what are your perspectives on platform accountability?


Nighat Dad: Yeah, thank you so much. So, at the Digital Rights Foundation, over the years, we have been witnessing the rise of digital surveillance, privacy violations, gender-based disinformation, which is very targeted, and now the disturbing rise of AI-generated deepfake content. Since 2016, through our digital security helpline, we have dealt with more than 20,000 complaints, from hundreds of young women every month, female journalists, now more from women influencers and content creators, women politicians, scholars, and students. And this number is only for a digital security helpline which is being run by an NGO. This number is even higher when it goes to our Federal Investigation Agency’s Cybercrime Wing. And the people who mostly complain to us are being blackmailed, silenced, or driven offline by intimate images that they never consented to, some of which aren’t even real. In the last one and a half years, I would say we have seen this rise in deepfakes that have blurred the line between fact and fiction, but at the same time, we have seen that the harms are real in the offline space. It’s reputational damage, it’s emotional trauma, and in some cases, complete social isolation. And in the worst cases, we have seen some women committing suicide. What’s even more alarming is how platforms respond to it, and as the Honorable Minister mentioned, many platforms in our part of the world are really not accountable to the governments, and too often, survivors are forced to become investigators of their own harm, hunting down copies of content, flagging it repeatedly, and navigating opaque reporting systems that offer little support and no urgency. And unfortunately, if they are not public figures, and if they are not politicians, the response is even more delayed, if it comes at all. And in my work at the Meta Oversight Board, the same patterns show up, just on a global scale.
Last year, we reviewed two cases of deepfake intimate imagery, one involving a US public figure, a celebrity, and another involving a woman from India. And Meta responded quickly in the US jurisdiction, because media outlets had already reported on it, but in the Indian case, the image wasn’t even flagged or escalated, and it wasn’t added to Meta’s media matching service until the Oversight Board raised it. And what we noticed as a board is that if the system within these platforms only works when the media pays attention, what happens to the millions of women in the Global South who never make headlines? So we pushed Meta, in our recommendations in the case, to change its policy. We recommended that any AI-generated intimate image should be treated as non-consensual by default, that harm should not have to be proven through news coverage, and we advised that these cases be governed under the adult sexual exploitation policy, not buried under bullying and harassment, because what’s at stake is not just tone, it’s bodily autonomy. And one thing is deeply concerning: Meta, like several other platforms, has recently scaled back its proactive enforcement systems, now focusing mostly on illegal or high-severity cases while shifting the burden of content moderation onto users. That may sound like empowerment, but let me tell you, it looks very different on the ground. In South Asia, many users don’t know how to report. And even when they do, the systems are in English. They are not even in our regional languages. The processes are opaque, and the fear of backlash is very real. In India, for example, we have documented cases where women reporting abuse ended up being harassed further. That’s the same case in Pakistan. It’s not just by other users, but by the very mechanisms that are meant to protect them. And I’ll stop here, and we’ll add more in the policy-level debate.


Alishah Shariff: Thank you. Thank you. I think there was so much in there. And I think what’s really coming through is that if we have this right to privacy and right to freedom of expression, that should be for all of us everywhere around the world. And the way that we are then treated when something does go wrong should also be equitable, because you can’t put it all on the individual to try and get all these images taken down. I think we’re definitely seeing a lot more non-consensual intimate imagery abuse in the UK as well, and actually there’s still a big gap between the real harm and the regulatory response and legislative approach catching up with it. So thank you so much. And so next, we’ll turn to Arda. So, Arda, you’re the president of the Authority for the Prevention of Online Terrorist Content and Child Sexual Abuse Material in the Netherlands, which regulates online content by ordering the removal of terrorist and CSAM content. So how do you strike a balance between online rights, the promotion of a safer online environment, and law enforcement? And what are some of your areas of concern?


Arda Gerkens: Yes, thank you. Thank you very much for inviting me on this panel. To address one of the last points in your question, how do we deal with law enforcement: we basically only target the content. So we’re not looking for perpetrators, those who are uploading it or downloading it. That’s not of interest to us. But of course, certainly when it’s terrorist content, but also with child sexual abuse material, when there’s anything that is worrying for us, then we’ll report it to law enforcement so they can act upon it. And also, we have something that’s called deconfliction, just to make sure that we’re not taking down material in areas where police or other services are already investigating, to make sure that we don’t harm their investigation. So far, that hasn’t happened yet. So I think we’re doing a good job. The other question is about how you balance human rights. And of course, with the powers we have, which is a very important power, I think, taking down or at least ordering the takedown of material, comes great responsibility. And definitely, when you look at the field of terrorism, it can easily be abused and harm freedom of speech, right? So we need to see how we can balance that. Well, first of all, we have legislation, so we have to hold to the standard of this legislation when we send out removal orders. But the legislation is quite broad and sometimes vague. For instance, one of the reasons for addressing something as terrorist content is the glorifying of an attack. Well, what’s glorifying? So what we’re doing at the moment, together with other European countries, as this legislation is European legislation, is clarifying that law to see: what do all of us think is glorifying? What is a call to action? So that we can refine that and make it quite clear, also to the outside world, how we assess the reports we get, and what threshold must be met before we send out a removal order.
And then again, of course, we can also give that to the platform, saying: listen, if it meets these and these criteria, then maybe you should take it down before we send our removal order. That would be much better than us sending removal orders. So this is on terrorist content. And as you can imagine, with child sexual abuse material, that’s quite clear. There’s no debate about it. There shouldn’t be a debate about it. And I don’t think there’s any question of freedom of speech or any other human rights except for the rights of the child that’s involved. But if you look at the removal of this type of content, you’ll see that for terrorist content, the majority of the material we find will be on a platform. But for child sexual abuse material, unfortunately, as the Philippines has their downside, we have our downside that the Netherlands is a notorious hosting country for this kind of material. So we’re basically focusing our actions on hosting companies. Now, some of them are really bad actors, so this kind of imagery would not be the only bad thing on their platforms. But there are also very many legitimate websites as well. So we need to make sure that we’re proportionate in our actions. We have really strong powers. We are able to even pull the plug on the internet, almost, let’s say, that way. Or we could even make sure that access providers block the access. But if you do such a thing, you need to make sure that you’re not harming innocent parties or companies involved. So again, we need to be very precise and know very well what we’re doing. And so basically, for all this work, we engage a lot with industry to know the techniques. I think it was Paul who said here yesterday that for politicians, it’s very important to know the technical aspects of the online world. So it is for us. We know a lot, but we don’t know everything. There are lots of people who are much smarter than we are. So we engage with them.
And we have an advisory board who helps us make the difficult decisions. But we also engage with civil society to make sure that we uphold all those rights which are there, to be able to balance it. And in the end, of course, it’s our decision. But we have to be able to explain to the public, to you, why we took that position, and whether we looked at the downsides and the effects of it. And yeah, so that’s how we’re doing it. And it’s a very interesting job, I think. Now, on the matter of concerns for vulnerable groups, something I would like to address is something that we are currently seeing happening in the space of what used to be, I think, terrorism. I say used to be because terrorist actions used to be quite clear-cut. It’s either right-wing terrorism, look at the Christchurch shooting, or it’s jihadism; many of the attacks are well known from that. But we see more and more hybridization of these types of content mixed together with other content. So recently, we’re finding within the online terrorist environments lots of child sexual abuse material. And we find that vulnerable kids in particular are at risk online at the moment, because we find that these terrorist groups, these extremist groups, are actually targeting vulnerable kids. For instance, they create a Telegram channel where kids can talk about their mental health state or eating disorders. They groom information out of them. And with that information, they then extort them. And they let them do horrible things like carving their bodies or making sexual images, which are then again spread. And we can see that this kind of material is radicalizing the kids very swiftly. And recently in Europe, we had some very young kids who were on the verge of committing attacks. And so what we see now is that this is accelerating at a very fast pace. And as our focus is on terrorism and child sexual abuse, we cannot speak on eating disorders or mental health problems.
But we know here at the table, too, there are lots of organizations who address these problems. But they’re probably not aware of these things happening. It’s all in the dark. And I think, again, if you talk about protection of vulnerable groups online, we need to bring these things to light. Like you basically said, the one case is brought to light by media. The other case is not brought to light by media. I think it’s up to us to bring it to light that these things are happening online. So at least the awareness is out there for parents and other caregivers to take care of the kids. But also for adults, that if somebody finally is able to speak about what’s happening, you are there to help them and support them. But yeah, we need much more to be done here as a coordinated approach to tackle this problem.


Alishah Shariff: Thank you, Arda. I think there was a lot in there in terms of proportionality and having a position that you can defend that is balanced. I think this point on hybrid threats is also really interesting. It’s something I haven’t heard before personally. And yeah, I think how you have a response that works across the whole system when these threats are hybrid and blended is really tricky, but also important to get right because there’s a lot at stake. So thank you. So next we’ll turn to Sandra. And if you want to just do a short introduction, that would be great. And then I’ll get to your question. OK.


Sandra Maximiano: So I’m Sandra Maximiano. I’m the chairwoman of ANACOM, the Portuguese National Authority for Communications. At the moment, ANACOM deals with electronic communications and postal communications, but it is also the digital services coordinator, so it works on digital matters as well, and it is also responsible for online terrorism and all these new issues, and has some competences under AI. So it is quite a broad authority. I’m an economist, specialized in behavioral and experimental economics. Thank you.


Alishah Shariff: So bringing together those two roles, I guess, as a regulator and also a behavioral and experimental economist, can you explain what behavioral economics is about and how it can be used to protect vulnerable groups online?


Sandra Maximiano: So let me first say that if we were rational human beings, we would probably not need to care so much about safety or have a big concern, because we would be super rational and able to understand what is good and bad and immediately react upon that. But we are not. So behavioral economics is actually a field that blends insights from psychology and economics to fully understand how humans make decisions. And they make decisions not like machines. They don’t really maximize their welfare all the time, but they are affected by social issues, by social pressure, by their own emotions. And we are all affected. So, we use shortcuts, which we call heuristics, to make decisions, and we have a ton of cognitive biases. And these cognitive biases significantly influence how users interact and behave in an online context, and we have to take that into account. For instance, I can give you some very quick examples, like confirmation bias. Users may seek out information or sources that align with their existing beliefs, leading to echo chambers on social media platforms. This can, of course, perpetuate misinformation, stereotypes, and false beliefs, and limit exposure to diverse perspectives. Another one, overconfidence bias.
Users may overestimate their online security knowledge, leading to risky behaviors such as using weak passwords or ignoring security updates. Optimism bias: we underestimate the risks of online scams or data breaches, believing we are less likely to be targeted than others, which can lead to inadequate precautions. And on top of that, while we all suffer from these biases, some groups suffer even more. If we think about children, about some disabled groups, about people with mental health problems, these biases influence their decisions even more. We as regulators have to take that into account. So we should, of course, be aware of how these biases are used to exploit the decision-making process online, and we have to fight with the same weapons. Basically, we have to make use of these biases ourselves and try to help people take good decisions. We have to understand these cognitive biases and also be aware that we can use them to help individuals take more informed decisions. AI can also increase the economic value of these cognitive biases. Why? Because AI enables firms and organizations to exploit these biases even more and expose people to even higher risks. We have to be aware of that. Also, AI systems do not need to exploit vulnerabilities to cause significant harm to vulnerable groups: systems that merely overlook the vulnerabilities of these groups can potentially cause significant harm. I can give you an example. Individuals with autism spectrum disorder often struggle with understanding non-literal speech, such as irony or metaphor, due to impairments in socially recognizing the speaker’s communicative intention. In recent years, chatbots have become very popular for engaging with and training individuals with autism to enhance their social skills.
If a chatbot is trained solely on a database of typical adult conversations, it may incorporate elements such as jokes and metaphors, and individuals with autism may interpret them literally and act upon them, potentially leading to significant harm. So we have to be aware. As regulators, we really have to be aware of both intentional and non-intentional harms that can be caused to individuals. But as I said, we can also use these biases to help individuals make good decisions and to protect vulnerable groups online. Behavioral economics can be used to enhance online protections for vulnerable groups, such as children, disabled users, and marginalized communities, in many ways. We can better design user interfaces: websites and applications can be designed with user-friendly interfaces that consider the cognitive load of users. We can nudge safe behavior: platforms can implement nudges that guide users toward safer online behaviors. And presenting information about online risks in a clear and relatable way can improve understanding and compliance. This is particularly important. For instance, just to finish, regarding cyberbullying, behavioral economics can also play a significant role in protecting children. We can apply its principles to education and awareness campaigns, again framing information in a way that makes it very clear and very relatable for users. And we can use social norms. Social norms can be a real problem, because people feel pressure to follow what others do; this is a real concern with the online challenges that many children engage in and that put them at risk. But at the same time, we can use social norms messaging: highlighting positive behaviors and peer support through campaigns can shift perceptions around cyberbullying. By emphasizing that most children are not engaging in cyberbullying behavior, we can create a social norm against it.
So this is the point I want to make: we have to understand all these behavioral biases that are putting our children, and this is just an example, that are putting all of us at risk online, but we can use the same weapons to encourage safer behavior. We really have to understand them and then play with the same weapons as regulators. Using nudges to encourage reporting: nudges that remind children of the importance of reporting bullying can increase reporting rates, and there are studies that confirm this. Programs can be designed to teach children how to respond to cyberbullying effectively, and behavioral economics can inform the design of these programs. We can also incentivize positive online behavior and test different incentives, such as gamification and reward systems: schools and online platforms can implement reward systems that recognize and incentivize positive online behavior, and this can be tested using experimental tools. So this is just an example, and there are many, many more. Online platforms can adopt clear policies against cyberbullying and communicate them effectively to users. Again, behavioral economics can help in framing these policies to highlight the collective responsibility of users to maintain a safe online space. This is the point, again, that I want to make, and this is an example. The same can be applied to understanding algorithmic discrimination: how it works, how biases increase this discrimination, but at the same time how we can use nudges and behavioral insights to fight the biases that are perpetuated in some algorithms. So the message I want to leave is that, especially if you are a regulator or a policymaker, be aware of behavioral insights. People are using them to make others behave in a way they want; firms do it a lot to sell more, and marketing strategies all make use of behavioral insights. So we as regulators have to use the same weapons, but for another purpose, with another goal in mind. That’s it.


Alishah Shariff: Thank you, Sandra. I think it’s great to have a different perspective on the issue; I’ve never really heard anyone come at this from a behavioral bias perspective, so thank you so much for that. And I think, you know, how we actually turn this on its head and use gamification and these tools to incentivize slightly different behavior is a really interesting question. Something we’ve come back to quite a lot in discussions has been the role of platforms, so I have a quick-fire question for each of our panelists before we open to the floor: what forms of accountability beyond content takedowns should platforms adopt to protect marginalized users? So I might start with Arda and go from this side.


Arda Gerkens: Thank you very much for that question. Well, first of all, I think we need to understand that the platforms do a lot already. I think we should start from the positive side, right, because there’s a lot we can say about the platforms, but they do put a lot of effort in. The effort is there when it doesn’t cost them any money; when it comes to the revenue, then it gets, you know, difficult. One thing is indeed to take down content, but there’s a lot you can do with the algorithms by bringing extra attention to some of the material or by lowering it in the attention, and here I think there’s still a big opportunity. A piece of content in itself is not necessarily harmful. It could be harmful, but if it’s only viewed by three or four people, it’s not a problem; once it spreads and has been seen by millions, that’s where it gets harmful. But then again, when it’s spreading that fast, that’s also the way the system works, right? It’s there because the platform wants content to spread, get more attention, and therefore get more viewers, and more viewers means more advertisement, which means more money for the platforms. So if we start a debate with them, I would really like to speak with them about their policy around moderation, in the sense of ranking material lower in their feeds or bringing it up higher.


Alishah Shariff: Thanks, Arda. I think next we’ll turn to Nighat. Do you want me to repeat the question? No, you’re good.


Nighat Dad: I think some platforms are doing a lot, not all, and I guess we should look at the positive side of the platforms that have oversight mechanisms that are still working, that gave some good decisions and recommendations which actually improved their policies. But at the same time, I think we really need to ask what to do about the platforms that are still thriving in our jurisdictions but have absolutely no accountability. They no longer have trust and safety teams; they don’t have human rights teams. I’m talking about X here. I don’t think anyone in this room has any point of contact with X for escalating content or addressing the disinformation that thrives on that platform. And it’s very interesting for me to see, over a number of years, that when we talk about platforms in the North, it’s easier to say that we should move on to alternative platforms like Mastodon or Bluesky. But the problem in our jurisdictions is that the user base is not that digitally literate, and people are very comfortable with the platforms they already have. Neither civil society nor the government has that kind of access to these platforms. So I’m very concerned: what are we thinking about those platforms? At the same time, there are platforms that listen to all government requests, and their takedown numbers are very high. That’s where many have mentioned necessity and proportionality, and I don’t think many jurisdictions are actually respecting that. So I think we really need to see what oversight or accountability mechanisms are out there, and what different actors are doing. Not just government; government makes policy and regulation, but what does that regulation look like? Does it really respect the UN Guiding Principles or international human rights law when it comes to content moderation or algorithmic transparency?
At the same time, what are other actors doing? Platforms at the moment have much more power in our part of the world. We do not have a Digital Services Act, but our governments are coming up with their own kinds of regulation, which might not be as ideal as the DSA and might not have the kind of enforcement power that the DSA has. So we really need to see what kind of precedents we are setting.


Alishah Shariff: Thank you. I think from our first two speakers there’s definitely something coming through around transparency about what the platforms share with us, whether that’s how their content moderation processes work or other things, and also a point around accountability. But also, as you said, Nighat, in designing this new regulation we’ve got to take into account privacy and freedom of expression, get the balance right, and then also be able to enforce effectively. So next I will turn to you, Chinni.


Teo Nie Ching: Yeah, a few things I would like to highlight. First of all, I would like to see the platforms improve their built-in report mechanisms. My experience in Malaysia would be that sometimes even a public figure, a prominent figure such as Dato Lee Chong Wei, a very famous badminton player from Malaysia, is targeted: scammers use his video and his photo to create scam-related posts. And even though he has a Facebook account with the blue tick verification badge, him lodging a report through the built-in report mechanism is not going to be helpful. He himself needs to compile all the links and send them to me and to MCMC, and then we need to forward them to Meta for the scam-related content to be taken down. So first of all, the self-reporting, built-in report mechanism is not functioning, and that is putting a heavier burden on the regulator to do the content moderation job on behalf of the platform. I do not think that is fair. Second, we talk about transparency. Even when the scam-related posts are taken down, what actions are taken by the platform against the scammer? I think that is the question we need to pose to the platform providers, and I’m hoping to get an answer from them. How much advertising revenue do they collect from Malaysia each year? Do we know? I don’t have the figure. How much advertising revenue do they collect from ASEAN collectively? We never have the figure. But for me, if you only take down the scam-related posts, it’s not sufficient, because I need to know what type of action is being taken by the platform against the person who sponsored the post. Shouldn’t that person be held responsible as well? Because we don’t have that type of transparency, it’s very difficult for us to hold the platform accountable. And then, again, I would like to add a little bit more on the algorithm part.
Because I think algorithms are very, very powerful. However, when platforms design their algorithms, their only purpose is to make the platform more sticky, so that users will spend more time on it. I think it’s time for the public, for the general public and for civil society, to also have a say in designing the algorithms, so that we can practice a so-called information diet, as proposed by one of my favorite authors, Yuval Harari: we also need to make sure that the information consumed by social media users every day is actually healthy content, and not just whatever content they like. Because I think that can be very, very dangerous.


Alishah Shariff: Thank you. Yeah, absolutely. I think the incentives of these platforms, understanding the stickiness point with algorithmic promotion, and the advertising revenues are another whole piece of the puzzle that we could have a separate discussion about. But thank you. And next, I’ll turn to Raoul.


Raoul Danniel Abellar Manuel: Yes, thank you. Before this month of June, in the House of Representatives we had a series of hearings by three House committees, namely the Committee on Information and Communications Technology, the Committee on Public Order, and the Committee on Public Information. And the topic of takedowns was discussed. In the fifth, or final, hearing we have had so far, the government and representatives from Meta reported to the public hearing that they had an unwritten agreement that would enable the government to send requests for content takedowns to Meta. And our reaction at that time was: without any written basis or any law that explicitly sets the standards as to what content can be taken down and what should stay online, it will be a slippery slope to use content takedowns as the primary approach to ensuring that our online spaces are safe. It can mean decisions being made in the shadows, with people not being aware of or informed about the basis for takedowns. That’s why, beyond takedowns, we really assert that platforms have a major responsibility. For example, when they can already monitor notable sources of content that is harmful to children, women, LGBT people, and other marginalized groups, be it bullying, hate speech, indiscriminate tagging, posts promoting scams to Filipinos, or posts promoting hate speech to Filipinos, then platforms should proactively report those sources to government. Platforms should also work with independent media and digital coalitions so that, aside from going after each piece of content, which would be very tedious and laborious, we also focus on the sources used to promote a certain narrative or discourse, so that we are not just reactive in our approach. Being proactive would be the better way to go. So that’s my piece. Thank you.


Alishah Shariff: Thank you so much, Raoul. I think that’s really interesting on knowing the sources. And you touched on a really important point about independent media, which is sadly in decline in a lot of the places where we live. We’ll go to Neema next.


Neema Iyer: Thank you. So I want to shift gears a bit and talk more about the actual design of platforms. I am a member of Meta’s Women’s Safety Board, and sometimes they bring us in on design decisions that they make. Echoing some of the opinions of my colleagues at the other end of the table, it’s really difficult work. It is so difficult to make these little design choices on the platform that impact user behavior. The thing I want to talk about is that content takedown is a very reactive measure that happens after the fact. The content is already shared; you go through this mechanism; it can take days, months, years, or it will never happen. It will never be taken down. That also happens. I’ve reported many times, and it doesn’t get taken down. And there’s no sense of justice for the people who are wronged: the content has already been up there, and then you take it down, but the damage is already done. The wound is already there. So I think it would be interesting to think about what kinds of design friction you can introduce that stop the content from being shared in the first place. And I think my behavioral economist colleague will probably have more to say. But how do you stop it from happening, so that you’re not in the position of having to take it down? As Arda mentioned, guidelines and practices are already being developed that it would be nice for platforms to use for takedowns, but what if these were applied before the content even goes up? Or when someone goes online to insult a woman, for whatever reason, there could be a nudge that says: are you sure you want to do that? What do you benefit from saying this? But then, of course, on the other end of that, it’s also very problematic. So I really want to acknowledge that this method is problematic, because this sort of shadow banning has been used against feminist movements and against marginalized people, to silence them.
So when you talk about issues like colonization, racism, any of these issues, your posts actually don’t get shown. And this is the problem, because we don’t have transparency on the algorithms that show or hide information. Really, all of us are at the mercy of the moral and political ideologies of whoever owns the platform. If they’re a right-wing, anti-feminist person, then those are the rules of the platform, and we are all tied to those rules. So what would be lovely, in a really perfect world, would be if these algorithmic decisions were co-created by all of us, and we understood that for child protection or counter-terrorism, we have all decided these are the things we don’t want to be posted or shown. We have decided it as government, as civil society, and as the platforms, coming together. I think we really need platforms to take that accountability, to be more transparent, to do more audits, and to do more research with governments and civil society, so that we’re not looking at the platforms as enemies but acknowledging that they do a lot, and that there is more need for us to collaborate on setting the guidelines. So, thank you.


Alishah Shariff: Thanks, Neema. So yes, just having that multi-stakeholder voice in shaping the things that govern the platforms we interact with; and I really liked your point on introducing design friction, I think that’s a really interesting one. Finally we’ll turn to Sandra, and then we’ll go to questions from the floor.


Sandra Maximiano: So I couldn’t agree more with what has been said so far. Think about this: if you wanted to do a skydiving activity, you go to a firm, you sign up for the service, and you always get a briefing about the security and safety measures that you need to take. You are buying a service, and the firm that offers that service is obliged to provide that briefing. What I would really like is for online platforms that are providing us a service to also be obliged to give at least these briefings to all of us, about safety and about the measures we need to take as human beings. We need to be aware of our cognitive biases, as I said, and of how all this content and all this online interaction may impact our decisions and our behavior. So I think platforms should be more obliged to provide us that sort of information. Then, what is illegal offline should be illegal online; that’s the main principle. But then we have a gray area: what is not illegal offline, but which perhaps should be made illegal online or taken down. Here I am more in favor of measures like nudge interventions, applying these behavioral insights, increasing awareness, giving more education, improving digital literacy, and of course making us better users of online content, aware of what is out there that can really damage us. But of course, it needs to be much easier for users to complain to platforms, and that is one of the biggest problems nowadays. We can see, as digital services coordinators, that the first step users have to take is to complain to the platform, and then it’s very hard; it’s even hard to know whom to contact. This is something platforms need to be responsible for: taking those complaints seriously and responding to users appropriately.
And of course, about algorithms: more audits are really needed. Regular auditing of algorithms for bias can help identify and correct discriminatory patterns. Diverse development teams are also something platforms should look for: building diverse teams of developers and stakeholders can help mitigate biases in algorithms. Transparency and accountability: making algorithms more transparent can allow users to understand how decisions are made, which can also help identify potential discrimination, and, again, gives users more education. Also, playing again with the behavioral angle, default settings are a very important point for behavioral economists. Setting stronger privacy defaults can protect vulnerable groups: for instance, social media platforms can make private accounts the default setting for children, ensuring that their information is more secure unless they choose to change it. So changing the defaults, playing with those, is also very, very important. Basically, we have to be aware, as I said, of these cognitive biases. Platforms should give us more information about the cognitive biases all of us face, give us briefings, information, and education, and be more accountable and transparent.


Alishah Shariff: Thank you, Sandra. That’s great. I think that’s been a really thought-provoking set of interventions on that question and now we will open to the floor. We’ve got about 15 minutes. So, if you’d like to ask a question, I’d encourage you to go to the microphone at the front just so that we can make sure everyone can hear and we’ll put these headsets in.


Anusha Rahman Khan: Thank you very much. I’m Anusha Rehman Khan. I’m a former minister for information technology and telecommunications; I remained the minister for five years, and I’m somebody who enacted the cybercrime law in 2016, which introduced 28 new penalties and criminalized these offenses. Violating the dignity of a natural person became an offense that would result in a criminal penalty of going to jail or being fined. So we all know that it is important to legislate. We also know that when we are legislating and creating new offenses of this nature, the interest lobbies, the interest groups, come out very strongly against such activities. And we all know that the funding is provided by the commercial interest holders. So when I was trying to make the enactment in 2016, I faced huge resistance from the interest groups. At that time, it was difficult for people to appreciate how they were being played in the hands of commercial interests. And later on, we found out that the same interest groups treated the law that was enacted as an opportunity to generate revenue for their own interests. So this is a game that is being played globally, and by now we have seen the games that are being played for this revenue generation at the cost of the dignity of the natural person. It is not just the women, not just the children, not just the girls; it is all the people on this globe who are affected by abuse online. Now, the questions. My question and my ask is of the Minister from Malaysia. I’ve heard you, and you are very eloquent, and your clarity is really appreciated. What do you think, in your experience, after legislating in Malaysia: have you been able to overcome the difficulties that enforcement entails?
Because I feel, having been the former minister, now chairing the standing committee, and having been part of the information technology ecosystem for the last 32 years, that the time has come for us to stop begging the social media platforms. We cannot continue to remain hostage to requests made for the welfare of our citizens. So what is it that we can do together to make sure we introduce mechanisms where we do not expose our children, our girls, and our women to those people who probably have a different philosophy about online content? People sitting, perhaps, in the West have a different ideology and a different legal system governing them, but people sitting in the East have a different value system. We are a country where a single aspersion on a girl can cause her to jump from the window without waiting for the content to be removed. This has been the major issue for me: we in the East and the Far East live in a different value system. What is it that we can come up with together today and bring out as a solution? I do not think that the commercial interests and the revenue generation are going to allow you to provide the civil protection that is needed. So maybe you could guide me and tell me what is in your mind that we need to do, and come forward with some very solid recommendations. Thank you.


Alishah Shariff: Thank you. Are you happy to answer that? I think maybe if we could do a really quick response to that one and then maybe also.


Teo Nie Ching: Thank you, madam. Thank you for your questions. Frankly speaking, after what we have been trying to do in Malaysia, passing the law is easy. Being in the government means that we have the majority in Parliament, so passing the law is relatively easy. Of course we have to do a lot of engagement, consultation, etc., but passing the law itself is not too difficult. However, as you rightly pointed out, enforcing it is super, super difficult and super, super challenging. As I mentioned to everyone here just now, we need to admit that even though Malaysia introduced this licensing regime, supposed to be implemented since the 1st of January this year, until today only X, TikTok (that is, ByteDance), and Telegram, which have more than 8 million users in Malaysia, have come to us and got the license. Until today, Meta and Google have yet to apply for the license from the Malaysian government. But the next question would be: what can we do? First of all, I think it is too difficult for Malaysia alone to deal with these tech giants. So I’m really hoping that we can have a common standard imposed on these social media platforms. My neighbouring country, Singapore, is doing something that I myself think is a good idea: they impose a duty on Meta, which must verify the identity of every advertiser if the advertisement is targeting Singapore citizens. And Meta is actually doing that, partly because Meta has an office in Singapore and is deemed to be licensed as well. So my question would be: why can’t you do it for Malaysia? Because if you verify the identity of the advertiser, then it will be much, much easier for us to identify who the scammers are and who is behind these accounts promoting online gambling, etc. Why are you only doing it in Singapore and not the rest of the world?
So to me, it is very, very important that we have an international organisation identifying the responsibilities that should be carried out by the platforms, instead of each individual country doing it alone, because Malaysia’s negotiation power is just too limited. And at the same time, to overcome the issue of the standard being set by the West, I think it is very, very important for us to engage these platforms as a bloc. For example, instead of Malaysia trying to engage with these platforms alone, we are hoping that ASEAN as a whole can engage with them. If a platform engages with Malaysia alone, maybe they worry that the Malaysian government will abuse its power to restrict freedom of expression; but how about ASEAN as a bloc? As ten ASEAN countries, we have similar cultures and we understand each other better, and therefore we should be able to set a standard that actually fits our cultural, historical, and religious background. So I think it is important for us not to apply one single standard, but to understand the world as multipolar, with different regions where we can sit down and discuss the standards that should be imposed on platforms in each region. That is something I would really like to propose. Thank you.


Alishah Shariff: Thank you. Okay, there’s going to be some future cooperation here, so that’s great. I’ll turn briefly to Nighat, who also wanted to provide some comments; if we can keep them short, that would be great.


Nighat Dad: Very briefly. I think governments really need to understand that we are here in a multi-stakeholder spirit, and when we make national policies, multi-stakeholder means government, industry, and civil society. Civil society plays a critical role because, when governments present policies and regulations, it is the role of civil society to think through the critical points and nuances and to hold the government accountable as well. When we are talking about accountability, it is about all powerful actors, government and platforms. Thank you.


Alishah Shariff: Yeah, that’s a really important perspective to bring. Okay, we’ll go to our next question in the room.


Andrew Campling: Thank you. Good morning. My name is Andrew Campling. I run a consultancy, and I’m a trustee of the Internet Watch Foundation, which, with partner hotlines, finds and takes down CSAM around the world. Over 300 million children annually are the victims of technology-facilitated sexual abuse and exploitation; that’s about 14% of the world’s children every year. So with that in mind, does the panel agree that we should mandate the use of privacy-preserving age estimation or verification technology to stop children from accessing adult platforms and adult content, and also to stop adults from accessing child-centric platforms and opening child accounts so they can target children? And does the panel agree that we should make better use of technologies like client-side scanning, which can be done in a privacy-preserving way, to prevent messaging platforms like WhatsApp from being used to share CSAM at scale around the world? Thank you.


Alishah Shariff: Thank you. I think we’ll take one more question, and then I’ll open it up.


Audience: Thank you very much, and I must start by congratulating the panel. It looks like there is a bit less testosterone on the panel today. It was a girls’ day this morning.


Audience: My name is John Kiariye from Kenya, and mine is more of a comment: seated at the IGF, we are able to have a conversation around what it is that regulators can do, and regulators have other platforms to be able to learn what to do with the technology that is availed to us. But if we are talking about human-centered design, we’ve got to remind ourselves that the offenders are human and the victims are human. And we have to look beyond what is happening online and see if there are opportunities in already existing human structures in the community. Because some of the technical stuff that we talk about at IGF is not practically applicable in some jurisdictions. For example, we come from places where big tech has got platforms that people are interacting with, but they do not have a physical presence in some of these jurisdictions. So you have no place to go and have a conversation with these big tech companies to ask them to do some of the things that we are saying at IGF. But if we look at an already existing structure within the community, then we might find an opportunity to empower the victim, in the sense that if it is a child who is under threat, in a school there are already existing social structures. There are social clubs. For example, from the lessons we are learning in Kenya, we’ve got clubs like the scouting clubs and the girl guides that already exist. And for young people, we know that if you make it cool, for them it becomes the truth. So what if this discussion starts offline for the victim, so that by the time they are getting online, they already have the tools and they’re empowered to get this done? Because the bully is a human and the victim is a human. If we concentrate only on the technology, we are losing a very big part, because this young person can be trained to be a bully, and they can do that online. But if they were trained offline, long before they got onto the internet, then maybe it can become a movement that saves a generation. So my point and my comment is that even as we are focusing on the technology, let us not forget that this is technology for humans, and there are already existing social setups. These social setups could be family, school, clubs, and all the other social setups that exist before we even get online. We will leave it to the regulator to deal with big tech, because that animal is too big for the victim to face up to. I thank you.


Alishah Shariff: Thank you, thank you. I think we’ll answer Andrew’s question first. There were two parts to that: one on age verification and the creation of child accounts, and whether that could be a preventative measure, and one on client-side scanning on device, and whether that’s a good proactive measure. I don’t know if there’s anyone in particular who wants to take that one. Would be good to hear from… yes? Yeah, okay, Neema and then Raoul, okay.


Neema Iyer: I think absolutely not. I live in Australia, and we just passed a social media ban for children. A year on, I have no idea what the plan for implementation is. And it really means giving all your data to these platforms. I think it’s a very slippery slope to a bad place. So my general opinion is no: we as humans need some level of privacy in our lives. And the fact is that people will get around all these things anyway. So I think there are better interventions, rather than taking away the last shred of our privacy.


Alishah Shariff: Thank you, and Raoul?


Raoul Danniel Abellar Manuel: On our end in the Philippines, we’ve had the observation that sometimes the best way to solve a problem is to find its underlying basis, because directly confronting the problem may not be enough. For example, in the case of CSAM and how young people are being used for these very bad objectives, we’ve had a realization that the economic basis is really a primary factor that drives children, and unfortunately their relatives, to this kind of livelihood so that they can live from day to day. So we also have measures to address issues like poverty and child hunger, alongside, of course, preventing the spread and proliferation of these kinds of materials that exploit children. And I would just like to refer to another point, regarding how difficult it really is, especially for those in the Global South, to hold social media platforms to account. I sympathize with our colleagues here, and I agree that we need to form a coordinated response, because in our case, when we invited representatives from these social media platforms, they did not attend our first two hearings, and their reason was simply that they did not have an office in the Philippines, so why bother to attend? We were insulted by that kind of response, because we just want concerted action on the issues we are talking about. So we threatened them with a subpoena and the prospect of arrest if they did not attend the hearing. Fortunately, by the third hearing, they attended, and that was the start of them sending representatives. But of course, we can’t act alone, and we really have to work collaboratively. Thank you.


Alishah Shariff: Thank you. We actually only have a couple of minutes left. I think, Sandra, would you like to offer a kind of final comment?


Sandra Maximiano: Yes, just to add to and reinforce the point that what is illegal offline should continue to be illegal online. And if we restrict children from accessing certain services and contexts offline, I think we should take the same approach online. But that doesn’t mean, of course, making every account private and banning every sort of possibility or behavior; there are better approaches than just going for the extreme options. I would also like to add that this last intervention was very important, and thanks a lot for it, because we are humans, and we need, of course, to be aware of our shortcomings and our biases as humans, and that needs to be taught, as was said, in schools. We need to be better prepared to deal with the exploitation of cognitive biases online, which is basically using technology to take advantage of them. So we need to be more aware of that, and we certainly need more digital literacy. But let me also add something as an economist. We are in a world where there are lots of incentives for platforms to start developing features that take safety and security into account, and to make a profit out of it. Here I’m just talking as an economist, and we will see that happening. There should be some minimum standards that apply to everyone, and regulators should impose those, but I’m also pretty sure that platforms will be selling all sorts of features that we, as users, will be able to buy and add on to our systems to increase our level of protection. So there is a huge market out there that is going to explore safety and security, and we should be prepared, as consumers and users, to make that choice. It will depend on our risk aversion and our safety preferences, but it will come.


Alishah Shariff: Thank you, Sandra. I think that is all we have time for today, so I’d like to say a massive thank you.


Arda Gerkens: Could I make one remark, which I think is really important? A positive message. Look at the way we are here together as regulators. I’ve been at the IGF for 15 years. There’s a lot changing, and there are a lot of politicians involved in that change. What we need to do now is come together globally, because, indeed, Malaysia has a problem with some platforms. Other countries might have problems with other platforms. Once we are able to get some platforms to comply with our regulations, other ones will pop up. We really need to work together globally. We’re part of the Global Online Safety Regulators Network, GOSRN. That’s a new initiative. I invite everybody who wants to be a part of it: please go to the website. Let’s see how we can tackle this problem, because it’s a global problem, and we need to work together here. Thank you.


Alishah Shariff: Thank you, Arda. I think that’s really the takeaway from this session for me: having this kind of multistakeholder, multidisciplinary discussion is the only way we will be able to tackle some of these challenges, and also to take into account intersectionality, geographical differences, and the way platforms behave differently in different jurisdictions. Just very quickly, the official opening of the IGF is at 11 a.m. in the plenary room on the ground floor, so we hope to see you there. Thanks once again to all the panelists, to all of you, and to our online audience. Thank you.



Neema Iyer

Speech speed

179 words per minute

Speech length

1577 words

Speech time

525 seconds

One in three women across Africa experience online violence, leading many to delete their online identities due to lack of awareness about reporting mechanisms

Explanation

Research conducted with 3,000 women across Africa revealed that approximately one-third had experienced some form of online violence. This abuse led many women to completely delete their online presence because they were unaware of available reporting mechanisms and felt authorities would not listen to them if they sought help.


Evidence

Research study with 3,000 women across Africa showing one in three women experienced online violence


Major discussion point

Online Safety Challenges for Marginalized Communities


Topics

Human rights | Sociocultural


Agreed with

– Nighat Dad
– Sandra Maximiano

Agreed on

Marginalized communities face disproportionate online harm with inadequate support systems


Women politicians face sexist and sexualized abuse online, causing many to avoid having online profiles or participating in digital spaces

Explanation

A social media analysis of women politicians during Uganda’s 2021 election showed they were frequently targets of sexist and sexualized abuse. The fear of such abuse meant many women politicians chose not to have online profiles or participate in digital political discourse.


Evidence

Social media analysis of women politicians during the Ugandan 2021 election


Major discussion point

Online Safety Challenges for Marginalized Communities


Topics

Human rights | Sociocultural


AI systems create grave issues including under-representation, data bias, algorithmic discrimination, digital surveillance, and labor exploitation affecting marginalized women

Explanation

Research on AI’s impact on women revealed multiple systemic problems that disproportionately affect marginalized women. These include biased data representation, discriminatory algorithms, increased surveillance capabilities, and threats to low-wage jobs typically occupied by women.


Evidence

Research study on the impact of AI on women showing under-representation, data bias, algorithmic discrimination, digital surveillance, censorship, labor exploitation, and threats to low-wage jobs


Major discussion point

Online Safety Challenges for Marginalized Communities


Topics

Human rights | Economic


Intersecting inequalities create large gaps in digital literacy and access, with platforms often not prioritizing smaller markets or local languages

Explanation

Marginalized communities face multiple overlapping disadvantages including limited digital access and skills. Platforms often neglect smaller markets, with countries like Uganda having 50+ languages but lacking platform support for local languages due to limited market share.


Evidence

Uganda has about 50 languages spoken but platforms don’t prioritize smaller countries due to limited market share


Major discussion point

Online Safety Challenges for Marginalized Communities


Topics

Development | Sociocultural


Laws designed to protect are often weaponized against women and marginalized groups, being used to punish rather than protect them

Explanation

Cybercrime laws, data protection laws, and other protective legislation are frequently misused to target women, activists, and dissenting voices. Instead of providing protection, these laws become tools of oppression against the very groups they were meant to safeguard.


Evidence

Cybercrime laws and data protection laws have been used against women, dissenting voices, and activists to punish rather than protect them


Major discussion point

Online Safety Challenges for Marginalized Communities


Topics

Legal and regulatory | Human rights


Agreed with

– Nighat Dad

Agreed on

Laws designed to protect can be weaponized against vulnerable groups


Content takedown is reactive and happens after damage is done, with need for design friction to prevent harmful content sharing

Explanation

Current content moderation relies on reactive takedown processes that occur after harmful content has already been shared and caused damage. There’s a need for proactive design elements that create friction to prevent harmful content from being shared in the first place, though this approach has its own risks of censorship.


Evidence

Personal experience reporting content that doesn’t get taken down, and acknowledgment that damage is already done even when content is eventually removed


Major discussion point

Platform Accountability and Content Moderation


Topics

Legal and regulatory | Sociocultural


Agreed with

– Arda Gerkens
– Nighat Dad
– Teo Nie Ching
– Raoul Danniel Abellar Manuel

Agreed on

Platform accountability requires transparency beyond content takedowns


Algorithmic decisions should be co-created by governments, civil society, and platforms together rather than left to platform owners’ ideologies

Explanation

Current algorithmic content moderation reflects the moral and political ideologies of platform owners, creating unfair power dynamics. A collaborative approach involving multiple stakeholders would ensure more balanced and transparent decision-making about what content should be promoted or suppressed.


Evidence

Shadow banning has been used against feminist movements and marginalized people discussing issues like colonization and racism


Major discussion point

Platform Accountability and Content Moderation


Topics

Legal and regulatory | Human rights



Nighat Dad

Speech speed

137 words per minute

Speech length

1238 words

Speech time

542 seconds

Digital Rights Foundation helpline has handled over 20,000 complaints since 2016, with hundreds of young women reporting monthly about blackmail and harassment

Explanation

The Digital Rights Foundation’s helpline has processed over 20,000 complaints since 2016, receiving hundreds of reports monthly from young women, female journalists, influencers, politicians, and students. These complaints primarily involve blackmail and harassment through non-consensual intimate images.


Evidence

Over 20,000 complaints handled since 2016 through digital security helpline, with hundreds of complaints from young women monthly


Major discussion point

Online Safety Challenges for Marginalized Communities


Topics

Human rights | Cybersecurity


Agreed with

– Neema Iyer
– Sandra Maximiano

Agreed on

Marginalized communities face disproportionate online harm with inadequate support systems


Rise of AI-generated deepfake content is causing reputational damage, emotional trauma, and social isolation, with some cases leading to suicide

Explanation

The increasing prevalence of deepfake technology has created new forms of harm where people are blackmailed and silenced using intimate images they never consented to, some of which aren’t even real. The psychological impact includes severe reputational damage, emotional trauma, and in extreme cases, suicide.


Evidence

Rise in deepfakes over the last one and a half years, with cases of women committing suicide due to the harm


Major discussion point

Online Safety Challenges for Marginalized Communities


Topics

Human rights | Cybersecurity


Platforms respond quickly to cases involving US celebrities but delay response to cases from Global South, highlighting inequality in treatment

Explanation

Meta Oversight Board cases revealed significant disparities in platform response times based on geography and prominence. A US celebrity’s deepfake case received immediate attention due to media coverage, while an Indian woman’s case wasn’t flagged until the Oversight Board intervened.


Evidence

Meta Oversight Board reviewed two deepfake cases – US celebrity case received quick response due to media attention, while Indian case wasn’t flagged until Oversight Board raised it


Major discussion point

Platform Accountability and Content Moderation


Topics

Human rights | Legal and regulatory


Agreed with

– Arda Gerkens
– Teo Nie Ching
– Raoul Danniel Abellar Manuel
– Neema Iyer

Agreed on

Platform accountability requires transparency beyond content takedowns


Meta’s recent scaling back of proactive enforcement systems shifts burden of content moderation onto users, particularly problematic in regions where reporting systems are in English only

Explanation

Meta and other platforms have reduced their proactive content moderation, focusing mainly on illegal or high-severity cases while expecting users to handle more moderation themselves. This is especially problematic in South Asia where users may not know how to report, systems are only in English, and fear of backlash is significant.


Evidence

Meta has scaled back proactive enforcement systems; reporting systems are in English, not regional languages; documented cases in India and Pakistan where women reporting abuse faced further harassment


Major discussion point

Platform Accountability and Content Moderation


Topics

Human rights | Sociocultural


Agreed with

– Neema Iyer

Agreed on

Laws designed to protect can be weaponized against vulnerable groups



Raoul Danniel Abellar Manuel

Speech speed

132 words per minute

Speech length

1436 words

Speech time

650 seconds

Philippines passed Republic Act 11930 addressing online sexual abuse of children with extraterritorial jurisdiction provisions

Explanation

The Philippines enacted Republic Act 11930 in July 2022 to combat online sexual abuse and exploitation of children. A key component is the assertion of extraterritorial jurisdiction, allowing the state to prosecute offenses that commence in the Philippines or are committed abroad by Filipino citizens against Philippine citizens.


Evidence

Republic Act 11930 lapsed into law on July 30, 2022, and includes extraterritorial jurisdiction provisions recognizing coordinated networks involving multiple locations


Major discussion point

Legislative and Regulatory Responses


Topics

Legal and regulatory | Cybersecurity


House of Representatives approved expanded anti-violence bill defining psychological violence through electronic devices as violence against women

Explanation

The Philippine House of Representatives passed legislation expanding the definition of violence against women to include psychological violence committed through electronic or ICT devices. However, the bill still awaits Senate approval in the bicameral system.


Evidence

House of Representatives approved the expanded anti-violence bill, but waiting for Senate deliberations in the bicameral system


Major discussion point

Legislative and Regulatory Responses


Topics

Legal and regulatory | Human rights


Amendments to Safe Spaces Act set higher standards for government officials promoting discrimination through digital platforms

Explanation

The Philippines approved committee-level amendments to the Safe Spaces Act that establish stricter standards for government officials who promote sexual harassment or discrimination against LGBT communities through digital or social media platforms.


Evidence

Amendments approved at committee level targeting government officials who discriminate against LGBT community through digital platforms


Major discussion point

Legislative and Regulatory Responses


Topics

Legal and regulatory | Human rights


Pending bill seeks to criminalize ‘red tagging’ – labeling individuals as state enemies or terrorists without basis

Explanation

The House of Representatives has a pending bill to criminalize the practice of ‘red tagging’ – falsely labeling individuals or groups as state enemies, subversives, or terrorists without proper basis. The Supreme Court has adopted this term, recognizing it as a source of harm that extends into the physical world.


Evidence

Supreme Court adopted the term ‘red tagging’ and recognized it as causing harm that transcends into the physical world


Major discussion point

Legislative and Regulatory Responses


Topics

Legal and regulatory | Human rights


Platforms should proactively report sources of harmful content to government rather than just reacting to individual posts

Explanation

Beyond content takedowns, platforms should take greater responsibility by proactively identifying and reporting sources of harmful content to government authorities. This would shift from reactive individual post removal to proactive source identification, working with independent media and digital coalitions.


Evidence

Platforms can monitor notable sources of harmful content including bullying, hate speech, indiscriminate tagging, scams, and should work with independent media and digital coalitions


Major discussion point

Platform Accountability and Content Moderation


Topics

Legal and regulatory | Cybersecurity


Agreed with

– Arda Gerkens
– Nighat Dad
– Teo Nie Ching
– Neema Iyer

Agreed on

Platform accountability requires transparency beyond content takedowns


Social media platforms initially refused to attend Philippine parliamentary hearings, claiming no obligation due to lack of physical office presence

Explanation

When the Philippine House of Representatives invited social media platform representatives to hearings, they initially refused to attend, stating they had no office in the Philippines and therefore no obligation to participate. Only after threats of subpoenas and arrest did they begin attending by the third hearing.


Evidence

Platforms did not attend first two hearings claiming no office in Philippines; attended third hearing after threats of subpoena and arrest


Major discussion point

International Cooperation and Enforcement Challenges


Topics

Legal and regulatory | Economic


Agreed with

– Teo Nie Ching
– Anusha Rahman Khan

Agreed on

Individual countries lack sufficient power to regulate global tech platforms effectively


Economic factors driving child exploitation must be addressed alongside technical measures to effectively combat child sexual abuse material

Explanation

The root cause of child sexual abuse and exploitation often lies in economic desperation, where poverty drives children and their families to engage in such activities for daily survival. Effective solutions must address underlying economic issues like poverty and child hunger alongside technical and legal measures.


Evidence

Philippines ranks as hotspot for online sexual abuse of children; economic basis drives children and relatives to this livelihood for day-to-day survival


Major discussion point

Age Verification and Privacy Concerns


Topics

Development | Cybersecurity



Teo Nie Ching

Speech speed

153 words per minute

Speech length

1789 words

Speech time

699 seconds

Malaysia amended Communications and Multimedia Act after 26 years, increasing penalties for child sexual abuse material and grooming

Explanation

Malaysia made its first amendment to the Communications and Multimedia Act in 26 years, significantly increasing penalties for dissemination of child sexual abuse material, grooming, and similar communications through digital platforms. The law imposes heavier penalties when minors are involved and grants the Malaysian Communications and Multimedia Commission authority to instruct service providers to block or remove harmful content.


Evidence

First amendment in 26 years to Communications and Multimedia Act, with heavier penalties when minors are involved and new powers for MCMC to instruct content blocking/removal


Major discussion point

Legislative and Regulatory Responses


Topics

Legal and regulatory | Cybersecurity


Malaysia developed code of conduct for social media platforms with over 8 million users, though major platforms like Meta and Google have not applied for licenses

Explanation

Malaysia implemented a licensing regime with a code of conduct targeting major social media platforms serving over 8 million users (about 25% of Malaysia’s 35 million population). However, despite the January 2025 implementation date, major platforms Meta and Google have not applied for licenses, while only X, TikTok, and Telegram have complied.


Evidence

Licensing regime for platforms with 8+ million users (25% of 35 million population); only X, TikTok, and Telegram applied for licenses while Meta and Google have not


Major discussion point

Legislative and Regulatory Responses


Topics

Legal and regulatory | Economic


Built-in reporting mechanisms are ineffective, requiring even verified public figures to compile links and send to regulators for content removal

Explanation

Platform reporting systems are inadequate, as demonstrated by cases where even verified public figures like badminton player Dato Lee Chong Wei cannot successfully report scam content using their accounts. Instead, they must manually compile links and send them to regulators, who then forward them to platforms for removal.


Evidence

Dato Lee Chong Wei, a famous badminton player with verified Facebook account, cannot successfully use built-in reporting and must send links to MCMC for forwarding to Meta


Major discussion point

Platform Accountability and Content Moderation


Topics

Legal and regulatory | Economic


Agreed with

– Arda Gerkens
– Nighat Dad
– Raoul Danniel Abellar Manuel
– Neema Iyer

Agreed on

Platform accountability requires transparency beyond content takedowns


Platforms lack transparency about actions taken against scammers and advertisers, making accountability difficult to assess

Explanation

While platforms may remove scam-related posts, there’s no transparency about what actions are taken against the actual scammers or those who sponsored the posts. Malaysia lacks access to data about advertising revenue collected from their jurisdiction, making it difficult to hold platforms accountable for their broader responsibilities.


Evidence

No transparency on actions against scammers who sponsor posts; no access to data on advertising revenue collected from Malaysia or ASEAN region


Major discussion point

Platform Accountability and Content Moderation


Topics

Legal and regulatory | Economic


Agreed with

– Arda Gerkens
– Nighat Dad
– Raoul Danniel Abellar Manuel
– Neema Iyer

Agreed on

Platform accountability requires transparency beyond content takedowns


Individual countries lack sufficient negotiation power when engaging with tech giants, requiring coordinated bloc approaches like ASEAN

Explanation

Malaysia’s experience shows that individual countries have limited negotiation power with major tech platforms. A coordinated approach through regional blocs like ASEAN would provide stronger negotiating positions and allow for standards that reflect regional cultural, historical, and religious contexts rather than Western-imposed standards.


Evidence

Meta complies with advertiser identity verification in Singapore but not Malaysia; Malaysia alone has insufficient negotiation power with tech giants


Major discussion point

International Cooperation and Enforcement Challenges


Topics

Legal and regulatory | Economic


Agreed with

– Raoul Danniel Abellar Manuel
– Anusha Rahman Khan

Agreed on

Individual countries lack sufficient power to regulate global tech platforms effectively


Different regions need different standards that meet their cultural, historical, and religious backgrounds rather than one-size-fits-all approaches

Explanation

Rather than applying universal Western standards, different regions should be able to establish standards that align with their specific cultural, historical, and religious contexts. Regional blocs like ASEAN, with similar cultural understanding, could set appropriate standards for platform regulation in their jurisdictions.


Evidence

ASEAN countries have similar culture and understand each other better, allowing them to set standards meeting their cultural, historical, and religious backgrounds


Major discussion point

International Cooperation and Enforcement Challenges


Topics

Legal and regulatory | Sociocultural



Arda Gerkens

Speech speed

170 words per minute

Speech length

1792 words

Speech time

629 seconds

Netherlands established unique regulatory body ATKM with special powers to identify and remove terrorist content and child sexual abuse material

Explanation

The Netherlands created ATKM, a unique regulatory body with special authority to dive into and identify harmful terrorist content and child sexual abuse material. The organization can order content removal and fine non-compliant entities, representing a first-of-its-kind regulatory approach with direct content intervention powers.


Evidence

ATKM is described as unique and first regulator with special right to dive into terrorist and child sexual abuse content, with power to remove content and fine non-compliant entities


Major discussion point

Legislative and Regulatory Responses


Topics

Legal and regulatory | Cybersecurity


Terrorist groups are increasingly targeting vulnerable children through platforms discussing mental health and eating disorders for grooming and extortion

Explanation

ATKM has identified a concerning trend where extremist groups create Telegram channels focused on mental health and eating disorders to target vulnerable children. These groups extract personal information through grooming, then extort children into harmful activities like self-harm and creating sexual images, which are then distributed.


Evidence

Terrorist groups create Telegram channels for kids to discuss mental health and eating disorders, groom information, then extort them to carve bodies or make sexual images


Major discussion point

Hybrid Threats and Emerging Challenges


Topics

Cybersecurity | Human rights


Hybridization of terrorist content with child sexual abuse material is radicalizing children rapidly, leading to cases of very young potential attackers

Explanation

There’s an emerging pattern of terrorist environments containing child sexual abuse material, creating hybrid threats that rapidly radicalize vulnerable children. This hybridization has accelerated to the point where very young children in Europe have been found on the verge of committing attacks.


Evidence

Finding child sexual abuse material within online terrorist environments; recent cases in Europe of very young kids at the verge of committing attacks


Major discussion point

Hybrid Threats and Emerging Challenges


Topics

Cybersecurity | Human rights


Coordinated approach needed to tackle hybrid problems that span multiple regulatory domains

Explanation

The hybrid nature of emerging threats requires coordination across different regulatory domains and organizations. While ATKM focuses on terrorism and child sexual abuse, issues like eating disorders and mental health fall under other organizations’ purview, necessitating collaborative approaches to address interconnected problems.


Evidence

ATKM cannot address eating disorders or mental health problems directly, but these issues are connected to terrorist grooming activities


Major discussion point

Hybrid Threats and Emerging Challenges


Topics

Legal and regulatory | Cybersecurity


S

Sandra Maximiano

Speech speed

123 words per minute

Speech length

2154 words

Speech time

1049 seconds

Users are affected by cognitive biases like confirmation bias, overconfidence bias, and optimism bias that influence online behavior and decision-making

Explanation

Behavioral economics reveals that users are not rational decision-makers but are influenced by psychological factors and cognitive biases. These include confirmation bias (seeking information that confirms existing beliefs), overconfidence bias (overestimating security knowledge), and optimism bias (underestimating personal risk of scams or breaches).


Evidence

Examples include confirmation bias leading to echo chambers, overconfidence bias causing risky behaviors like weak passwords, and optimism bias leading to inadequate precautions against online threats


Major discussion point

Behavioral Economics and Digital Safety


Topics

Human rights | Sociocultural


Vulnerable groups including children and people with disabilities suffer more from these biases, requiring regulators to account for this in policy design

Explanation

While all users experience cognitive biases, certain vulnerable populations including children, disabled individuals, and those with mental health problems are disproportionately affected. Regulators must understand and account for these heightened vulnerabilities when designing policies and interventions.


Evidence

Children, disabled groups, and people with mental health problems have cognitive biases influencing their decisions even more than general population


Major discussion point

Behavioral Economics and Digital Safety


Topics

Human rights | Development


Agreed with

– Neema Iyer
– Nighat Dad

Agreed on

Marginalized communities face disproportionate online harm with inadequate support systems


AI systems can exploit cognitive biases and overlook vulnerabilities, potentially causing significant harm even without intentional exploitation

Explanation

AI increases the economic value of exploiting cognitive biases and can cause harm to vulnerable groups even without malicious intent. For example, chatbots trained on typical adult conversations may use metaphors and jokes that individuals with autism interpret literally, potentially leading to harmful actions.


Evidence

Example of chatbots for autism training that may incorporate jokes and metaphors from typical adult conversations, which individuals with autism may interpret literally and act upon


Major discussion point

Behavioral Economics and Digital Safety


Topics

Human rights | Infrastructure


Behavioral economics can enhance online protection through better user interface design, nudging safe behavior, and using social norms messaging

Explanation

The same behavioral insights that create vulnerabilities can be redirected to enhance protection. This includes designing user-friendly interfaces that consider cognitive load, implementing nudges that guide safer behaviors, and using social norms messaging to promote positive online conduct.


Evidence

Examples include framing cyberbullying information clearly, using social norms to highlight that most children don’t engage in bullying, and implementing reward systems for positive behavior


Major discussion point

Behavioral Economics and Digital Safety


Topics

Human rights | Sociocultural


Regulators should use the same behavioral insights that firms use for marketing, but redirect them toward safety and protection goals

Explanation

Marketing strategies extensively use behavioral insights to influence consumer behavior and increase sales. Regulators and policymakers should adopt these same techniques but redirect them toward promoting safety, security, and positive online behavior rather than commercial objectives.


Evidence

Marketing strategies use behavioral insights to sell more; regulators should use the same weapons but with different goals in mind


Major discussion point

Behavioral Economics and Digital Safety


Topics

Legal and regulatory | Economic


Platforms should provide safety briefings to users similar to how other service providers are required to give security information

Explanation

Just as service providers in other industries (like skydiving) are required to provide safety briefings before service delivery, online platforms should be mandated to provide users with information about cognitive biases, online risks, and safety measures. This would help users make more informed decisions about their online behavior.


Evidence

Comparison to skydiving services that must provide safety briefings before service delivery


Major discussion point

Platform Accountability and Content Moderation


Topics

Legal and regulatory | Human rights


What is illegal offline should remain illegal online, but extreme restriction measures may not be the best approach

Explanation

The fundamental principle should be that illegal activities offline should also be illegal online. However, when it comes to restricting access for children or implementing extreme measures like complete bans, there are better approaches than blanket restrictions that may be overly broad or ineffective.


Major discussion point

Age Verification and Privacy Concerns


Topics

Legal and regulatory | Human rights


A

Anusha Rahman Khan

Speech speed

148 words per minute

Speech length

594 words

Speech time

239 seconds

Former Pakistani minister enacted a cybercrime law in 2016 introducing 28 new penalties criminalizing violations of the dignity of natural persons

Explanation

As Pakistan’s former IT and telecommunications minister, Anusha Rahman Khan enacted comprehensive cybercrime legislation in 2016 that introduced 28 new criminal penalties. The law specifically criminalized violations of natural person dignity, with offenders facing jail time or fines for online abuse.


Evidence

Cyber crime law enacted in 2016 with 28 new penalties, criminalizing dignity violations of natural persons with jail time or fines


Major discussion point

Legislative and Regulatory Responses


Topics

Legal and regulatory | Human rights


Commercial interests and revenue generation priorities conflict with civil protection needs, requiring stronger international coordination

Explanation

The fundamental challenge is that commercial interest groups and revenue generation motives of platforms conflict with the need to protect citizens from online harm. This creates a situation where countries become hostage to platform policies, particularly problematic when Western platforms apply different value systems to Eastern societies where online harm can have more severe consequences.


Evidence

Interest groups funded by commercial interests resisted cybercrime legislation; same groups later found revenue opportunities in the law; different value systems between East and West where single aspersion can cause suicide


Major discussion point

International Cooperation and Enforcement Challenges


Topics

Economic | Legal and regulatory


Agreed with

– Teo Nie Ching
– Raoul Danniel Abellar Manuel

Agreed on

Individual countries lack sufficient power to regulate global tech platforms effectively


A

Audience

Speech speed

148 words per minute

Speech length

460 words

Speech time

185 seconds

Existing social structures like schools, clubs, and family units should be leveraged to empower victims and prevent online abuse before it occurs

Explanation

Rather than focusing solely on technical solutions, existing community structures such as schools, scouting clubs, girl guides, and family units should be utilized to empower potential victims before they encounter online threats. These established social frameworks can provide foundational protection and education.


Evidence

Examples from Kenya including scouting clubs and girl guides; social clubs in schools that already exist as community structures


Major discussion point

Community-Based Solutions


Topics

Development | Sociocultural


Offline education and empowerment can prepare young people with tools before they encounter online threats

Explanation

By training and empowering young people through offline education and community programs, they can be better prepared to handle online threats when they encounter them. This proactive approach focuses on building resilience and awareness before exposure to digital risks.


Evidence

If young people are trained offline before getting online, and if the training makes safe behavior ‘cool’ for them, it becomes their truth and can save a generation


Major discussion point

Community-Based Solutions


Topics

Development | Sociocultural


Human-centered design must recognize that both offenders and victims are human, requiring community-level interventions alongside technical solutions

Explanation

Technology solutions alone are insufficient because both perpetrators and victims of online abuse are human beings embedded in social contexts. Effective interventions must address the human element through community-based approaches that work alongside technical measures, recognizing that many jurisdictions lack direct access to big tech platforms.


Evidence

Big tech platforms don’t have physical presence in many jurisdictions, making direct engagement impossible; both bullies and victims are human and can be influenced by community interventions


Major discussion point

Community-Based Solutions


Topics

Development | Human rights


A

Alishah Shariff

Speech speed

177 words per minute

Speech length

2027 words

Speech time

683 seconds

The digital world offers opportunities for connection, learning, and growth but also brings risks and downsides that are felt more acutely by vulnerable groups

Explanation

While digital technologies provide significant benefits for human connection and development, they simultaneously create new forms of harm and risk. These negative impacts disproportionately affect vulnerable populations including children, individuals with disabilities, and marginalized communities.


Evidence

Consequences of online harm can have ripple effects into real lives, causing distress, harm, and isolation


Major discussion point

Online Safety Challenges for Marginalized Communities


Topics

Human rights | Development


Effective policy responses to online harms require targeted, inclusive, and enforceable approaches developed through multistakeholder collaboration

Explanation

Addressing online safety challenges requires policy frameworks that are specifically designed for different contexts, include diverse perspectives, and can be effectively implemented. This necessitates collaboration between parliamentarians, regulators, and advocacy experts across different geographies.


Evidence

Session brings together diverse panel of parliamentarians, regulators, and advocacy experts across range of geographies and contexts


Major discussion point

International Cooperation and Enforcement Challenges


Topics

Legal and regulatory | Human rights


A

Andrew Campling

Speech speed

125 words per minute

Speech length

155 words

Speech time

73 seconds

Over 300 million children annually are victims of technology-facilitated sexual abuse and exploitation, representing about 14% of the world’s children

Explanation

The scale of child sexual abuse and exploitation facilitated by technology is massive, affecting approximately one in seven children globally each year. This statistic demonstrates the urgent need for comprehensive protective measures in digital spaces.


Evidence

Over 300 million children annually are victims, representing about 14% of world’s children; Internet Watch Foundation finds and takes down CSAM material with partner hotlines around the world


Major discussion point

Age Verification and Privacy Concerns


Topics

Cybersecurity | Human rights


Privacy-preserving age estimation and verification technology should be mandated to prevent children from accessing adult platforms and adults from targeting children

Explanation

Technical solutions like age verification can help create barriers that prevent inappropriate access to platforms while maintaining privacy protections. This includes stopping children from accessing adult content and preventing adults from creating child accounts to target minors.


Evidence

Need to stop children from accessing adult platforms and adult content, and stop adults from accessing child-centric platforms and opening child accounts to target children


Major discussion point

Age Verification and Privacy Concerns


Topics

Cybersecurity | Human rights


Disagreed with

– Neema Iyer

Disagreed on

Age verification and privacy-preserving technologies for child protection


Client-side scanning technology should be better utilized to prevent messaging platforms from being used to share child sexual abuse material at scale

Explanation

Privacy-preserving technologies like client-side scanning can help detect and prevent the distribution of child sexual abuse material through encrypted messaging platforms. This approach can maintain user privacy while providing protection against large-scale distribution of harmful content.


Evidence

Messaging platforms like WhatsApp are being used to share CSAM at scale around the world, which can be addressed in a privacy-preserving way


Major discussion point

Age Verification and Privacy Concerns


Topics

Cybersecurity | Human rights


Agreements

Agreement points

Platform accountability requires transparency beyond content takedowns

Speakers

– Arda Gerkens
– Nighat Dad
– Teo Nie Ching
– Raoul Danniel Abellar Manuel
– Neema Iyer

Arguments

Built-in reporting mechanisms are ineffective, requiring even verified public figures to compile links and send to regulators for content removal


Platforms lack transparency about actions taken against scammers and advertisers, making accountability difficult to assess


Platforms respond quickly to cases involving US celebrities but delay response to cases from Global South, highlighting inequality in treatment


Platforms should proactively report sources of harmful content to government rather than just reacting to individual posts


Content takedown is reactive and happens after damage is done, with need for design friction to prevent harmful content sharing


Summary

All speakers agreed that current platform accountability mechanisms are insufficient, with particular emphasis on the need for transparency in content moderation processes, proactive identification of harmful sources, and addressing geographic inequalities in platform responses.


Topics

Legal and regulatory | Human rights | Economic


Individual countries lack sufficient power to regulate global tech platforms effectively

Speakers

– Teo Nie Ching
– Raoul Danniel Abellar Manuel
– Anusha Rahman Khan

Arguments

Individual countries lack sufficient negotiation power when engaging with tech giants, requiring coordinated bloc approaches like ASEAN


Social media platforms initially refused to attend Philippine parliamentary hearings, claiming no obligation due to lack of physical office presence


Commercial interests and revenue generation priorities conflict with civil protection needs, requiring stronger international coordination


Summary

Government representatives from Malaysia, Philippines, and Pakistan all acknowledged that individual nations have limited leverage against major tech platforms, emphasizing the need for coordinated international or regional approaches to regulation.


Topics

Legal and regulatory | Economic


Marginalized communities face disproportionate online harm with inadequate support systems

Speakers

– Neema Iyer
– Nighat Dad
– Sandra Maximiano

Arguments

One in three women across Africa experience online violence, leading many to delete their online identities due to lack of awareness about reporting mechanisms


Digital Rights Foundation helpline has handled over 20,000 complaints since 2016, with hundreds of young women reporting monthly about blackmail and harassment


Vulnerable groups including children and people with disabilities suffer more from these biases, requiring regulators to account for this in policy design


Summary

Civil society representatives agreed that vulnerable populations experience higher rates of online harm and face additional barriers in accessing help, requiring specialized approaches that account for their unique vulnerabilities.


Topics

Human rights | Sociocultural


Laws designed to protect can be weaponized against vulnerable groups

Speakers

– Neema Iyer
– Nighat Dad

Arguments

Laws designed to protect are often weaponized against women and marginalized groups, being used to punish rather than protect them


Meta’s recent scaling back of proactive enforcement systems shifts burden of content moderation onto users, particularly problematic in regions where reporting systems are in English only


Summary

Both civil society advocates highlighted the paradox where protective legislation and platform policies can be misused to harm the very groups they were intended to protect, particularly in Global South contexts.


Topics

Legal and regulatory | Human rights


Similar viewpoints

Both speakers emphasized the need for collaborative, multi-stakeholder approaches to platform governance and the importance of using behavioral insights to promote safer online behavior through design interventions.

Speakers

– Neema Iyer
– Sandra Maximiano

Arguments

Algorithmic decisions should be co-created by governments, civil society, and platforms together rather than left to platform owners’ ideologies


Behavioral economics can enhance online protection through better user interface design, nudging safe behavior, and using social norms messaging


Topics

Legal and regulatory | Human rights | Sociocultural


Both emphasized that technical solutions alone are insufficient and that addressing root causes through community-based interventions and socioeconomic factors is essential for effective protection.

Speakers

– Raoul Danniel Abellar Manuel
– Audience

Arguments

Economic factors driving child exploitation must be addressed alongside technical measures to effectively combat child sexual abuse material


Human-centered design must recognize that both offenders and victims are human, requiring community-level interventions alongside technical solutions


Topics

Development | Human rights | Sociocultural


Both highlighted emerging hybrid threats that exploit vulnerable populations through sophisticated targeting and manipulation techniques, requiring coordinated responses across different regulatory domains.

Speakers

– Arda Gerkens
– Nighat Dad

Arguments

Terrorist groups are increasingly targeting vulnerable children through platforms discussing mental health and eating disorders for grooming and extortion


Rise of AI-generated deepfake content is causing reputational damage, emotional trauma, and social isolation, with some cases leading to suicide


Topics

Cybersecurity | Human rights


Unexpected consensus

Rejection of extreme age verification measures

Speakers

– Neema Iyer
– Sandra Maximiano

Arguments

Content takedown is reactive and happens after damage is done, with need for design friction to prevent harmful content sharing


What is illegal offline should remain illegal online, but extreme restriction measures may not be the best approach


Explanation

Despite coming from different professional backgrounds (civil society advocacy vs. regulatory economics), both speakers rejected blanket age verification or social media bans as solutions, instead favoring more nuanced approaches that preserve privacy while promoting safety.


Topics

Legal and regulatory | Human rights


Need for behavioral and design-based interventions over purely legal approaches

Speakers

– Sandra Maximiano
– Neema Iyer
– Audience

Arguments

Regulators should use the same behavioral insights that firms use for marketing, but redirect them toward safety and protection goals


Content takedown is reactive and happens after damage is done, with need for design friction to prevent harmful content sharing


Existing social structures like schools, clubs, and family units should be leveraged to empower victims and prevent online abuse before it occurs


Explanation

Unexpectedly, speakers from regulatory, advocacy, and community perspectives all converged on the idea that behavioral interventions and proactive design changes are more effective than reactive legal measures, representing a shift from traditional regulatory thinking.


Topics

Human rights | Sociocultural | Development


Overall assessment

Summary

The speakers demonstrated strong consensus on several key issues: the inadequacy of current platform accountability mechanisms, the need for international coordination to effectively regulate global tech platforms, the disproportionate impact of online harm on marginalized communities, and the limitations of purely reactive legal approaches. There was also notable agreement on the need for more proactive, design-based interventions and multi-stakeholder collaboration.


Consensus level

High level of consensus with significant implications for policy development. The agreement across different stakeholder groups (government officials, regulators, civil society advocates) suggests these issues transcend traditional boundaries and require coordinated responses. The consensus on moving beyond reactive measures toward proactive design interventions represents a potential paradigm shift in online safety approaches. However, the challenge remains in translating this consensus into actionable policies given the power imbalances between individual nations and global tech platforms.


Differences

Different viewpoints

Age verification and privacy-preserving technologies for child protection

Speakers

– Neema Iyer
– Andrew Campling

Arguments

Age verification mandates are a slippery slope that hands user data to platforms and erodes privacy; people will circumvent such measures anyway, so better interventions exist than taking away the last shred of privacy (citing Australia’s recently passed social media ban for children, which lacks a clear implementation plan)


Privacy-preserving age estimation and verification technology should be mandated to prevent children from accessing adult platforms and adults from targeting children


Summary

Andrew Campling advocates for mandatory privacy-preserving age verification technology to protect children online, while Neema Iyer strongly opposes such measures, arguing they compromise privacy and are ineffective since people will circumvent them anyway.


Topics

Cybersecurity | Human rights


Unexpected differences

Approach to addressing root causes of child exploitation

Speakers

– Raoul Danniel Abellar Manuel
– Andrew Campling

Arguments

Economic factors driving child exploitation must be addressed alongside technical measures to effectively combat child sexual abuse material


Privacy-preserving age estimation and verification technology should be mandated to prevent children from accessing adult platforms and adults from targeting children


Explanation

While both speakers are deeply concerned about child protection online, they approach the problem from fundamentally different angles. The Philippine MP emphasizes addressing underlying economic causes like poverty that drive families to exploit children, while the Internet Watch Foundation trustee focuses on technical solutions like age verification. This disagreement is unexpected because both are child protection advocates but see completely different primary solutions.


Topics

Development | Cybersecurity | Human rights


Overall assessment

Summary

The main areas of disagreement center around the balance between privacy and safety (particularly regarding age verification), the effectiveness of technical versus socioeconomic solutions for child protection, and the specific mechanisms for international cooperation in platform regulation.


Disagreement level

The level of disagreement is moderate but significant in its implications. While speakers largely agree on the problems (online harm to vulnerable groups, platform accountability issues, need for international cooperation), they diverge substantially on solutions. The privacy versus safety debate represents a fundamental tension in digital rights policy, while the technical versus socioeconomic approach to child protection reflects different philosophical frameworks for addressing online harm. These disagreements suggest that achieving consensus on specific policy measures will require careful negotiation and potentially hybrid approaches that incorporate multiple perspectives.


Partial agreements



Takeaways

Key takeaways

Online harm disproportionately affects marginalized communities in the Global South due to intersecting inequalities, language barriers, and lack of platform prioritization


Legislative frameworks are often too narrow, focusing on takedowns rather than prevention, and can be weaponized against the very groups they aim to protect


Platform accountability requires transparency in content moderation processes, algorithmic decision-making, and actions taken against violators beyond simple content removal


Individual countries lack sufficient negotiation power with tech giants, necessitating coordinated regional or international approaches


Behavioral economics insights can be leveraged to design better safety interventions, using the same cognitive bias understanding that platforms use for engagement


Hybrid threats combining terrorism, child exploitation, and targeting of vulnerable groups through mental health platforms represent emerging challenges requiring coordinated responses


Prevention through design friction and community-based offline education is more effective than reactive content takedown measures


Multi-stakeholder collaboration between governments, platforms, and civil society is essential for developing effective and balanced online safety policies


Resolutions and action items

Invitation extended for regulators to join the Global Online Safety Regulators Network (GOSRN) to facilitate international cooperation


Proposal for ASEAN countries to engage with platforms as a bloc rather than individually to increase negotiation power


Recommendation for platforms to provide mandatory safety briefings to users similar to other service providers


Call for platforms to proactively report sources of harmful content to governments rather than just responding to individual takedown requests


Suggestion for algorithmic decision-making to be co-created by governments, civil society, and platforms together


Proposal to leverage existing community structures (schools, clubs, families) to provide offline education and empowerment before online exposure


Unresolved issues

How to effectively enforce regulations when major platforms refuse to comply with licensing requirements or attend government hearings


Balancing privacy rights with age verification and content scanning technologies for child protection


Addressing the fundamental economic incentives that drive platforms to prioritize engagement over safety


Developing culturally appropriate standards for different regions while maintaining international cooperation


Creating effective reporting mechanisms in local languages and contexts for Global South users


Preventing the weaponization of online safety laws against marginalized groups and activists


Addressing the gap between Western-designed platforms and Eastern value systems and legal frameworks


Managing the rise of platforms with no accountability mechanisms or human rights teams


Suggested compromises

Implementing minimum universal safety standards while allowing regional variations for cultural and contextual differences


Using behavioral nudges and design friction as alternatives to extreme restriction measures like complete social media bans


Combining technical solutions with community-based offline interventions rather than relying solely on either approach


Establishing transparency requirements for platform actions against violators while respecting commercial confidentiality


Creating tiered accountability systems where platforms with larger user bases face stricter requirements


Developing privacy-preserving safety technologies that protect users without compromising fundamental rights


Balancing proactive content moderation with protection against algorithmic bias and shadow banning of legitimate content


Thought provoking comments

The laws that do exist, especially in our context, have actually been weaponized against women and marginalized groups. So many of these, you know, cybercrime laws or data protection laws, have been used against women, have been used against dissenting voices, against activists, to actually punish them rather than protect them.

Speaker

Neema Iyer


Reason

This comment is deeply insightful because it reveals the paradox of protective legislation becoming a tool of oppression. It challenges the assumption that creating laws automatically leads to protection and highlights how power structures can co-opt well-intentioned regulations.


Impact

This comment fundamentally shifted the discussion from focusing solely on creating new regulations to examining how existing laws are implemented and enforced. It introduced the critical concept that legislative frameworks can have unintended consequences, setting the stage for other panelists to discuss the importance of balanced, enforceable policies.


We see more and more hybridization of these types of content mixed together with other content… we’re finding within the online terrorist environments lots of child sex abuse material. And we find that certainly vulnerable kids at the moment are at large online… these terrorist groups or these groups, extremist groups, are actually targeting vulnerable kids.

Speaker

Arda Gerkens


Reason

This observation is thought-provoking because it reveals the evolution of online threats from discrete categories to complex, interconnected forms of harm. It demonstrates how traditional regulatory silos may be inadequate for addressing modern digital threats.


Impact

This comment introduced a new dimension to the discussion about the complexity of online harms. It moved the conversation beyond simple content takedowns to understanding how different forms of abuse intersect and require coordinated responses across different regulatory domains.


If the system only works within these platforms when the media pays attention, what happens to the millions of women in the Global South who never make headlines?

Speaker

Nighat Dad


Reason

This comment powerfully exposes the inequality in platform responses based on visibility and geography. It challenges the notion of equal protection online and highlights how media attention becomes a prerequisite for justice.


Impact

This comment crystallized the discussion around global inequities in platform accountability. It prompted other speakers to discuss the need for coordinated international responses and highlighted how current systems fail those without voice or visibility.


Behavioral economics is actually a field that blends insights from psychology and economics to fully understand how women make decisions… we have to understand these cognitive biases and also be aware that we can use them to make individuals, make them take more informed decisions.

Speaker

Sandra Maximiano


Reason

This comment introduced an entirely new analytical framework to the discussion, shifting from purely regulatory and technical approaches to understanding the psychological mechanisms that make people vulnerable online. It’s innovative in suggesting that the same tools used to exploit can be used to protect.


Impact

This intervention fundamentally broadened the scope of the discussion beyond traditional regulatory approaches. It introduced the concept of ‘nudging’ for protection and influenced subsequent speakers to consider design-based solutions rather than just content moderation.


I think we really need to think broader about how we are legislating about online violence… legislative frameworks are often too narrow. They focus on takedowns or criminalization, or they borrow from Western contexts, but they don’t really meet the lived realities of women.

Speaker

Neema Iyer


Reason

This comment challenges the dominant paradigm of online safety regulation by questioning both the scope and cultural appropriateness of current approaches. It calls for more nuanced, context-specific solutions.


Impact

This comment established a critical theme that ran throughout the discussion – the inadequacy of one-size-fits-all solutions and the need for culturally sensitive, comprehensive approaches to online harm prevention.


The offenders are human. The victims are humans… if we concentrate on the technology, we are losing a very big part because this young person can be trained to be a bully… if they were trained offline long before they got onto the internet, then maybe it can become a movement that saves a generation.

Speaker

John Kiariye


Reason

This comment reframes the entire discussion by emphasizing the human element behind technology-mediated harm. It challenges the tech-centric approach and advocates for community-based, preventive solutions rooted in existing social structures.


Impact

This intervention brought the discussion full circle, grounding the technical and regulatory focus back in human relationships and community structures. It emphasized prevention over reaction and highlighted the importance of offline interventions for online safety.


Overall assessment

These key comments fundamentally shaped the discussion by challenging conventional approaches to online safety and introducing new analytical frameworks. The conversation evolved from a focus on reactive measures (content takedowns, legislation) to proactive, holistic approaches that consider behavioral psychology, cultural context, and community-based solutions. The comments revealed the limitations of current regulatory frameworks and highlighted the need for coordinated, multi-stakeholder responses that address both the technical and human dimensions of online harm. Most significantly, they exposed the global inequities in how online safety is implemented and experienced, pushing the discussion toward more inclusive and comprehensive solutions.


Follow-up questions

How can we develop interventions and safety mechanisms for platforms that don’t prioritize smaller countries with multiple local languages?

Speaker

Neema Iyer


Explanation

This addresses the challenge of platform governance in regions with linguistic diversity and smaller market shares, where safety mechanisms may not be adequately developed or localized


How can we develop broader legislative frameworks that address coordinated disinformation campaigns and ideological radicalization of minors online, beyond just intimate image sharing?

Speaker

Neema Iyer


Explanation

Current legislative frameworks are often too narrow and don’t address the full spectrum of online harms faced by marginalized communities


What are the specific criteria and thresholds for determining what constitutes ‘glorifying’ terrorist content or ‘call to action’ in content moderation?

Speaker

Arda Gerkens


Explanation

This is needed to clarify vague legislation and create consistent standards across European countries for terrorist content removal


How can we develop coordinated approaches to tackle hybrid threats that combine terrorism, child sexual abuse material, and targeting of vulnerable children across different regulatory domains?

Speaker

Arda Gerkens


Explanation

There’s an emerging trend of hybridization where terrorist groups are using CSAM and targeting vulnerable children, requiring cross-domain collaboration


What actions are platforms taking against scammers who sponsor harmful posts, beyond just content takedown?

Speaker

Teo Nie Ching


Explanation

There’s a lack of transparency about platform accountability measures against bad actors, not just their content


How much advertising revenue do major platforms collect from individual countries or regions like ASEAN?

Speaker

Teo Nie Ching


Explanation

This information is needed to understand the economic leverage that could be used in platform negotiations


How can we establish international standards for platform responsibilities instead of individual countries negotiating separately?

Speaker

Teo Nie Ching


Explanation

Individual countries lack sufficient negotiation power with tech giants, requiring coordinated international approaches


What happens to millions of women in the Global South who face online harm but never make headlines or receive media attention?

Speaker

Nighat Dad


Explanation

Platform response systems often only work when media pays attention, leaving many victims without recourse


How can we design algorithmic decisions through co-creation involving governments, civil society, and platforms rather than leaving them to platform owners’ ideologies?

Speaker

Neema Iyer


Explanation

Current algorithmic decisions reflect the moral and political ideologies of platform owners, requiring more democratic input


How can we introduce design friction to prevent harmful content from being shared in the first place, rather than relying on reactive takedown measures?

Speaker

Neema Iyer


Explanation

Proactive prevention through design changes could be more effective than reactive content moderation


How can we better utilize existing community structures (schools, clubs, families) to empower potential victims before they encounter online threats?

Speaker

John Kiariye


Explanation

Focusing on offline preparation and community-based solutions could complement technical approaches to online safety


What are the most effective behavioral economics interventions and nudges that can be implemented by platforms to promote safer online behavior?

Speaker

Sandra Maximiano


Explanation

Understanding and applying behavioral insights could help design more effective safety measures that work with human psychology rather than against it


How can regional blocs like ASEAN develop coordinated standards for platform regulation that reflect their cultural and religious contexts?

Speaker

Teo Nie Ching


Explanation

Regional coordination could provide more negotiating power and culturally appropriate standards than individual country approaches


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Open Forum #13 Bridging the Digital Divide Focus on the Global South


Session at a glance

Summary

This open forum, hosted by the World Internet Conference (WIC), focused on bridging the digital divide with particular emphasis on the Global South and the role of emerging technologies like artificial intelligence. The discussion brought together high-level representatives from international organizations, regulatory bodies, and youth leaders to address solutions for digital inclusion.


UN Under-Secretary-General Li Junhua highlighted that 2.6 billion people remain offline, primarily in least developed countries, emphasizing that the digital divide has evolved beyond infrastructure to include affordable devices, digital skills, and safe navigation capabilities. He stressed the importance of local community empowerment and bottom-up approaches, noting progress since 2015 when 4 billion people were offline. Former WIPO Director-General Francis Gurry warned of a crisis point due to declining development funding (38% reduction expected) coinciding with rapid AI advancement, which risks exacerbating the digital divide when funding is most needed.


ICANN Board Chair Tripti Sinha emphasized that the divide encompasses participation and inclusiveness beyond mere access, advocating for AI-powered solutions to optimize network infrastructure while maintaining multi-stakeholder governance approaches. She warned against fragmentation risks from state-led governance models that could separate Global South countries from the global internet. Chinese officials outlined China’s commitment to supporting Global South digital development through capacity building, international cooperation, and AI governance initiatives, including training workshops and technical assistance programs.


Dr. Nii Quaynor from Ghana provided an African perspective, noting infrastructure improvements but highlighting persistent challenges including limited technical capacity, fragile infrastructure, and economic sustainability issues. Malaysian representative Chern Choong Thum shared Southeast Asian solutions, including digital literacy centers and AI training programs, emphasizing human-centric approaches to digital governance. The forum concluded with consensus on the need for continued international cooperation, inclusive dialogue, and sustainable solutions to ensure equitable digital development for all communities.


Key points

Major discussion points


– **Scale and urgency of the digital divide**: 2.6 billion people remain offline globally, with the majority in least developed countries, creating gaps in opportunity rather than just access. The divide exists both between and within countries, affecting rural populations, women, indigenous peoples, and persons with disabilities.


– **Crisis in development funding amid AI advancement**: A critical timing challenge where development funding is decreasing by an estimated 38% while AI technology is rapidly advancing, potentially exacerbating the digital divide. The high costs and technical requirements of AI infrastructure risk creating an even wider gap between developed and developing nations.


– **Infrastructure and technical foundations**: Beyond physical connectivity, the discussion emphasized the need for reliable technical infrastructure including domain name systems, IP addresses, multilingual support, and local capacity building. Universal acceptance and internationalized domain names are crucial for cultural and linguistic participation.


– **Multi-stakeholder governance and Global South participation**: The importance of maintaining collaborative, bottom-up approaches to internet governance while ensuring meaningful participation from Global South countries in digital policy-making processes. There’s concern about potential fragmentation if countries pursue separate technical standards.


– **Practical solutions and international cooperation**: Concrete initiatives including China’s AI capacity building programs, Malaysia’s NADI Centers for digital literacy, Africa’s progress in internet infrastructure, and the need for South-South cooperation to share knowledge and resources effectively.


Overall purpose


The discussion aimed to identify actionable solutions for bridging the digital divide affecting the Global South, with particular focus on how emerging technologies like AI can be leveraged to expand access and opportunities rather than widen existing gaps. The forum sought to build international consensus and cooperation frameworks for inclusive digital development.


Overall tone


The discussion maintained a consistently collaborative and solution-oriented tone throughout. Speakers acknowledged serious challenges with urgency while remaining optimistic about potential solutions through international cooperation. The tone was formal yet inclusive, emphasizing shared responsibility and mutual benefit. There was a notable emphasis on practical examples and concrete commitments rather than abstract policy discussions, reflecting a pragmatic approach to addressing complex global challenges.


Speakers

– **Zhang Hui** – Deputy Secretary-General of the World Internet Conference (WIC)


– **Li Junhua** – Under-Secretary-General for UNDESA (United Nations Department of Economic and Social Affairs)


– **Francis Gurry** – Vice-Chair of WIC, former WIPO Director-General, global authority on intellectual property and digital innovation


– **Tripti Sinha** – Board Chair of the Internet Corporation for Assigned Names and Numbers (ICANN), extensive experience in Internet infrastructure and multi-stakeholder governance


– **Ren Xianliang** – Secretary-General of the WIC


– **Qi Xiaoxia** – Director General of International Cooperation Bureau of Cyberspace Administration of China, extensive experience in international cyberspace exchange and cooperation


– **Nii Quaynor** – Chairman of Ghana Dot Com, known as the “Father of Internet in Africa,” Internet Hall of Fame awardee, WIC distinguished contribution awardee


– **Chern Choong Thum** – Special Functional Officer at the Ministry of Communications of Malaysia, 2024 global youth leader, supports Malaysia’s digital strategy and regional innovation programs, medical doctor working in public health


Additional speakers:


None identified outside the provided speaker list.


Full session report

# World Internet Conference Open Forum: Bridging the Digital Divide


## Executive Summary


The World Internet Conference hosted an open forum on bridging the digital divide, featuring UN Under-Secretary-General Li Junhua, former WIPO Director-General Francis Gurry, ICANN Board Chair Tripti Sinha, and senior officials from China, Ghana, and Malaysia. The discussion addressed the challenge of connecting 2.6 billion people who remain offline globally, with particular focus on the Global South, declining development funding, and the dual role of artificial intelligence in either bridging or widening digital gaps.


## Current State of the Digital Divide


### Global Scale and Distribution


UN Under-Secretary-General Li Junhua established that 2.6 billion people remain offline worldwide, with the majority in least developed countries. He noted progress since 2015 when 4 billion people were offline, but emphasized that the remaining gap represents the most challenging populations to reach. The digital divide affects rural populations, women, indigenous peoples, refugees, and persons with disabilities disproportionately.


Dr. Chern Choong Thum from Malaysia’s Ministry of Communications provided a public health perspective, stating that “digital exclusion deepens health inequalities and cuts off access to life-saving services and vital health education.” He noted that while 5.5 billion people are online, a third of the world remains disconnected, predominantly in Global South rural areas.


### African Infrastructure Challenges


Dr. Nii Quaynor, known as the “Father of Internet in Africa,” highlighted persistent challenges including fragile infrastructure, limited technical capacity, and economic sustainability issues. He provided historical context, noting that “every new technology comes with its distinct divides, and some may widen other divides.”


Quaynor shared specific statistics: “Africa is at 4.4 domain names per thousand, where global is 45 per thousand,” illustrating the continent’s digital infrastructure gaps. He raised critical questions about sustainability: “Where is the revenue to maintain, improve and develop infrastructure services constantly in the global south?”


## Funding Crisis and AI Acceleration


### Development Funding Challenges


Former WIPO Director-General Francis Gurry identified what he termed “a real crisis point,” outlining two converging challenges. First, a dramatic crisis in development funding with an estimated 38% reduction expected in the coming year. Second, the unprecedented speed of artificial intelligence deployment.


Gurry emphasized that “never has funding and development assistance been more needed than at the present time when artificial intelligence is coming online at such a speed that it is baffling to all of us.”


### AI’s Dual Impact


Tripti Sinha acknowledged AI’s potential to optimize network infrastructure and enable efficient resource allocation for unconnected markets. However, she warned that “knowledge begets knowledge, wealth begets wealth, and those who possess these will only have the opportunity to obtain more. Similarly, innovation begets innovation.”


Nii Quaynor warned that AI technology threatens the digital divide most significantly due to high infrastructure costs, substantial power requirements, and technical skills needed for participation.


## Infrastructure and Technical Requirements


### Beyond Physical Connectivity


Tripti Sinha emphasized that bridging the digital divide requires comprehensive technical foundations, including reliable domain name systems, IP address allocation, root servers, and multilingual support systems. She highlighted the importance of universal acceptance and internationalized domain names, noting that millions cannot engage with the Internet in their own language.


### National Success Models


Chern Choong Thum shared Malaysia’s achievements through the Jendela Plan, which equipped 9 million premises with fiber optic access and significantly boosted mobile speeds. Malaysia also established NADI Centers providing internet access and ICT training, including AI skills programs.


Ren Xianliang, Secretary-General of the WIC, emphasized sustainable infrastructure operation alongside digital education, which he termed “the biggest equalizer.”


## Governance Approaches and International Cooperation


### Multi-Stakeholder vs. State-Led Models


Tripti Sinha strongly advocated for ICANN’s multi-stakeholder model, bringing together governments, private sector, civil society, and the technical community. She warned about fragmentation risks from state-led approaches that could threaten the single, interoperable Internet.


Qi Xiaoxia, Director General of China’s Cyberspace Administration International Cooperation Bureau, presented a different perspective emphasizing respect for sovereignty in cyberspace. She advocated for countries’ rights to independently choose their Internet development paths while opposing “cyber hegemony.”


### Implementation Challenges


Nii Quaynor provided a balanced assessment of multi-stakeholder governance, acknowledging both its potential and limitations. He noted implementation challenges including difficulties finding qualified participants, potential “decision by fatigue,” and the need for skilled moderation to achieve consensus.


## Capacity Building Initiatives


### Bottom-Up Approaches


Li Junhua emphasized that “bottom-up, grassroots processes are foundational to global efforts,” giving communities voice in their digital development. This approach recognizes that sustainable solutions must emerge from local needs and capabilities.


### International Programs


Qi Xiaoxia outlined China’s commitment to supporting Global South digital development through comprehensive capacity building initiatives, including training workshops, knowledge sharing platforms, and technical assistance programs. China announced implementation of UN resolution on AI capacity building with ten major actions and five additional training workshops for Global South countries.


### Regional Leadership


Chern Choong Thum described Malaysia’s approach to regional leadership through its 2025 ASEAN Chairmanship, championing inclusivity and sustainability themes in digital development with human-centric policies ensuring no one is left behind.


## Concrete Commitments


### Organizational Actions


– **World Internet Conference**: Committed to deepening cooperation with the Global South through continued dialogue platforms


– **China**: Announced specific implementation of UN resolution on AI capacity building with ten major actions and five training workshops for Global South countries


– **Malaysia**: Committed to leveraging its 2025 ASEAN Chairmanship to champion inclusivity and sustainability in digital development


– **ICANN**: Committed to continued support for technical resilience, multilingual access, and global connectivity in underserved regions


### Global Review Opportunities


Li Junhua identified the WSIS Plus 20 review as a crucial opportunity to renew global commitment to digital inclusion and meaningful access for all.


## Key Challenges and Disagreements


### Unresolved Issues


The forum identified several critical unresolved challenges:


– Addressing the massive development funding crisis while meeting increased needs for AI-era digital infrastructure


– Reconciling unified global Internet standards with national sovereignty concerns


– Preventing AI advancement from creating new forms of digital exclusion


– Creating financially viable models for ongoing infrastructure maintenance in resource-constrained environments


### Governance Philosophy Differences


The most significant disagreement centered on governance approaches, with tension between maintaining technical coordination and respecting political sovereignty. This reflects broader challenges in global Internet governance between unified standards and national control.


## Conclusion


The forum revealed both the complexity and urgency of bridging the digital divide amid rapid technological change and constrained resources. The convergence of declining development funding with accelerating AI deployment creates challenges requiring innovative solutions and enhanced international cooperation.


Success will depend on reconciling technical coordination needs with political sovereignty concerns while ensuring emerging technologies bridge rather than widen existing divides. The commitments made by participating organizations provide concrete starting points, but the scale of the challenge requires sustained effort and continued dialogue.


The upcoming WSIS Plus 20 review offers an opportunity to translate forum insights into coordinated global action addressing the digital divide before it becomes insurmountable.


Session transcript

Zhang Hui: Your Excellency Under-Secretary-General Li Junhua, Your Excellency Vice-Chair Francis Gurry, Your Excellency Board Chair Tripti Sinha, Your Excellency Secretary-General Ren Xianliang, Distinguished guests, ladies and gentlemen, good morning. It is my great honor to welcome you to attend this open forum. We greatly appreciate that UNDESA and the IGF provide us with a global platform for open dialogue on key digital issues. My name is Zhang Hui, Deputy Secretary-General of the World Internet Conference, also known as the WIC. The WIC is an international organization committed to establishing a global Internet platform for extensive consultation, joint contribution, and shared benefits, encouraging the international community to follow the trend of digitalization, networking, and intelligence, to address security challenges for common development in the information age, and building a strong community with a shared future in cyberspace. The theme of today’s session, bridging the digital divide with a focus on the Global South, highlights a key priority for inclusive global development, while challenges remain. The focus today is on solutions: how emerging technologies, particularly AI, can help expand access, strengthen digital capacity, and unlock new opportunities for the Global South. We are honored to be joined by an exceptional group of speakers who are helping shape the future of digital governance. Among them are high-level representatives from international organizations, national regulatory bodies, and young representatives from emerging digital communities. First, I feel deeply honored to invite His Excellency, Mr. Li Junhua, the Under-Secretary-General for UNDESA. UNDESA leads global initiatives on sustainable development and has played a key role in shaping international cooperation on emerging technologies. Welcome.


Li Junhua: Thank you. Thank you very much. Good morning, everyone, Excellencies, distinguished delegates. It is my great pleasure to join you today for this important gathering on bridging the digital divide with its vital focus on the Global South. I extend my sincere thanks to the World Internet Conference for convening this open forum. The theme of the digital divide could not be more urgent as we increasingly rely on digital technologies to access education, healthcare, jobs, services, and civic participation. The divide between those who are connected and those who are not has become one of the defining challenges of our time. As technology evolves, so does the nature of the digital divide. It is no longer just a question of cables, satellites, or cell towers. It is about affordable devices, the skills to use them, and the confidence and support needed to navigate the online world safely. It is a divide of opportunities. Today, 2.6 billion people remain offline. The majority live in the world’s least developed or lower-middle-income countries. This is where the digital gap remains widest and where our efforts must now be consolidated. We must also recognize the inequality within countries, even in those considered well-connected. Remote and rural populations, refugees, indigenous peoples, women and girls, and persons with disabilities continue to face barriers to full digital inclusion. These are not just gaps in access, but gaps in opportunity, which calls for a renewed focus on digital capacity development and building partnerships that are inclusive, innovative, and sustained. Forums like the World Internet Conference and the Internet Governance Forum are vital spaces for collaboration. They provide essential spaces for global dialogue, coordination, and collaboration on digital policy. Their true impact is realized when they are informed by what happens on the ground, because the roots of the digital divide are deeply local. 
The solution lies in empowering local communities. The IGF has evolved into a global ecosystem with over 176 national, regional, sub-regional, and youth IGFs now active worldwide. These local and regional processes are not just complementary to our global efforts. They are foundational. They give the communities a voice, surface the local innovations, and help shape the policies that are relevant, inclusive, and grounded in lived realities. To close the digital divide, we must strive for inclusive cooperation between global efforts and grassroots processes. The priorities that emerged from the bottom-up should guide the investment in infrastructure, human capacity, and meaningful partnerships, ensuring that no community is left behind in the digital age. Dear friends, dear colleagues, we now have a golden opportunity. The upcoming 20-year review of the World Summit on Information Society, or WSIS Plus 20, allows us to renew our commitment to digital inclusion and meaningful access for all. We have made progress. In 2015, during the WSIS Plus 10 review, an estimated 4 billion people were offline. Today, allow me to reiterate, that number has dropped to 2.6 billion. This is a major improvement, but of course, still far too many remain unconnected. That’s why there’s every reason for all of us to intensify our efforts. Let’s leverage the global platform to amplify the solutions, to collaborate, share, and work together to build a truly inclusive and equitable digital future for all. Thank you.


Zhang Hui: Thank you, Mr. Li. Thank you, Mr. Li. Next, it is our great pleasure to welcome Dr. Francis Gurry, Vice-Chair of WRC, former WIPO Director-General. He is a global authority on intellectual property and digital innovation.


Francis Gurry: Thank you very much indeed. Under-Secretary Li Junhua, Secretary-General Ren Xianliang, distinguished panelists and guests, it’s so nice to be part of this forum and to see so many of you here participating on this exceptionally important topic in the context of this extremely important ongoing meeting of the Internet Governance Forum. Let me start with the very obvious point that digital technology has penetrated all aspects of our life. We’re all very much aware of this, but I don’t think we can repeat it sufficiently enough. We know that it is the basis of economic production now, if not the basis, at least a major factor of economic production. It is responsible for cost efficiencies, quality outcomes, innovation, and competitive advantage in the field of the economy. And outside the economy, we’re very much aware that digital technology enables or improves social communication, the delivery of social services such as health and medicine, cultural exchanges, and educational opportunities. So any impairment in the capacity… to use any of these advantages that are conferred by digital technology is obviously a major disadvantage. Digital technology is so important as the foundation of economic, cultural, social life now that a lack of access or a disadvantaged access of course creates a major, major problem. And that disadvantage, that divide, we know exists within countries. There is an urban-rural divide, there is a gender divide, there is an age divide, and there is an income divide. And we know it exists between countries, which is the one that we are concentrating on and addressing today. Now much good work has been done and Under-Secretary-General Li has referred to some of this. 
There’s the great work that’s been done by the International Telecommunication Union, for example, the great work of the World Internet Conference in an increasing number of fields, and the many, many other organisations that are involved in trying to address this major question of the digital divide. Despite the progress, and I think Under-Secretary-General Li has referred to the fact that we now have two-thirds of the world connected, but of course one-third still not connected. Despite the progress, I think we are at a real crisis point in relation to the digital divide, and that crisis I think comes from two challenges. The first challenge is the crisis in development funding that we are witnessing right at the moment. It’s estimated that next year there will be about 38% less development funding available around the world as a consequence of the change of attitude of the United States of America in relation to foreign aid, but also the diversion of funding by many European countries away from development and towards military spending and so on around the world. So there is a massive crisis, we know, in relation to development funding. And on some estimates, it’s scarcely sufficient to meet debt obligations. So this is the first problem we have. And the second problem is that never has funding and development assistance been more needed than at the present time, when artificial intelligence is coming online at such a speed that it is baffling to all of us. So artificial intelligence is now another general purpose technology that will exacerbate, or risks exacerbating, the digital divide.
We know that there are many positive aspects of artificial intelligence, and some of those include, for example, open access, but the speed at which it is unfolding and the amounts of money that are being invested in the development of artificial intelligence by some of the leading economies are such that we are at great risk of an exacerbation of the digital divide, especially given that development funding is suffering a crisis at the same time. So this, I think, is the essence of the problem that we confront right at the moment in relation to the digital divide. And I think it requires a major international strategic plan, with all the major actors involved, in order to ensure that we do not end up in a worse position with the advent of these new, or perhaps not so new now, artificial intelligence technologies. If you look, just one final example, at data centres around the world, essential to artificial intelligence infrastructure, you find that they are all, of course, or mainly in the north, with the exception of China. So we have a real potential crisis here, and I think a major international effort and strategic plan is required. Thank you very much.


Zhang Hui: Thank you, Dr. Gurry. Next, it is our great honour to welcome Ms. Tripti Sinha, co-chair of the Internet Corporation for Assigned Names and Numbers, or ICANN. She has extensive experience in Internet infrastructure and multi-stakeholder governance.


Tripti Sinha: Thank you very much, and thanks to the World Internet Conference for convening this very important discussion, and to my co-panellists for sharing this opportunity to speak to you today. So the digital divide, as you know, is not a new issue, and it continues to evolve in very complex ways, as my colleagues just stated. Today, the discussion is broader than access. It is also about participation and inclusiveness. The fact that 5 billion people are now online is significant. This growth was not accidental. It reflects years of coordination, technical cooperation, and a shared commitment to an Internet that remains global, resilient, and accessible to all. But as Dr. Gurry just said, we are in a financial crisis in the world, priorities are shifting, and we are in a very, very difficult time. And we must treat this prevailing global digital divide with a sense of urgency. As the old adage goes, knowledge begets knowledge, wealth begets wealth, and those who possess these will only have the opportunity to obtain more. Similarly, innovation begets innovation. And those who are not part of this opportunity ecosystem will suffer and fall behind. And during this time of yet another innovation and change agent at play, which is artificial intelligence, this divide will continue to grow. So a global community of haves and have-nots will only lead to significant future problems. We know the world will change in unknown ways with the application of artificial intelligence. However, we should leverage the advantages that come with AI as we begin to create a strategic blueprint to reduce this digital divide for the world community. So let’s talk a little bit about AI, as it offers significant benefits for building out networks in unconnected markets by enabling efficient resource allocation, proactive maintenance, and of course enhanced security, which is so needed in today’s world.
AI-powered solutions for addressing the digital divide can also optimize network infrastructure, and we can apply the technology to intelligently assess opportunity gaps. So in terms of where do we start, we need to be infrastructure-ready. There are many reasons why this divide exists. So let’s talk about the infrastructure. Clearly a blueprint will start with addressing the lack of physical cables and so on, addressing the installation, and putting an architecture in place to bring the media together. And while this connectivity is essential, it’s only one part of the equation. You will then need to light this infrastructure to begin to get the bits and bytes flowing. So the Internet depends on this very strong technical foundation that allows it to function reliably, securely, and at scale. And this foundation, as you know so well, includes the domain name system, IP addresses, and the root server system. These elements may not be visible to the user community. However, we need to come together as a global community to ensure that we can put these different parts together to bring connectivity to those who are unconnected. At ICANN, we help coordinate this layer of the Internet. We work to maintain the stability and security of the DNS. We manage and facilitate the Internet’s unique identifier systems. We support the deployment of root server operations in underserved regions. And we also partner with technical operators and institutions across the global South to help strengthen local capacity and resilience. ICANN, of course, doesn’t build physical infrastructure. We coordinate, with colleagues around the world, the key systems that make connectivity reliable and sustainable. However, there’s yet another barrier, and that’s the barrier of language. Today, millions of users still cannot fully engage with the Internet in their own language or script, and this speaks to locality.
And ICANN’s work on universal acceptance and internationalized domain names directly addresses this. These initiatives ensure that domain names and email addresses in local scripts work across devices, applications, and platforms. But this can only be possible if the technical community comes together, those who operate up and down the technology stack, to make this happen. These capabilities are critical for cultural and linguistic participation. We encourage governments and institutions to integrate universal acceptance into national ICT strategies and public service delivery. The technical steps are clear. The impact, particularly for multilingual and underserved communities, is significant. Solving this divide also depends significantly on coordination. ICANN was created as a multi-stakeholder organization, bringing together governments, the private sector, civil society, the technical community, and others to help manage these critical Internet resources. That model continues to be very relevant. Open, collaborative, and technically grounded, it has helped keep the Internet stable, interoperable, and global. We must continue to embrace it. So this coordination should not be assumed. Fragmentation at the technical level is a real and growing risk. An increasing number of governments are exploring state-led approaches to infrastructure and governance. Some are talking about the creation of a new multilateral model of Internet governance, which could result in serious issues for the functioning, or even the existence, of the single interoperable Internet. Of course, national interests are legitimate. No one is disagreeing with that. However, divergence from global technical norms threatens the Internet’s core functionality, especially for countries of the global South that could find themselves separated from the global Internet and part of some other networks that are not compatible.
The Internet Governance Forum and all the other multi-stakeholder spaces help maintain alignment where it matters most, at the technical layer. They provide neutral platforms to resolve tensions and share solutions without imposing uniformity. The future of universal, affordable access depends on infrastructure that works, governance that adapts, accessibility above and beyond prevailing norms by applying universal acceptance, and participation that reflects the diversity of those who use the Internet. At ICANN, we remain committed to supporting this future by working with partners across the world, indeed with the global South, to expand technical resilience, enable multilingual access, and help keep the Internet globally connected. And hopefully we can close the digital divide. Thank you.


Zhang Hui: Thank you, Ms. Tripti Sinha. Next, it is our great pleasure to welcome Mr. Ren Xianliang, Secretary-General of the WRC. He has pushed forward the WRC’s transformation into a global platform for inclusive digital dialogue. Welcome.


Ren Xianliang: and other key facilities to extend to developing countries. We focus on sustainable operation of infrastructure so that digital benefits can truly benefit the local population. We also strengthen investment in capacity building and improve the level of digital education and skills. In the digital age, education and training are the biggest equalizer. We should set up digital training centers and develop localized courses for young people, women, and small and medium-sized entrepreneurs in global South countries, opening the door to a digital future. Third, we should improve the global digital governance mechanism and ensure the participation rights of developing countries. At present, developing countries lack adequate participation in key governance mechanisms such as digital rights and technical supervision. We call on the international community to join the global governance program under the framework of multilateral participation and realize the beautiful vision of building, sharing and governing together. Fourth, we should strengthen international cooperation and expand multilateral participation channels. As an international organization, the World Internet Conference sincerely invites more enterprises, institutions and individuals from all over the world to join its membership and start cooperation. We should work together to promote the sharing of technology, the complementarity of capabilities, and the joint construction of a mutually beneficial digital future through multilateral participation.
Ladies and gentlemen, digitalization is not only a technical problem, but also a problem of development and fairness. The World Internet Conference welcomes all parties to continue to promote digital technology and to contribute a stronger development momentum for the global South. Let’s work together to build a community with a shared future in cyberspace and make the Internet a blessing for people all over the world. Thank you.


Zhang Hui: Thank you, Mr. Ren. Next, it’s our great honor to welcome Ms. Qi Xiaoxia, the Director General of the International Cooperation Bureau of the Cyberspace Administration of China. She has extensive experience in international cyberspace exchange and cooperation. Welcome.


Qi Xiaoxia: Distinguished guests, ladies and gentlemen, friends, good morning. I’m very pleased to be part of this distinguished panel discussing how to promote digital development and bridge the digital divide for the global South. At present, as a collective of emerging market countries and developing countries, the global South has stepped onto the historical stage with great strides, injecting new impetus into global development and new progress into global governance. It has attracted the attention and anticipation of the international community. However, at the same time, the digital development deficit in the global South has become a weak link and a challenge that cannot be ignored as mankind embraces the digital age. How to bridge the digital divide and ensure that the global South does not fall behind in the digital age is a common task facing the international community. As a natural member of the global South, China has had the global South at heart and been deeply rooted in the global South. China regards assisting the development of the global South and bridging the digital divide as an unshakable international responsibility. In 2015, Chinese President Xi Jinping unveiled the vision of building a community with a shared future in cyberspace, contributing China’s wisdom and approach to the development and global governance of the Internet. The vision advocated prioritizing development and deepening international exchanges and cooperation in the digital field. It has responded effectively to the development demands and common concerns of the global South in the digital age and provided important guidance for helping the global South bridge and narrow the digital divide and enabling more countries and people to share the fruits of Internet development.
In the face of the wave of AI development, President Xi Jinping emphasized the need to carry out extensive international cooperation on AI, helping the global South countries strengthen their technological capacity building and making China’s contributions to bridging the global intelligence gap. To help the global South bridge the digital divide, China is not only an advocate, but also a promoter and a pioneer. Under the theme of building a community with a shared future in cyberspace, we have continuously hosted World Internet Conference Wuzhen Summit, providing an important platform for exchanges and cooperation for the global South to share the dividends of digital development. For four consecutive years, the Wuzhen Summit has released a collection of practice cases of jointly building a community with a shared future in cyberspace, providing useful reference experiences for the global South in bridging the digital divide. In July last year, the UN General Assembly adopted the China-sponsored resolution Enhancing International Cooperation on Capacity Building of Artificial Intelligence. China has prioritized the follow-up implementation of the resolution and announced an action plan with ten major actions to fulfill the visions of the global South in five aspects, which contribute to strengthening AI capacity building for the global South. Last year, Chinese think tanks jointly launched the research report on global AI governance, identified AI divide and international collaboration as one of the ten key issues in global AI governance, and proposed a clear path of action to help the global South bridge the intelligence divide. Ladies and gentlemen, friends, development is the master key to solving all problems, and it is also the common aspiration and general expectations of the global South. Looking to the future, the global South should become a highland for digital innovation and development rather than a swamp left behind in sharing digital dividends. 
I would like to share three observations on how to accelerate bridging the digital divide and create highlights for the global South. Firstly, the right to development of the global South should be upheld in the spirit of equality and mutual respect. Development is an eternal theme of human society and the right of all countries rather than an exclusive privilege of the few. China advocates respect for sovereignty in cyberspace and maintains that all countries, regardless of size, strength and wealth, are equal members of the international community and have the right to independently choose their own path of Internet development and model of governance. Chinese think tanks have actively followed up on and studied the issue of sovereignty in cyberspace and have successively released Sovereignty in Cyberspace: Theory and Practice, versions 1.0 to 4.0, which provides an in-depth and systematic study and explanation of the specific issues related to the application of sovereignty in the process of digital-driven, Internet-based and smart growth and contributes theoretical support for safeguarding the right to digital development for the global South. Facing the issue of our time of helping the global South bridge the digital divide, China will always be committed to respecting sovereignty in cyberspace and will work with the international community to respect the paths of digital development and the models of governance of all countries and jointly oppose cyber hegemony and the politicization of technological issues, with a view to fostering a favorable environment conducive to digital development for the global South. Secondly, practical cooperation should be strengthened to enhance the digital capacity of the global South. AI and other emerging technologies are on the rise.
They are dramatically enhancing mankind’s ability to understand and transform the world, while at the same time raising the threshold of digital development capacity. To strengthen international cooperation in digital capacity building, we will launch five new training workshops for Latin American and Caribbean countries and for ASEAN countries, carrying out targeted training in digital capacity. Next, we will organize five more training workshops for the global South, with a view to continuously strengthening its digital capacity building. We call on the international community to join hands in building a multi-channel exchange platform for the global South to enhance its digital capacity and help bridge the digital divide by carrying out assistance and training projects and promoting the sharing of knowledge on AI and other emerging technologies. Thirdly, efforts should be made to promote collaborative governance and amplify the voice of the global South in global digital governance. At present, global digital governance is at an important crossroads, and the global South represents an important force for improving global governance. Listening to more voices from the global South can better help bridge the digital divide. China has organized the China-ASEAN Digital Governance Dialogue and the China-Africa Internet Exchange Forum, and has deeply engaged in cooperation on digital governance under platforms such as APEC, the BRICS, and the Shanghai Cooperation Organization, thus contributing more solutions to global digital governance. China is willing to work with the international community to support more active and broader participation by the global South in the digital governance processes of the United Nations, regional multilateral organizations, and specialized agencies.
China aims to promote the enhancement of the representation and the voice of the global South in global digital governance so as to make the will of the global South be reflected in a more balanced and reasonable manner, and further consolidate the international consensus on bridging the digital divide. Thank you for your attention.


Zhang Hui: Thank you. Next, we welcome an Internet pioneer and advocate, Dr. Nii Quaynor, who is the chairman of Ghana Dot Com, the Father of the Internet in Africa, an Internet Hall of Fame inductee, and the awardee of the WRC Distinguished Contribution award.


Nii Quaynor: Thank you for the opportunity for me to share a session with such excellent speakers. From my perspective, it’s about time to mobilize additional attention on bridging the digital divide to address the systemic issues that impede efforts to eliminate divides in the global South. Although the digital divide is a very difficult and pervasive challenge, some countries are making progress in preventing its widening. Technology divides have a long history, as was mentioned earlier. In the 70s, we were missing the human resources to initiate computer science institutions or build enterprise systems. In the 80s, we faced deficiencies in scientific instrumentation, computer interfacing, and VLSI, and in the 90s, the Internet arrived in our countries with even more divides. It appears every new technology comes with its distinct divides, and some may widen other divides. However, addressing known limitations of infrastructure and costs, quality education, and digital governance will determine effective participation by the global South in the digital economies. Emerging Internet communities like those in Africa feel fortunate with the open practices that give us a chance to be globally involved. The open standards, open documentation, and open participation have been particularly helpful in building capacity and networks addressing the digital divide. Though we have made good progress with the Internet, we have several challenges. Observations on the resilience of the Internet in Africa show a ready digital economy with user penetration at about the halfway mark, but with fragile infrastructure and known technical capacity needs. Data centers, connectivity, exchange points, capacity, and users are all improving. Africa is at 4.4 domain names per thousand people, where the global average is 45 per thousand. Ten ccTLD registries hold 92% of names. There are 13 ICANN globally accredited registrars in Africa, against more than 1,000 registrars in the world.
Demand for hosting and data centers continues to increase. With increasing attention on growing the infrastructure, the number of users in the Africa region would soon be second only to Asia. The burgeoning REN (research and education network) ecosystem is becoming active, with regional RENs, national RENs, and campuses. The infrastructure works through standards, best practices, regulation of operators, and technical capacity. The approaches here are inherently multi-stakeholder, with more bottom-up community discussions. The potency of the multi-stakeholder approach is well known, but so are its requirements. It must avoid capture and can sometimes result in decision by fatigue. It also needs a meritorious moderator to call consensus in deliberations. The lack of consensus among resource members has caused a review of the arrangements around regional Internet registries. Fortunately, despite this impasse, like the Internet itself, the African registry’s core functions have shown resilience, and there are lessons learned to improve the governance. Participation in global multi-stakeholder organizations is voluntary and/or by paid staff of organizations. The global South can therefore have challenges finding good participants. How to make the multi-stakeholder approach work better in the global South might be a governance divide issue to be addressed. We continue to deepen our foundation to cope with emerging technologies and learn how to manage with our limited resources, yet be able to be on the supply chain.
We are not alone in this. We have to be prepared to adapt to new challenges and to new aspects of our strategy as well. With weak foundations in power, general infrastructure, skills, and science education, our efforts were not good enough to meet the rapid growth of access speed, quality, and the need for IPv6 and access technology upgrades. The need for access speed and service ran up against dominant providers marked by concentration and consolidation. The economic model of the Internet, never favorable to newcomers, has not eased things for the global South. Where is the revenue to constantly maintain, improve, and develop infrastructure services? The fast-tracking of things for immediate results creates an ecosystem that is unable to address the challenges of coping with the future. The non-existence of a stimulative and adaptive framework for rapidly evolving technology tends to hibernate innovation. What can we do? We can review the frameworks and make policies to enable innovation and creation, not just regulate usage. We have to build up science education, and optimize the use of data centers, exchange points, and other existing infrastructure. Lots of effort and resources have been put into these, and it is prudent to preserve the investment. We should optimize knowledge transfer and capacity building through strong fundamentals and intergenerational mentorship and coaching. The digital divide is tough. Therefore, in addition to all ongoing efforts, we welcome increased attention on it. We are encouraged by technical cooperation opportunities on global governance and the digital divide. South-South cooperation and collaboration leveraging the WIC’s multi-stakeholder network to join in dealing with the digital divides of the global South is a useful addition. The maturing AI technology threatens to widen the digital divide the most, given the associated high cost of infrastructure, high power requirements, and technical skills needed to be on the supply side.
Hence, the attempt in this forum to harness AI to address the digital divide is insightful; it might prevent AI from generating a new divide and bring real meaning to AI for good, AI for digital unity. Thank you very much for your attention.


Zhang Hui: Thank you, Dr. Quaynor. Now let’s give the floor to our youth leader. We welcome Mr. Chern Choong Thum, Special Functional Officer at the Ministry of Communications of Malaysia and also a 2024 global youth leader. He supports Malaysia’s digital strategy and regional innovation programs. Welcome.


Chern Choong Thum: And good morning from Malaysia. It is a great honour to be here, not only as a representative of the youth, but also one from Southeast Asia, to speak about an issue affecting not just economies, but also the very heart of our societies: the digital divide. In Malaysia, we say muafakat membawa berkat, bersekutu bertambah mutu, reflecting our belief that unity brings great things. In our ultra-connected world, this spirit has never been more important. Yes, the rapid advancement of digital technology has brought incredible opportunities, but it has also widened gaps. In the global South, communities are still being left behind due to unequal access, high costs and also limited digital skills. As artificial intelligence, cloud services and the digital economy continue to accelerate, these gaps risk turning into chasms. The ITU Facts and Figures 2024 starkly highlight this uneven progress. While 5.5 billion people are online today, a third of the world, predominantly in the global South’s rural, low-income areas, remains disconnected. Internet use is almost universal in high-income nations, but drops to just 27% in low-income economies. Even as 5G expands, its reach in the poorest countries is a mere 4%. This digital exclusion mirrors existing social and economic inequalities, demanding urgent action. That is why our policies must be open, inclusive, accepting of one another and, most importantly, human-centric. The internet, after all, is not just a tool for commerce or entertainment. It has become a lifeline, a platform for learning, for healthcare, for livelihoods and for communities to connect and support one another. Malaysia takes this very seriously. As Chair of ASEAN in 2025, we champion inclusivity and sustainability as our theme for the year. It is not enough to just grow fast. We must grow together and sustainably. Our Kuala Lumpur Declaration, sealed this May, envisions a shared future where no one is left behind.
Recognising these disparities, Malaysia has actively deployed tangible solutions. Our National Information Dissemination Centres, or NADI Centres, exemplify this commitment. With 1,069 operational nationwide, these hubs provide collective internet access and vital ICT training, bridging the gaps for rural and urban poor communities. In a significant collaboration, the Malaysian Communications and Multimedia Commission, MCMC, and Microsoft have launched the AI Teach Skills for AI-Enabled Economy programme at NADI Centres, directly equipping local communities with crucial AI skills. Our Jandela Plan further strengthens this foundation. As of December 2024, Jandela has equipped over 9 million premises with fibre optic access, boosted median mobile download speeds to 105 Mbps and also extended internet coverage in populated areas to 98.66%. These efforts ensure a more equitable quality of digital experience regardless of location. Beyond infrastructure, we champion digital literacy and skills for the AI era. Initiatives like AI Untuk Rakyat enhance emerging tech skills among Malaysians. And through the Ministry of Human Resources’ National Training Week 2025, nearly 400,000 teachers nationwide are receiving large-scale upskilling, including comprehensive AI training, to prepare our generation for a future-ready education. Take AI governance, for example. Malaysia has developed the National Guidelines on AI Governance and Ethics and collaborated with ASEAN on their AI Guide. In both, we prioritise people and not just systems. Ethical, inclusive AI is not just a luxury for the global South, but a necessity for equitable development. We aim to advocate for digital governance frameworks that empower and uplift every community. Now, as a doctor myself working in public health, I’ve seen firsthand how the digital divide carries very real, very tangible consequences.
At a time when a surgeon in Rome can perform an operation on a patient in Beijing through 5G-powered surgical robots, far too many communities still struggle to access even basic online health consultations or timely public health information. This gap is about lives. Digital exclusion deepens health inequities, cutting off access to life-saving services and vital health education. Digital inclusion is not just an economic imperative, it is also a public health priority. And while we look at macro-level solutions, we must not forget the micro. The Southeast Asian kampung spirit, which means looking out for your neighbour, remains really strong. We should embrace this globally, creating spaces where no one is left behind. Women, youth, persons with disabilities, refugees and vulnerable populations must be given platforms to be heard and to lead. As part of the Global South, Malaysia will collaborate with ASEAN, Africa, Latin America and the Pacific to co-develop tailored solutions. The Global Youth Leaders Programme organised by the World Internet Conference is an inspiring example of bringing diverse young changemakers together. We must ensure such opportunities exist for the marginalised, not as a token, but as a core part of our digital future. Let us collectively build bridges, not walls. Let us harness digital governance not as a tool of control, but as a platform for empowerment. And let us remember that the true measure of our digital progress is not in how advanced our systems are, but in how many lives we uplift along the way. Terima kasih, thank you, and may we move forward together. Thank you.


Zhang Hui: Thank you to all our distinguished speakers. Today, we have prioritised forward-looking solutions to bridge the digital divide in the Global South. This open forum reflects a growing international consensus that inclusive dialogue and global engagement are essential to building a trusted, accessible and people-centred digital future. Looking ahead, the WRC is committed to deepening our cooperation with the Global South, listening, engaging and creating pathways to digital empowerment. On behalf of the WRC, thank you once again for your participation, insights and ongoing dedication. We look forward to continuing this wide-ranging conversation and working together to build a more inclusive, human-centred digital future. See you next year. Thank you.


L

Li Junhua

Speech speed

103 words per minute

Speech length

544 words

Speech time

316 seconds

2.6 billion people remain offline, majority in least developed countries – Digital Divide as Opportunity Gap

Explanation

Li Junhua emphasizes that the digital divide represents gaps in opportunity rather than just access. He highlights that 2.6 billion people remain offline, with the majority living in the world’s least developed or lower-middle-income countries where the digital gap remains widest.


Evidence

Specific statistic that in 2015, 4 billion people were offline, which has improved to 2.6 billion today, showing progress but still indicating far too many remain unconnected


Major discussion point

Current State and Urgency of the Digital Divide


Topics

Development | Infrastructure


Agreed with

– Francis Gurry
– Tripti Sinha
– Nii Quaynor
– Chern Choong Thum

Agreed on

Digital divide represents urgent global challenge requiring immediate attention


Digital divide exists within and between countries affecting rural populations, women, refugees, and persons with disabilities – Inequality Within Connected Nations

Explanation

Li Junhua points out that digital inequality exists not only between countries but also within countries that are considered well-connected. He specifically identifies vulnerable groups that continue to face barriers to full digital inclusion.


Evidence

Mentions remote and rural populations, refugees, indigenous peoples, women and girls, and persons with disabilities as groups facing barriers


Major discussion point

Current State and Urgency of the Digital Divide


Topics

Development | Human rights


Bottom-up grassroots processes are foundational to global efforts, giving communities voice

Explanation

Li Junhua argues that local and regional processes are not just complementary but foundational to global digital inclusion efforts. He emphasizes that solutions must be grounded in local realities and community empowerment.


Evidence

IGF has evolved into a global ecosystem with over 176 national, regional, sub-regional, and youth IGFs active worldwide


Major discussion point

Capacity Building and Education Initiatives


Topics

Development | Sociocultural


Agreed with

– Ren Xianliang
– Qi Xiaoxia
– Nii Quaynor
– Chern Choong Thum

Agreed on

Capacity building and education are fundamental to bridging the digital divide


WSIS Plus 20 review provides opportunity to renew commitment to digital inclusion

Explanation

Li Junhua highlights the upcoming 20-year review of the World Summit on Information Society as a golden opportunity to renew global commitment to digital inclusion and meaningful access for all.


Evidence

References the progress made since WSIS Plus 10 review in 2015 when 4 billion people were offline compared to 2.6 billion today


Major discussion point

Regional and National Strategies


Topics

Development | Legal and regulatory


F

Francis Gurry

Speech speed

129 words per minute

Speech length

763 words

Speech time

353 seconds

Digital technology has penetrated all aspects of life making lack of access a major disadvantage – Digital Technology as Foundation of Modern Life

Explanation

Francis Gurry argues that digital technology has become fundamental to economic production, social communication, healthcare, education, and cultural exchanges. Any impairment in accessing these digital advantages creates major disadvantages for individuals and communities.


Evidence

Digital technology is responsible for cost efficiencies, quality outcomes, innovation, and competitive advantage in the economy, and enables improvements in social services, health, medicine, cultural exchanges, and educational opportunities


Major discussion point

Current State and Urgency of the Digital Divide


Topics

Development | Economic | Sociocultural


Agreed with

– Li Junhua
– Tripti Sinha
– Nii Quaynor
– Chern Choong Thum

Agreed on

Digital divide represents urgent global challenge requiring immediate attention


38% reduction in development funding next year creates crisis in addressing digital divide – Development Funding Crisis

Explanation

Francis Gurry warns of a massive crisis in development funding, with an estimated 38% reduction next year due to changes in US foreign aid policy and European countries diverting funds to military spending. This funding crisis makes it difficult to address the digital divide when resources are most needed.


Evidence

Attributes funding reduction to change of attitude of the United States in foreign aid and diversion of funding by European countries towards military spending; notes funding is scarcely sufficient to meet debt obligations


Major discussion point

Crisis Points and Emerging Challenges


Topics

Development | Economic


Artificial intelligence arrival at unprecedented speed risks exacerbating digital divide – AI as Accelerating Factor

Explanation

Francis Gurry identifies AI as another general purpose technology that poses risks of exacerbating the digital divide due to its rapid development and the massive investments being made by leading economies. The speed of AI development combined with the funding crisis creates a perfect storm for widening digital gaps.


Evidence

Points to data centers essential for AI infrastructure being located mainly in the north, with the exception of China, and the massive amounts of money being invested in AI development by leading economies


Major discussion point

Crisis Points and Emerging Challenges


Topics

Development | Infrastructure | Economic


Agreed with

– Tripti Sinha
– Nii Quaynor
– Qi Xiaoxia

Agreed on

AI poses both opportunities and risks for exacerbating the digital divide


T

Tripti Sinha

Speech speed

136 words per minute

Speech length

1010 words

Speech time

442 seconds

Need for infrastructure readiness including physical cables, DNS systems, and root servers – Technical Foundation Requirements

Explanation

Tripti Sinha emphasizes that while physical connectivity is essential, it’s only part of the solution. The Internet depends on a strong technical foundation including domain name systems, IP addresses, and root service systems that may not be visible to users but are crucial for reliable, secure, and scalable Internet function.


Evidence

ICANN coordinates this layer of the Internet, maintains DNS stability and security, manages Internet’s unique identifier systems, and supports root server deployment in underserved regions


Major discussion point

Infrastructure and Technical Solutions


Topics

Infrastructure | Legal and regulatory


Agreed with

– Li Junhua
– Francis Gurry
– Nii Quaynor
– Chern Choong Thum

Agreed on

Digital divide represents urgent global challenge requiring immediate attention


AI can optimize network infrastructure and enable efficient resource allocation for unconnected markets – AI-Powered Network Solutions

Explanation

Tripti Sinha argues that AI offers significant benefits for building networks in unconnected markets through efficient resource allocation, proactive maintenance, and enhanced security. AI-powered solutions can optimize network infrastructure and intelligently assess opportunity gaps.


Evidence

AI enables efficient resource allocation, proactive maintenance, enhanced security, and can intelligently assess opportunity gaps for addressing digital divide


Major discussion point

Infrastructure and Technical Solutions


Topics

Infrastructure | Development


Agreed with

– Francis Gurry
– Nii Quaynor
– Qi Xiaoxia

Agreed on

AI poses both opportunities and risks for exacerbating the digital divide


Multi-stakeholder governance model remains relevant for keeping Internet stable and globally connected – Multi-stakeholder Model Importance

Explanation

Tripti Sinha advocates for ICANN’s multi-stakeholder model that brings together governments, private sector, civil society, and technical community. She argues this open, collaborative, and technically grounded approach has kept the Internet stable, interoperable, and global.


Evidence

ICANN was created as a multi-stakeholder organization bringing together various stakeholders, and this model has helped keep the Internet stable and interoperable


Major discussion point

Governance and International Cooperation


Topics

Legal and regulatory | Infrastructure


Agreed with

– Li Junhua
– Ren Xianliang
– Nii Quaynor

Agreed on

Multi-stakeholder governance approaches are essential but face implementation challenges


Disagreed with

– Qi Xiaoxia

Disagreed on

Governance approach – Multi-stakeholder vs State sovereignty


Risk of fragmentation from state-led approaches and new multilateral models threatening single Internet – Fragmentation Risks

Explanation

Tripti Sinha warns that an increasing number of governments are exploring state-led approaches to infrastructure and governance, with some considering new multilateral models. She argues this divergence from global technical norms threatens the Internet’s core functionality and could separate Global South countries from the global Internet.


Evidence

Notes that countries from the global South could find themselves separated from the global Internet and part of incompatible networks


Major discussion point

Governance and International Cooperation


Topics

Legal and regulatory | Infrastructure


Disagreed with

– Qi Xiaoxia

Disagreed on

Risk assessment of fragmentation vs sovereignty protection


Millions cannot engage with Internet in their own language creating participation barriers – Language Barriers

Explanation

Tripti Sinha identifies language as a significant barrier to Internet participation, noting that millions of users still cannot fully engage with the Internet in their own language or script. This creates barriers to cultural and linguistic participation in the digital world.


Major discussion point

Language and Cultural Inclusion


Topics

Sociocultural | Human rights


Universal acceptance and internationalized domain names critical for cultural and linguistic participation – Multilingual Internet Access

Explanation

Tripti Sinha explains that ICANN’s work on universal acceptance and internationalized domain names directly addresses language barriers. These initiatives ensure that domain names and email addresses in local scripts work across devices, applications, and platforms, enabling cultural and linguistic participation.


Evidence

ICANN’s initiatives ensure domain names and email addresses in local scripts work across devices, applications, and platforms, requiring technical community collaboration


Major discussion point

Language and Cultural Inclusion


Topics

Sociocultural | Infrastructure | Multilingualism


R

Ren Xianliang

Speech speed

128 words per minute

Speech length

324 words

Speech time

151 seconds

Focus on sustainable infrastructure operation and digital education as the biggest equalizer – Sustainable Infrastructure Focus

Explanation

Ren Xianliang emphasizes the importance of sustainable operation of infrastructure so that digital benefits can truly benefit local populations. He argues that in the digital age, education and training serve as the biggest equalizer for addressing digital divides.


Evidence

Mentions extending infrastructure to developing countries and establishing digital training centers with localized courses to open doors to digitalization


Major discussion point

Infrastructure and Technical Solutions


Topics

Development | Infrastructure | Sociocultural


Agreed with

– Li Junhua
– Qi Xiaoxia
– Nii Quaynor
– Chern Choong Thum

Agreed on

Capacity building and education are fundamental to bridging the digital divide


Need for multilateral participation framework ensuring developing countries’ participation rights – Global Governance Participation

Explanation

Ren Xianliang calls for improving global digital governance mechanisms to ensure participation rights of developing countries. He advocates for international cooperation under multilateral participation frameworks to realize the vision of building, sharing, and governing together.


Evidence

Notes that developing countries are working on key governance mechanisms such as digitalization rights and technical supervision


Major discussion point

Governance and International Cooperation


Topics

Legal and regulatory | Development


Agreed with

– Li Junhua
– Tripti Sinha
– Nii Quaynor

Agreed on

Multi-stakeholder governance approaches are essential but face implementation challenges


World Internet Conference provides platform for Global South to share digital development dividends – International Platform Creation

Explanation

Ren Xianliang positions the World Internet Conference as an international organization that provides a platform for the Global South to participate in digital development. He invites global participation in membership and cooperation to promote technology sharing and capability complementarity.


Evidence

WRC sincerely invites more enterprises, institutions and individuals from all over the world to join membership and start cooperation


Major discussion point

Regional and National Strategies


Topics

Development | Legal and regulatory


Q

Qi Xiaoxia

Speech speed

142 words per minute

Speech length

1202 words

Speech time

506 seconds

China advocates respecting sovereignty in cyberspace and opposing cyber hegemony – Sovereignty in Cyberspace

Explanation

Qi Xiaoxia argues that all countries, regardless of size, strength, and wealth, have the right to independently choose their own path of Internet development and governance models. China advocates for respecting sovereignty in cyberspace and opposes cyber hegemony and politicization of technological issues.


Evidence

Chinese think tanks have released Sovereignty in Cyberspace: Theory and Practice, versions 1.0 to 4.0, providing a systematic study of sovereignty application in digital processes


Major discussion point

Governance and International Cooperation


Topics

Legal and regulatory | Human rights


Disagreed with

– Tripti Sinha

Disagreed on

Risk assessment of fragmentation vs sovereignty protection


Digital capacity building through training workshops and knowledge sharing platforms essential – Digital Capacity Building

Explanation

Qi Xiaoxia emphasizes the importance of practical cooperation to enhance digital capacity for the Global South, particularly as AI and emerging technologies raise the threshold for digital development. She advocates for international cooperation in capacity-building and targeted training programs.


Evidence

China will launch five new training workshops for Latin America and Caribbean countries, and for ASEAN countries, with plans for five more workshops for the Global South


Major discussion point

Capacity Building and Education Initiatives


Topics

Development | Sociocultural


Agreed with

– Li Junhua
– Ren Xianliang
– Nii Quaynor
– Chern Choong Thum

Agreed on

Capacity building and education are fundamental to bridging the digital divide


China’s commitment to helping Global South through AI capacity building and international cooperation – China’s Global South Support

Explanation

Qi Xiaoxia outlines China’s comprehensive approach to supporting the Global South, including hosting the World Internet Conference, releasing practice cases, and implementing UN resolutions on AI capacity building. China positions itself as an advocate, promoter, and pioneer in helping bridge the digital divide.


Evidence

UN General Assembly adopted China-sponsored resolution on AI capacity building; China announced action plan with ten major actions; Chinese think tanks launched research report on global AI governance


Major discussion point

Regional and National Strategies


Topics

Development | Legal and regulatory


N

Nii Quaynor

Speech speed

172 words per minute

Speech length

933 words

Speech time

324 seconds

Africa shows fragile infrastructure despite improving connectivity and user growth – African Infrastructure Challenges

Explanation

Nii Quaynor describes Africa’s digital economy as being at midway user penetration with improving connectivity, but having fragile infrastructure and known technical capacity needs. Despite progress in various areas, fundamental challenges remain in building resilient digital infrastructure.


Evidence

Africa is at 4.4 domain names per thousand people compared to the global average of 45 per thousand; 10 ccTLD registries hold 92% of names; only 13 ICANN-accredited registrars operate in Africa versus over 1,000 globally


Major discussion point

Current State and Urgency of the Digital Divide


Topics

Infrastructure | Development


Agreed with

– Li Junhua
– Francis Gurry
– Tripti Sinha
– Chern Choong Thum

Agreed on

Digital divide represents urgent global challenge requiring immediate attention


Every new technology brings distinct divides and may widen existing ones – Technology Divide Pattern

Explanation

Nii Quaynor provides historical perspective showing that technology divides have a long history, with each new technology era bringing its own distinct challenges. He traces this pattern from the 1970s computer science era through the 1990s Internet arrival to current AI developments.


Evidence

In the 70s: missing human resources for computer science; 80s: scientific instrumentation and VLSI deficiencies; 90s: Internet arrival with new divides


Major discussion point

Crisis Points and Emerging Challenges


Topics

Development | Infrastructure


AI threatens digital divide most due to high infrastructure costs and technical skill requirements – AI Infrastructure Barriers

Explanation

Nii Quaynor warns that maturing AI technology poses the greatest threat to the digital divide because of the associated high costs of infrastructure, high power requirements, and technical skills needed to be on the supply side. This makes it particularly challenging for Global South countries to participate.


Major discussion point

Crisis Points and Emerging Challenges


Topics

Infrastructure | Development | Economic


Agreed with

– Francis Gurry
– Tripti Sinha
– Qi Xiaoxia

Agreed on

AI poses both opportunities and risks for exacerbating the digital divide


Multi-stakeholder approach has potency but requires good moderation and can face participation challenges in Global South – Governance Implementation Challenges

Explanation

Nii Quaynor acknowledges the effectiveness of multi-stakeholder approaches while noting their requirements and limitations. He points out that participation in global multi-stakeholder organizations can be challenging for the Global South due to resource constraints and the need for quality participants.


Evidence

Multi-stakeholder approach needs a meritorious moderator to call consensus; participation in global MS organizations is voluntary or by paid staff; the Global South can have challenges finding good participants


Major discussion point

Governance and International Cooperation


Topics

Legal and regulatory | Development


Agreed with

– Li Junhua
– Tripti Sinha
– Ren Xianliang

Agreed on

Multi-stakeholder governance approaches are essential but face implementation challenges


Need for intergenerational mentorship and coaching to optimize knowledge transfer – Knowledge Transfer Optimization

Explanation

Nii Quaynor emphasizes the importance of optimizing knowledge transfer and capacity building through strong fundamental education and intergenerational mentorship and coaching. He sees this as crucial for building sustainable digital capacity in the Global South.


Major discussion point

Capacity Building and Education Initiatives


Topics

Development | Sociocultural


Agreed with

– Li Junhua
– Ren Xianliang
– Qi Xiaoxia
– Chern Choong Thum

Agreed on

Capacity building and education are fundamental to bridging the digital divide


C

Chern Choong Thum

Speech speed

151 words per minute

Speech length

855 words

Speech time

339 seconds

5.5 billion people are online but a third of the world remains disconnected, predominantly in Global South rural areas – Uneven Global Progress

Explanation

Chern Choong Thum cites ITU Facts and Figures 2024 to highlight the stark disparity in global internet access. While internet use is almost universal in high-income nations, it drops to just 27% in low-income economies, with 5G reach being only 4% in the poorest countries.


Evidence

ITU Facts and Figures 2024 shows internet use almost universal in high-income nations but only 27% in low-income economies; 5G reach is a mere 4% in the poorest countries


Major discussion point

Current State and Urgency of the Digital Divide


Topics

Development | Infrastructure


Agreed with

– Li Junhua
– Francis Gurry
– Tripti Sinha
– Nii Quaynor

Agreed on

Digital divide represents urgent global challenge requiring immediate attention


Malaysia’s NADI Centers provide internet access and ICT training, with AI skills programs – Community Access Centers

Explanation

Chern Choong Thum describes Malaysia’s National Information Dissemination Centres (NADI) as a tangible solution with 1,069 operational centers nationwide providing collective internet access and vital ICT training. These centers specifically target rural and urban poor communities and include AI skills training through collaboration with Microsoft.


Evidence

1,069 NADI Centers operational nationwide; collaboration between Malaysian Communications and Multimedia Commission (MCMC) and Microsoft for AI Teach Skills programme


Major discussion point

Capacity Building and Education Initiatives


Topics

Development | Infrastructure | Sociocultural


Agreed with

– Li Junhua
– Ren Xianliang
– Qi Xiaoxia
– Nii Quaynor

Agreed on

Capacity building and education are fundamental to bridging the digital divide


Malaysia’s Jandela Plan equipped 9 million premises with fiber optic access and boosted mobile speeds – National Infrastructure Success

Explanation

Chern Choong Thum highlights Malaysia’s Jandela Plan as a successful infrastructure initiative that has equipped over 9 million premises with fiber optic access, boosted median mobile download speeds to 105 Mbps, and extended internet coverage in populated areas to 98.66% as of December 2024.


Evidence

As of December 2024, Jandela equipped over 9 million premises with fiber optic access, boosted median mobile download speeds to 105 Mbps, extended internet coverage in populated areas to 98.66%


Major discussion point

Infrastructure and Technical Solutions


Topics

Infrastructure | Development


Malaysia champions inclusivity and sustainability as ASEAN Chair with human-centric policies – ASEAN Leadership Approach

Explanation

Chern Choong Thum explains Malaysia’s leadership role as ASEAN Chair in 2025, championing inclusivity and sustainability with the theme that it’s not enough to grow fast but must grow together and sustainably. Malaysia advocates for human-centric policies and ethical AI governance frameworks.


Evidence

Kuala Lumpur Declaration sealed in May envisions shared future where no one is left behind; Malaysia developed National Guidelines on AI Governance and Ethics and collaborated on ASEAN AI Guide


Major discussion point

Regional and National Strategies


Topics

Legal and regulatory | Development | Human rights


Agreements

Agreement points

Digital divide represents urgent global challenge requiring immediate attention

Speakers

– Li Junhua
– Francis Gurry
– Tripti Sinha
– Nii Quaynor
– Chern Choong Thum

Arguments

2.6 billion people remain offline, majority in least developed countries – Digital Divide as Opportunity Gap


Digital technology has penetrated all aspects of life making lack of access a major disadvantage – Digital Technology as Foundation of Modern Life


Need for infrastructure readiness including physical cables, DNS systems, and root servers – Technical Foundation Requirements


Africa shows fragile infrastructure despite improving connectivity and user growth – African Infrastructure Challenges


5.5 billion people are online but a third of the world remains disconnected, predominantly in Global South rural areas – Uneven Global Progress


Summary

All speakers acknowledge the digital divide as a critical and urgent global challenge, with billions still offline, particularly in the Global South. They agree this represents not just access gaps but opportunity gaps that require immediate coordinated action.


Topics

Development | Infrastructure


AI poses both opportunities and risks for exacerbating the digital divide

Speakers

– Francis Gurry
– Tripti Sinha
– Nii Quaynor
– Qi Xiaoxia

Arguments

Artificial intelligence arrival at unprecedented speed risks exacerbating digital divide – AI as Accelerating Factor


AI can optimize network infrastructure and enable efficient resource allocation for unconnected markets – AI-Powered Network Solutions


AI threatens digital divide most due to high infrastructure costs and technical skill requirements – AI Infrastructure Barriers


Digital capacity building through training workshops and knowledge sharing platforms essential – Digital Capacity Building


Summary

Speakers agree that AI represents a double-edged sword – offering solutions for network optimization and resource allocation while simultaneously threatening to widen the digital divide due to high infrastructure costs and technical requirements.


Topics

Development | Infrastructure | Economic


Multi-stakeholder governance approaches are essential but face implementation challenges

Speakers

– Li Junhua
– Tripti Sinha
– Ren Xianliang
– Nii Quaynor

Arguments

Bottom-up grassroots processes are foundational to global efforts, giving communities voice


Multi-stakeholder governance model remains relevant for keeping Internet stable and globally connected – Multi-stakeholder Model Importance


Need for multilateral participation framework ensuring developing countries’ participation rights – Global Governance Participation


Multi-stakeholder approach has potency but requires good moderation and can face participation challenges in Global South – Governance Implementation Challenges


Summary

All speakers support multi-stakeholder governance models while acknowledging practical challenges in implementation, particularly ensuring meaningful participation from Global South countries and communities.


Topics

Legal and regulatory | Development


Capacity building and education are fundamental to bridging the digital divide

Speakers

– Li Junhua
– Ren Xianliang
– Qi Xiaoxia
– Nii Quaynor
– Chern Choong Thum

Arguments

Bottom-up grassroots processes are foundational to global efforts, giving communities voice


Focus on sustainable infrastructure operation and digital education as the biggest equalizer – Sustainable Infrastructure Focus


Digital capacity building through training workshops and knowledge sharing platforms essential – Digital Capacity Building


Need for intergenerational mentorship and coaching to optimize knowledge transfer – Knowledge Transfer Optimization


Malaysia’s NADI Centers provide internet access and ICT training, with AI skills programs – Community Access Centers


Summary

Speakers unanimously agree that capacity building, education, and skills development are crucial equalizers in addressing the digital divide, with emphasis on localized training programs and community-based approaches.


Topics

Development | Sociocultural


Similar viewpoints

Both speakers provide historical and economic context showing that technology divides are persistent challenges that require sustained resources, with current funding crises making the situation more critical.

Speakers

– Francis Gurry
– Nii Quaynor

Arguments

38% reduction in development funding next year creates crisis in addressing digital divide – Development Funding Crisis


Every new technology brings distinct divides and may widen existing ones – Technology Divide Pattern


Topics

Development | Economic


Both speakers emphasize the importance of maintaining global Internet unity while respecting national sovereignty, though from different perspectives – technical stability versus political sovereignty.

Speakers

– Tripti Sinha
– Qi Xiaoxia

Arguments

Risk of fragmentation from state-led approaches and new multilateral models threatening single Internet – Fragmentation Risks


China advocates respecting sovereignty in cyberspace and opposing cyber hegemony – Sovereignty in Cyberspace


Topics

Legal and regulatory | Infrastructure


Both speakers advocate for inclusive international platforms and human-centric approaches to digital development, emphasizing the importance of ensuring no one is left behind.

Speakers

– Ren Xianliang
– Chern Choong Thum

Arguments

World Internet Conference provides platform for Global South to share digital development dividends – International Platform Creation


Malaysia champions inclusivity and sustainability as ASEAN Chair with human-centric policies – ASEAN Leadership Approach


Topics

Development | Legal and regulatory | Human rights


Unexpected consensus

Language and cultural barriers as significant digital divide factors

Speakers

– Tripti Sinha

Arguments

Millions cannot engage with Internet in their own language creating participation barriers – Language Barriers


Universal acceptance and internationalized domain names critical for cultural and linguistic participation – Multilingual Internet Access


Explanation

While most speakers focused on infrastructure and economic barriers, Tripti Sinha uniquely highlighted language and cultural barriers as significant factors in the digital divide, representing an important but often overlooked dimension of digital inclusion.


Topics

Sociocultural | Infrastructure | Multilingualism


Digital divide as public health priority

Speakers

– Chern Choong Thum

Arguments

5.5 billion people are online but a third of the world remains disconnected, predominantly in Global South rural areas – Uneven Global Progress


Explanation

Chern Choong Thum uniquely framed the digital divide as a public health issue, noting how digital exclusion deepens health inequalities and cuts off access to life-saving services, providing a medical perspective not emphasized by other speakers.


Topics

Development | Human rights


Overall assessment

Summary

Strong consensus exists among speakers on the urgency of addressing the digital divide, the dual nature of AI as both solution and challenge, the importance of multi-stakeholder governance, and the critical role of capacity building and education.


Consensus level

High level of consensus with complementary perspectives rather than conflicting views. The agreement spans technical, policy, and implementation aspects, suggesting a mature understanding of the challenges and potential for coordinated action. The consensus implies strong foundation for international cooperation and coordinated strategies to bridge the digital divide.


Differences

Different viewpoints

Governance approach – Multi-stakeholder vs State sovereignty

Speakers

– Tripti Sinha
– Qi Xiaoxia

Arguments

Multi-stakeholder governance model remains relevant for keeping Internet stable and globally connected – Multi-stakeholder Model Importance


China advocates respecting sovereignty in cyberspace and opposing cyber hegemony – Sovereignty in Cyberspace


Summary

Tripti Sinha advocates for ICANN’s multi-stakeholder model bringing together governments, private sector, civil society, and technical community, while Qi Xiaoxia emphasizes state sovereignty in cyberspace and countries’ rights to independently choose their own Internet development paths and governance models


Topics

Legal and regulatory | Infrastructure


Risk assessment of fragmentation vs sovereignty protection

Speakers

– Tripti Sinha
– Qi Xiaoxia

Arguments

Risk of fragmentation from state-led approaches and new multilateral models threatening single Internet – Fragmentation Risks


China advocates respecting sovereignty in cyberspace and opposing cyber hegemony – Sovereignty in Cyberspace


Summary

Tripti Sinha warns that state-led approaches and new multilateral models could fragment the Internet and separate Global South countries from the global Internet, while Qi Xiaoxia frames state sovereignty as protection against cyber hegemony and politicization of technological issues


Topics

Legal and regulatory | Infrastructure


Unexpected differences

Multi-stakeholder governance effectiveness in Global South

Speakers

– Tripti Sinha
– Nii Quaynor

Arguments

Multi-stakeholder governance model remains relevant for keeping Internet stable and globally connected – Multi-stakeholder Model Importance


Multi-stakeholder approach has potency but requires good moderation and can face participation challenges in Global South – Governance Implementation Challenges


Explanation

While both speakers support multi-stakeholder approaches, Nii Quaynor provides a more nuanced critique highlighting practical implementation challenges in the Global South, including resource constraints and participation difficulties, which somewhat contradicts Tripti Sinha’s more optimistic view of the model’s universal applicability


Topics

Legal and regulatory | Development


Overall assessment

Summary

The main areas of disagreement center on governance approaches (multi-stakeholder vs state sovereignty), risk assessment of Internet fragmentation, and implementation strategies for addressing the digital divide


Disagreement level

Moderate disagreement level with significant implications. While speakers largely agree on the urgency of bridging the digital divide, their fundamental differences on governance models could impact international cooperation efforts. The tension between multi-stakeholder governance and state sovereignty represents a core challenge in global Internet governance that could affect policy coordination and resource allocation for Global South development initiatives


Partial agreements

Similar viewpoints

Both speakers provide historical and economic context showing that technology divides are persistent challenges that require sustained resources, with current funding crises making the situation more critical.

Speakers

– Francis Gurry
– Nii Quaynor

Arguments

38% reduction in development funding next year creates crisis in addressing digital divide – Development Funding Crisis


Every new technology brings distinct divides and may widen existing ones – Technology Divide Pattern


Topics

Development | Economic


Both speakers emphasize the importance of maintaining global Internet unity while respecting national sovereignty, though from different perspectives – technical stability versus political sovereignty.

Speakers

– Tripti Sinha
– Qi Xiaoxia

Arguments

Risk of fragmentation from state-led approaches and new multilateral models threatening single Internet – Fragmentation Risks


China advocates respecting sovereignty in cyberspace and opposing cyber hegemony – Sovereignty in Cyberspace


Topics

Legal and regulatory | Infrastructure


Both speakers advocate for inclusive international platforms and human-centric approaches to digital development, emphasizing the importance of ensuring no one is left behind.

Speakers

– Ren Xianliang
– Chern Choong Thum

Arguments

World Internet Conference provides platform for Global South to share digital development dividends – International Platform Creation


Malaysia champions inclusivity and sustainability as ASEAN Chair with human-centric policies – ASEAN Leadership Approach


Topics

Development | Legal and regulatory | Human rights


Takeaways

Key takeaways

The digital divide has evolved beyond infrastructure to encompass affordability, digital skills, and meaningful participation, with 2.6 billion people still offline globally


A critical crisis point exists due to a 38% reduction in development funding coinciding with AI’s rapid advancement, which risks dramatically exacerbating existing digital divides


Multi-stakeholder governance models remain essential for maintaining a unified, interoperable global Internet, but face implementation challenges in the Global South


Infrastructure development must be coupled with capacity building, digital literacy programs, and culturally inclusive solutions including multilingual Internet access


Bottom-up, community-driven approaches are foundational to bridging divides, requiring local empowerment alongside global coordination


AI presents both opportunities (network optimization, resource allocation) and threats (high infrastructure costs, technical skill requirements) for addressing digital divides


Regional cooperation and South-South collaboration are crucial, with successful models like Malaysia’s NADI Centers and China’s capacity building initiatives showing practical pathways forward


Resolutions and action items

World Internet Conference commits to deepening cooperation with Global South through continued dialogue and engagement platforms


China announced implementation of UN resolution on AI capacity building with ten major actions and five additional training workshops for Global South countries


Malaysia will leverage its 2025 ASEAN Chairmanship to champion inclusivity and sustainability themes in digital development


ICANN commits to continued support for technical resilience, multilingual access, and global connectivity in underserved regions


Call for international community to join multi-channel exchange platforms and assistance programs for Global South digital capacity building


WSIS Plus 20 review identified as opportunity to renew global commitment to digital inclusion and meaningful access for all


Unresolved issues

How to address the massive development funding crisis while meeting increased needs for AI-era digital infrastructure


Balancing national sovereignty in cyberspace with maintaining unified global Internet standards and interoperability


Making multi-stakeholder governance models work more effectively in Global South contexts where participation can be challenging


Preventing AI advancement from creating new forms of digital colonialism or widening existing technological gaps


Sustainable financing models for ongoing infrastructure maintenance and improvement in resource-constrained environments


Addressing the concentration and consolidation of dominant Internet providers that disadvantage newcomers and Global South participation


Suggested compromises

Respecting national sovereignty in cyberspace while maintaining global technical standards through neutral multi-stakeholder platforms


Leveraging AI technology to address digital divides (network optimization, resource allocation) while simultaneously building capacity to prevent AI from creating new divides


Combining top-down international cooperation frameworks with bottom-up community-driven solutions to ensure local relevance and global coordination


Balancing rapid technological advancement with sustainable, inclusive development that doesn’t leave communities behind


Integrating universal acceptance and internationalized domain names into national ICT strategies while maintaining global Internet interoperability


Thought provoking comments

Despite the progress, I think we are at a real crisis point in relation to the digital divide, and that crisis I think comes from two challenges. The first challenge is the crisis in development funding… And the second problem is that never has funding and development assistance been more needed than at the present time when artificial intelligence is coming online at such a speed that it is baffling to all of us.

Speaker

Francis Gurry


Reason

This comment reframes the entire discussion by identifying a critical paradox: just as AI creates unprecedented opportunities and needs for bridging the digital divide, the resources to address it are dramatically shrinking. Gurry quantifies this with the stark statistic of 38% less development funding, creating urgency around what could otherwise be seen as a gradual progress issue.


Impact

This comment fundamentally shifted the tone from optimistic progress reporting to crisis management. It influenced subsequent speakers to address practical solutions and international cooperation more urgently. The ‘crisis framing’ became a recurring theme, with later speakers like Tripti Sinha acknowledging ‘we are in a financial crisis’ and emphasizing the need for strategic coordination.


As the old adage goes, knowledge begets knowledge, wealth begets wealth, and those who possess these will only have the opportunity to obtain more. Similarly, innovation begets innovation. And for those who are not part of this opportunity ecosystem, you know, they will suffer and they will fall behind.

Speaker

Tripti Sinha


Reason

This philosophical observation introduces a systems thinking perspective that explains why the digital divide is self-perpetuating and accelerating. It moves beyond technical solutions to address the fundamental economic and social dynamics that make digital inequality a compounding problem rather than a static gap.


Impact

This comment deepened the analytical framework of the discussion, moving it from infrastructure-focused solutions to systemic inequality concerns. It provided intellectual foundation for why urgent, coordinated action is needed and influenced the conversation toward more holistic approaches that address root causes rather than just symptoms.


It appears every new technology comes with its distinct divides, and some may widen other divides… The maturing AI technology threatens the digital divide the most, given associated high cost of infrastructure, high power requirements and technical skills needed to be on the supply side.

Speaker

Nii Quaynor


Reason

This historical perspective from someone dubbed the ‘Father of the Internet in Africa’ provides crucial context by showing that digital divides are not anomalies but predictable patterns that accompany technological advancement. His ground-level experience adds authenticity to the theoretical discussions and warns that AI represents the most challenging divide yet.


Impact

Quaynor’s historical framing validated the crisis narrative established by earlier speakers while providing practical credibility from someone who has lived through multiple technology transitions. His comment influenced the discussion to consider AI not just as a solution tool but as a potential amplifier of existing inequalities, adding nuance to the technology-optimism expressed by other speakers.


Digital exclusion deepens health inequalities, inequities, cutting off access to life-saving services and vital health education… Digital inclusion is not just an economic imperative, it is also a public health priority.

Speaker

Chern Choong Thum


Reason

As the youth representative and a doctor, Thum brings a human-centered perspective that connects abstract digital policy to tangible life-and-death consequences. His medical background provides unique authority to discuss how digital divides translate into health disparities, making the issue more visceral and urgent.


Impact

This comment humanized the entire discussion by connecting digital access to fundamental human needs like healthcare. It broadened the conversation beyond economic development to include social justice and human rights dimensions, influencing the final framing of digital inclusion as a moral imperative rather than just a development goal.


The multi-stakeholder approach has its potency well-known, but is also known to have requirements… It is necessary to avoid capture and can sometimes result in a decision by fatigue. It also needs a meritorious moderator to call consensus in deliberations.

Speaker

Nii Quaynor


Reason

This is a rare moment of critical self-reflection about the governance model that underlies the entire forum. Quaynor acknowledges the limitations of the multi-stakeholder approach that everyone else takes for granted, introducing necessary skepticism about whether current governance structures are adequate for addressing the digital divide.


Impact

This comment introduced a meta-level critique that challenged the fundamental assumptions of the forum itself. It added complexity to discussions about governance solutions and influenced later speakers to be more specific about implementation mechanisms rather than just advocating for more multi-stakeholder cooperation.


Overall assessment

These key comments transformed what could have been a routine policy discussion into a more urgent, nuanced, and strategically focused conversation. Gurry’s crisis framing established the stakes, Sinha’s systems thinking explained the underlying dynamics, Quaynor’s historical perspective provided credibility and warnings, Thum’s health focus humanized the issues, and Quaynor’s governance critique added necessary self-reflection. Together, these interventions elevated the discussion from incremental progress reporting to strategic crisis response, while maintaining focus on practical solutions and human-centered outcomes. The comments created a progression from problem identification to systemic analysis to implementation challenges, resulting in a more sophisticated understanding of both the urgency and complexity of bridging the digital divide.


Follow-up questions

How can we develop a major international strategic plan to address the digital divide crisis caused by reduced development funding and rapid AI advancement?

Speaker

Francis Gurry


Explanation

Gurry identified a critical crisis point where development funding is decreasing by 38% while AI technology is advancing rapidly, creating an urgent need for coordinated international response


How can we make the multi-stakeholder approach work better in the Global South, particularly addressing governance divide issues?

Speaker

Nii Quaynor


Explanation

Quaynor highlighted challenges with multi-stakeholder participation in the Global South, including difficulties finding good participants and potential decision fatigue, suggesting this governance approach needs improvement


How can we review and reform frameworks to enable innovation and creation rather than just regulate usage in rapidly evolving technology environments?

Speaker

Nii Quaynor


Explanation

Quaynor noted that the absence of stimulative and adaptive frameworks for rapidly evolving technology tends to stall innovation, requiring policy reform


Where is the revenue to maintain, improve and develop infrastructure services constantly in the Global South?

Speaker

Nii Quaynor


Explanation

Quaynor raised concerns about the economic sustainability of internet infrastructure in the Global South, questioning the financial model for ongoing maintenance and development


How can AI be harnessed to address the digital divide rather than generate new divides?

Speaker

Nii Quaynor


Explanation

Given AI’s high infrastructure costs, power requirements, and technical skills needed, there’s a need to explore how AI can be leveraged for good and digital unity rather than widening gaps


How can we ensure that universal acceptance and internationalized domain names work across all devices, applications, and platforms through technical community coordination?

Speaker

Tripti Sinha


Explanation

Sinha emphasized that language barriers prevent millions from fully engaging with the Internet, requiring coordinated technical efforts across the technology stack


How can we prevent fragmentation at the technical level and maintain a single interoperable Internet while respecting national interests?

Speaker

Tripti Sinha


Explanation

Sinha warned about the growing risk of technical fragmentation as governments explore state-led approaches, which could separate Global South countries from the global Internet


How can we optimize knowledge transfer and capacity building through intergenerational mentorship and coaching in the Global South?

Speaker

Nii Quaynor


Explanation

Quaynor suggested this as a key strategy for addressing digital divides, but the specific mechanisms and implementation approaches need further development


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Open Forum #78 Shaping the Future with Multistakeholder Foresight

Session at a glance

Summary

This discussion focused on a strategic foresight project commissioned by the German Federal Ministry for Digital Transformation, which developed scenarios for internet governance in 2040. The session featured Philipp Schulte from the German ministry, Julia Pohler, who led the scenario development task force, and panelists Anriette Esterhuysen and Gbenga Sesan, who were interviewed as part of the process.


Julia Pohler explained that strategic foresight is not about predicting exact futures but rather developing plausible scenarios to help stakeholders prepare for uncertainties and disruptions. The German project created four distinct scenarios exploring different trajectories for internet governance over the next 15 years, ranging from continued geopolitical competition to complete internet fragmentation to over-regulation to transformation toward public goods orientation. The process involved a 15-member German task force representing diverse stakeholder groups, supplemented by interviews with international experts to bring global perspectives.


A key finding was that geopolitics and the role of states emerged as the dominant driving forces across nearly all scenarios, with actions by major powers like the US, China, and Russia being central factors. Pohler noted that geopolitical developments have already moved faster than anticipated when the scenarios were written, suggesting the reality is outpacing their projections. Significantly, none of the scenarios except one showed a bright future for multi-stakeholder internet governance, with most depicting it as either hollowed out or institutionalized to the point of losing meaning.


The panelists found the interview process valuable and intellectually stimulating, though they noted some limitations including the abstract nature of the exercise and concerns about implementation. There was discussion about whether the report would be practically useful, with consensus that while the scenarios themselves might have limited direct application, the participatory process of developing them was extremely valuable for expanding thinking and preparing stakeholders for different possibilities.


The conversation highlighted broader challenges facing internet governance, including the tension between idealistic multi-stakeholder principles and geopolitical realities, the need for more concrete and courageous discussions about desired outcomes, and suggestions for making the IGF more interactive and willing to tackle difficult questions without seeking consensus on everything.


Keypoints

## Overall Purpose/Goal


This discussion centered on a strategic foresight project commissioned by the German Ministry for Digital Transformation, which developed four scenarios for internet governance in 2040. The session aimed to present the methodology, findings, and implications of this foresight exercise while exploring how such approaches could inform future policy-making and multi-stakeholder processes.


## Major Discussion Points


– **Strategic Foresight Methodology and Process**: The panelists explained how strategic foresight works – not to predict exact futures, but to develop plausible scenarios that help stakeholders prepare for uncertainties. The German project involved a 15-member task force representing diverse communities, supplemented by expert interviews and validation workshops to create four distinct scenarios for internet governance.


– **The Dominant Role of States in Future Scenarios**: A key finding was that geopolitical factors and state actions emerged as the primary drivers across most scenarios, rather than civil society or technical community initiatives. This represented a shift from earlier foresight exercises that might have emphasized corporate actors or civil society as main drivers.


– **Crisis of Multi-stakeholder Governance**: The scenarios revealed a troubling pattern where multi-stakeholder processes were either being hollowed out, undermined by state and corporate actors, or becoming so institutionalized that they lost their bottom-up character and meaningful impact. This finding prompted reflection on whether current multi-stakeholder models are living up to their promises.


– **Need for More Concrete and Courageous Approaches**: Panelists emphasized moving beyond abstract ideals to address specific challenges like fair taxation of big tech, data extractivism, and digital barriers created by platform monopolies. They called for “braver” multi-stakeholder forums willing to tackle difficult questions without consensus.


– **Practical Applications and Future Directions**: Discussion focused on how to make foresight exercises more useful through interactive formats like scenario games, better stakeholder engagement, and clearer pathways from analysis to policy implementation. Suggestions included redesigning IGF sessions to be more participatory and innovative.


## Overall Tone


The discussion maintained a constructive but increasingly critical tone. It began with technical explanations of the foresight methodology, but evolved into more pointed critiques of current multi-stakeholder governance limitations. While panelists expressed appreciation for the German government’s initiative, they became more direct about systemic problems and the need for fundamental changes. The tone remained collaborative throughout, with participants building on each other’s observations and offering concrete suggestions for improvement, though there was an underlying urgency about addressing the challenges identified in the scenarios.


Speakers

**Speakers from the provided list:**


– **Philipp Schulte** – Senior policy officer at the Federal Ministry for Digital Transformation and State Modernization in Germany


– **Julia Pohler** – Co-lead of the research group on politics of digitalization at the Berlin Social Science Center, co-author of the future scenarios on internet governance, and task force lead for the strategic foresight project


– **Anriette Esterhuysen** – Senior advisor for global and regional Internet governance with the Association for Progressive Communications, former MAG Chair, long-time IGF participant


– **Gbenga Sesan** – Executive Director of the Paradigm Initiative, IGF MAG leadership panel member


– **Audience** – Various audience members asking questions (roles/titles not specified for most)


**Additional speakers:**


– **Professor Roberta Haar** – Professor at Maastricht University, leading a horizon project called Remit Research, who develops scenario testing workshops and games


– **Bertrand de la Chapelle** – Executive director of the Internet and Jurisdiction Policy Network


Full session report

# Strategic Foresight for Internet Governance: Scenarios for 2040 – Discussion Report


## Introduction and Context


This discussion centered on a strategic foresight project commissioned by the German Federal Ministry for Digital Transformation and State Modernization, which developed four scenarios for internet governance in 2040. The session brought together Philipp Schulte, a senior policy officer from the commissioning ministry; Julia Pohler, co-lead of the research group on politics of digitalization at the Berlin Social Science Center, who participated in the scenario development; and experienced internet governance practitioners Anriette Esterhuysen (senior advisor for global and regional Internet governance with the Association for Progressive Communications) and Gbenga Sesan, both of whom were interviewed as experts during the research process.


## Strategic Foresight Methodology and Process


### Understanding Strategic Foresight


Julia Pohler explained that strategic foresight differs fundamentally from prediction or forecasting. Rather than attempting to determine what will happen, the methodology develops plausible scenarios to help stakeholders prepare for uncertainties and potential disruptions. As she noted, “We’re not trying to predict the future, but we’re trying to develop scenarios that are plausible and that help us to think about what could happen and how we can prepare for it.”


The German project employed a structured participatory methodology involving a 15-member task force representing diverse stakeholder groups from German civil society, academia, and technical communities. This core group was supplemented by interviews with international experts to ensure global perspectives were incorporated.


### The Four Scenarios


The task force developed four distinct scenarios for internet governance in 2040:


1. **Continuation of current trends** – Characterized by ongoing geoeconomic competition between major powers


2. **Complete systemic collapse** – Featuring internet fragmentation and breakdown of current governance structures


3. **Over-regulation** – Where everything becomes heavily controlled and regulated


4. **Transformation toward public goods orientation** – A shift toward treating internet infrastructure and governance as public goods


### Expert Perspectives on the Process


Anriette Esterhuysen found the interview process intellectually stimulating, noting that “foresight exercises are valuable for the participatory process itself, allowing creative thinking beyond current constraints.” However, she noted some limitations of the individual interview format, preferring group dynamics for such discussions.


Gbenga Sesan appreciated the intellectual challenge, describing it as valuable for helping organizations adjust their strategies as reality unfolds. He emphasized that such exercises help stakeholders think beyond immediate concerns and consider longer-term implications of current trends.


## Key Finding: The Dominance of Geopolitical Factors


### States as Primary Drivers


One of the most significant findings was the unexpected prominence of geopolitical factors and state actions across nearly all scenarios. Julia Pohler observed that “the most important factor in almost all scenarios is actually the role of states and the role of governments,” which emerged more strongly than anticipated during the scenario development process.


This finding proved particularly prescient given subsequent developments. Pohler noted that “we wrote these scenarios before President Trump took office again. And before we kind of saw this increase of geopolitical tensions… So I think today we would have gone even further in emphasizing the role of geopolitics and geoeconomics… the reality is actually moving faster than we thought it would.”


### Reframing State Involvement


The prominence of state actors prompted important discussions about their role in internet governance. Philipp Schulte used a gardening metaphor to describe the state’s role: “The state should act as a gardener, ensuring that all stakeholder groups can perform their roles effectively.” This positioned governments not as threats to multi-stakeholder governance but as essential facilitators.


Anriette Esterhuysen challenged traditional assumptions, arguing that “states have always had profound impact on governance inclusivity and should not be seen as undermining multistakeholder ideals.” She provided concrete examples from WSIS processes where government positions significantly shaped outcomes.


## Challenges to Multi-Stakeholder Governance


### Sobering Scenario Outcomes


Perhaps the most concerning finding was what the scenarios revealed about the future of multi-stakeholder governance. Julia Pohler stated bluntly: “I think in all of these scenarios we ended up writing possible futures in where multi-stakeholder processes are either being hollowed out or kind of completely undermined by corporate actors and state actors… So I would say that in all of these scenarios, somehow multi-stakeholderism and governance has outlived its promises.”


Only one of the four scenarios showed a positive future for multi-stakeholder approaches, with the others depicting processes that had either lost meaning through over-institutionalization or been systematically undermined.


### Calls for More Direct Engagement


Anriette Esterhuysen argued for more concrete and courageous discussions in multi-stakeholder forums. She provided specific examples of topics often avoided: “I want fair tax payment by big tech so that countries who need revenue to actually build a fiber optic backbone… I want data flows that are not based on an extractive sort of colonial type model… but it’s almost impossible to say those things in the context of so many multi-stakeholder fora because you don’t want to offend the private sector.”


She criticized the tendency toward “watered down set of wedding vows” rather than meaningful policy discussions, calling for forums willing to tackle controversial issues without requiring universal agreement.


## Technology’s Complex Role


### Unfulfilled Promises


Anriette Esterhuysen reflected on technology’s role, noting that “early hopes that technology would be an equalizer between rich and poor have not fully materialized.” This observation highlighted how technological development interacts with existing power structures rather than automatically disrupting them.


### Corporate Fragmentation


Julia Pohler highlighted an often overlooked source of digital fragmentation: “Digital barriers created by big tech companies fragment online spaces as much as government regulation.” This expanded the discussion beyond traditional concerns about government restrictions to consider how platform business models contribute to digital division.


## Audience Engagement and Future Directions


### Collaborative Proposals


Professor Roberta Haar, leading a horizon project called Remit Research, proposed collaboration to develop scenario testing workshops and games based on the German project’s findings. She suggested moving beyond traditional report formats toward more interactive methodologies.


Bertrand de la Chapelle contributed observations about the limitations of multilateral systems and the upcoming discussions about the IGF’s future mandate in 2026, emphasizing the need for institutional evolution.


### Making Foresight More Interactive


Several speakers advocated for more engaging approaches to strategic foresight. Anriette Esterhuysen suggested that “the IGF needs renewal and redesign to tackle difficult questions without seeking consensus on everything,” proposing “more participative methodologies including scenario games rather than traditional panel formats.”


## Implementation Challenges and Next Steps


### Government Perspective


Philipp Schulte acknowledged the challenge of translating foresight exercises into policy action. He noted that Germany is currently in a government transition period, which affects the timeline for publishing the full report. The new ministry responsible for strategic foresight will handle publication once established.


### Process Improvements


Julia Pohler suggested that “future foresight processes should include government representatives directly in task forces for better implementation.” However, this raised questions about maintaining the independence and multi-stakeholder character of such exercises.


### Updating Scenarios


Given the rapid pace of geopolitical change, speakers acknowledged that the scenarios would benefit from updates reflecting current realities, potentially through addendums or annexes to maintain relevance.


## Practical Applications


### Beyond Traditional Formats


The discussion revealed strong interest in moving beyond conventional panel discussions toward more participatory approaches. The German scenarios could serve as the basis for interactive workshops, games, and simulation exercises that engage stakeholders more directly.


### Institutional Reform


The conversation included specific suggestions for reforming existing governance institutions, particularly the IGF. Speakers called for greater willingness to address controversial topics while maintaining inclusive character, and for adopting new methodologies that move beyond consensus-seeking on every issue.


## Conclusion


This discussion demonstrated the value of strategic foresight exercises both as planning tools and as catalysts for critical reflection on existing governance approaches. The sobering finding that multi-stakeholder governance faces significant challenges in most future scenarios prompted important conversations about reform and renewal.


The unexpected prominence of geopolitical factors across scenarios highlighted the need to better understand and engage with state actors as potential enablers rather than threats to inclusive governance. Similarly, the recognition that corporate actions contribute significantly to digital fragmentation suggested the need for more direct engagement with platform business models and big tech power.


The strong interest in more interactive and participatory methodologies, combined with calls for more courageous discussions of controversial topics, indicated potential pathways for more effective governance approaches. The challenge moving forward will be translating these insights into concrete actions while maintaining the inclusive character that defines multi-stakeholder approaches.


As Julia Pohler noted, the reality of geopolitical change is moving faster than anticipated, making such foresight exercises increasingly valuable for preparing stakeholders to navigate uncertain futures. The German project serves as both a methodological model and a wake-up call for the internet governance community to engage more seriously with the fundamental challenges facing current governance models.


## Session transcript

Philipp Schulte: Hello, good morning, good afternoon everybody. Welcome to our open forum, Shaping the Future with Multi-Stakeholder Foresight. My name is Philipp Schulte, I'm a senior policy officer at the Federal Ministry for Digital Transformation and State Modernization in Germany, and I'm happy to see you all here on site and online, and also on the panel and on our online panel. I will briefly explain what this session is about. We have an online moderator with us, Lars. You can ask questions here on site and online after the first round of questions, and you are very welcome to do so; I will give you a lot of time for that, since I know some of the people here in the room have been involved in this exercise, and I'm happy to discuss with you. So what is this session about? This session is about a project of our ministry which is called Strategic Foresight. Some of you might know that Germany published last year its first strategy for international digital policy ever, and there were several follow-up measures to this. There was funding for the IGF secretariat, which we all very much welcome, but there was also a fellowship for international digital policy for young fellows, who are also around here at the IGF, and there was a process for strategic foresight, which we will dive into on this panel. And for that I'm very much excited to have you here on stage and online. We have here Anriette Esterhuysen. She's a senior advisor for global and regional Internet governance with the Association for Progressive Communications, a former MAG Chair, and around at the IGF since, I don't know, forever. Yes. Yeah, and next to Anriette there's Gbenga Sesan, Executive Director of the Paradigm Initiative and also an IGF Leadership Panel member, and also, yeah, not new to the ecosystem here, I can say. And online we have Julia Pohler.
She's co-lead of the research group Politics of Digitalization at the Berlin Social Science Center and co-author of the future scenarios on internet governance we are discussing here; in that role she was a task force lead and developed the scenarios. She's online, and I hope she can hear us. Without further ado, I will give the word to our panelists, starting with Julia online. Julia, you have been a task force lead in this experiment on strategic foresight with our ministry. Maybe you can explain to the audience, which might not really be aware of this project or maybe doesn't know what strategic foresight really is: What was your role? What did you do with the task force members? Maybe you can also say a word about who was on the task force and what the outcome was.


Julia Pohler: Yes, thank you, can you hear me? Yeah, I can hear you well. Perfect. Thank you very much. And I'm so sorry that I cannot be with you at the IGF, but it's my son's birthday tomorrow, and I wouldn't miss this, not even for the IGF. I'm sorry. So I'm happy to join online, and I'm happy to say a few words about the process and the methodology involved, not so much about the scenarios; we can discuss them later. But maybe for those who are not familiar with strategic foresight, I would like to make a few points about what strategic foresight is, and then explain them using the example of our task force and what we did. So I think what is important to keep in mind when we speak about strategic foresight, in the field of Internet governance or elsewhere, is that strategic foresight is not about predicting the exact future, and I think that's something that we also struggled with in this process. Strategic foresight is a process that helps us deal with uncertainties by exploring possible futures. So it's more about thinking how we could prepare ourselves for different scenarios rather than trying to guess what will actually happen in the future. By using strategic foresight, and I think that's also the motivation of the German Ministry in launching this process, decision makers and stakeholders can better understand certain uncertainties that there are in the world, and in which direction they might develop, and then prepare for disruptions before they actually happen. That brings me to my second point. The first one is: it's not about predicting the future. The second one is: it's actually about developing future scenarios. Developing scenarios that are basically stories of plausible futures. That means that these futures that we develop in these scenarios don't have to be realistic. It's very likely that none of these stories that we develop will ever happen in that way, but they need to be plausible.
So in some way they could happen if certain kind of circumstances come together. So these kind of stories that we develop or these scenarios that we develop are designed to highlight in different ways how the future might unfold and then help us understand how we can, with certain actions, kind of go in one direction or the other. So for instance, in the project that we’re discussing here, which was called Strategic Foresight Internet Governance in the year 2040 and was commissioned by the German Ministry for Digitalization and Transport. So before we actually changed the name of that ministry, we created four distinct scenarios for internet governance in the 15 years, in the next 15 years. And so these four scenarios were really kind of plausible stories in which we could explore a range of possible futures, which went from the continuation of trends that we see today, kind of growth and geoeconomic and geopolitical competition, and where this leads. And the second one was more about a complete and total systemic collapse and a fragmentation of the internet in two distinct networks. And the third one was about a regulation of the digital world to a degree that everything becomes controlled in some way. And the fourth one was about a complete transformation of the internet governance structures that we have today. in a turn away from economic competitive logic towards kind of a shared commitment in promoting public goods. So all of these four scenarios are possible futures and none of them will happen, but they helped us kind of understand what we see as trends and how we can deal with these trends. And also, I think what’s important to keep in mind that these scenarios do not exclude each other. Parts of them could coexist, so it could happen a part of one and a part of the other scenario, but it help us discuss what is desirable and what are risks that we want to avoid and where we kind of see opportunities and where we want to go. 
And for this very reason, strategic foresight is a methodology that has been used since the early 2000s by international organizations, including a lot of UN agencies, by the European Commission, but also a lot by civil society organizations, to inform decisions, inform actions, and also inform policies. And I'm sure that Anriette and Gbenga will tell us more about that, because they have probably used foresight in the past too. And that brings me to my third point, speaking about civil society and other actors. So, the first one: it's not about predicting the future. The second one: it's about writing possible future stories. And the third one: it's to do this in a participatory way. We followed a very structured method, but we enriched it through focused discussions with experts and stakeholders from very different backgrounds, who came together to gather insights and then discuss different options and perspectives on where we might go in the future with internet governance. And let me explain in just a few minutes what this meant concretely for our process. I will guide you a little bit through the process that we used to develop these scenarios for the German Ministry of Digitalization. The entire process was mandated by the ministry, but it was coordinated by the German Agency for International Cooperation, the GIZ, who also provided the method experts who really helped all the task force members through this process and guided us methodologically through the discussion of how we develop the scenarios. And for the task force, we were 15 members who were invited and selected to represent the diverse communities that we have in Germany in internet governance: academia, business, civil society, and the technical community.
And the goal was to develop these four different scenarios for the next 15 years: basically, what will happen in the next 15 years in internet governance. So I was the content lead, and that also meant that I helped draft the scenarios, but it was really a joint process between all the different members of the task force. The task force members contributed at every stage of the scenario creation: we collected influential factors, we discussed what the impact of these factors might be, and then, based on the methodology, drafted these four possible futures, and in the next step also critically assessed their plausibility and constantly refined the writing of the scenarios. What's also important is that, because all members of this task force were from Germany, even though they represented different stakeholder groups, it was still a very German- or European-centred view that we had in this task force. So what we did is that I conducted interviews with specialists from various world regions and stakeholder groups to validate these draft scenarios, bring in new ideas, and bring in more global and diverse perspectives. Anriette and Gbenga were interviewed by me for this process, so that's also how they were involved. And finally, after we had a good draft of the scenarios and they were validated, we held a networking workshop to which the different members of the task force, but also a different set of experts, were invited. This was a virtual workshop, using a certain method to discuss and develop ideas about what these scenarios mean for their own actions and their own planning. And as far as I know, the scenarios are also now being used by the ministry to discuss potential options for action in the field of Internet governance. And I'll leave it at that.


Philipp Schulte: Thank you, Julia. That was really helpful for us all on stage, and also for the audience, to better understand what you did and what the German government, together with stakeholders, proposed here. So one important point is that my panelists here on stage were interviewed for these scenarios. Let me turn to you, Anriette and Gbenga. How was it for you to be interviewed in this project? Was that something familiar to you? Was it completely new? What was your experience during the interview? What were you thinking when you were reading the scenarios? I mean, maybe we come to that later. And yeah, what was your impression?


Anriette Esterhuysen: You want me to start? I have used this methodology before and it was quite interesting a long, long time ago. It was actually in South Africa in around the late 1990s, just shortly after liberation, after the first democratic government was in place. And it was being used in the context of planning for development and inclusion and actually also participative governance. And at the time, I found it immensely frustrating. And I think I wasn’t a very productive participant at all because I found the abstraction very frustrating because I knew exactly, you know, I was much younger. I thought I knew exactly what we need to do, what the problems are. And approaching it in this kind of roundabout way seemed to me, and the facilitator was from the U.S., which frustrated me even more. And I really did not find it very helpful. But now I’m much older, much wiser, and Julia is a very, very good interviewer. And also I think, you know, having been around Internet governance for a long time, I think we have become very, what’s the word, quite boring is maybe the best word, but there are more sophisticated words. I don’t think we’re being creative or innovative enough. I don’t think we’re applying critical thinking enough in how we are evolving Internet governance. So I actually found it very exciting and very interesting and enjoyed the process. I think abstraction is still an issue, and maybe we can talk about that a little bit more later. Yes, I found it really, it was a sort of stream of consciousness approach, but guided by Julia to focus on the plausible, but also not trying to think of what will actually happen, and then playing with those trajectories, but of course with the knowledge of the world that we are living in and working in. So I found it very useful, and I was very impressed actually that Germany had done this. I think my only sort of one, I would have liked to be part of a focus group or a group at some point. 
I think I found it, I would have found it more interesting in some ways to have a group dynamic. And then I think my only other question about it as well is the way in which you treat multi-stakeholder in how you are approaching the future of Internet governance. And I think in that sense, the study itself, I think, perhaps did not unpack or deconstruct what multi-stakeholder means. I think I would have actually possibly found it more valuable if it was scenarios of governance effective, accountable, whatever governance. Somehow I felt that the focus on multi-stakeholder became a little bit one-dimensional. You know, civil society, business, government, technical, which I think is actually one of the weaknesses in our entire ecosystem.


Gbenga Sesan: Well, it wasn't my first, but talking to Julia was also very interesting. I do interviews a lot, either research interviews where people are hoping that you support a thesis, or media interviews where people are hoping they can pigeonhole you into a position. So it was very helpful that there was no target outcome you could detect. And I think it was very helpful to be able to think on your feet, well, maybe on your seat, to think while you're having the conversation. I'd done this in 2007, a while ago, as part of the Desmond Tutu Leadership Fellowship, where we were trying to create scenarios for the future of Africa. It was an interesting process because for us at the time, it was like a compromise. There were people who felt things were going to go this way, and there were people who felt things were going to go the other way. The optimists, the pessimists, and maybe a small group in between. So doing futures, possible futures, was sort of a compromise. Everybody felt heard, and everybody felt that they saw the future in this. And what I also found interesting in this, like Anriette said, was that it was a government doing it. Typically, you would have this kind of project by civil society, thinking of the future. But it was good to know it was a government. And one of the things, I don't know if Julia remembers this, but one of the things that I was very keen on was implementation: that, as you continue to implement, you're able to look at the scenarios and adjust. Because one of the beautiful things about possible futures is that it won't happen exactly the same way. But when something happens close to a scenario that you've discussed, then you have an opportunity to either align with or run away from certain things. And I'm glad to hear that that is happening now. So for me, it was fun. There was no exact destination. There was no hidden agenda.
So it was a better conversation to allow me to think and to speak to the issues as I saw them. and what I thought could happen. The other involvement I had with scenario planning, and I think that Anriette was also involved with this, was I think in 2008, was it Elon University? I don’t remember the name, talking about predicting the future. And I remember one of the things I said about the future at that time was that we will begin to find confusion between work and play. And so sometime during COVID, they sent me an email and said, oh, wow, what you said is what is happening now. And I was like, no, I did not predict the pandemic. I was just a bit scared that that was going to be something that would happen in the future. So I think it was very helpful because when that moment came, I felt prepared because you had thought about it. And I think this is one of the beautiful things about creating possible futures.


Philipp Schulte: Yeah, thank you so much. That was super rich already, and it highlighted some of our ideas, but also some of the challenges within the process, and the outcomes in turn highlighted some of the challenges of the current environment, the current community. So yeah, thank you so much for your thoughts here. Julia, do you want to react directly? Otherwise I would ask you: what are your key takeaways from developing these scenarios? What do you consider to be the defining characteristics of the multistakeholder model of internet governance, which, as was just pointed out, came across as a bit one-dimensional? And maybe it's actually funny that you said that after reading the report, because that was our finding before starting this process. So you might now have a better basis for discussion; that's at least our hope. But I don't know, you have been thinking heavily about the multistakeholder process and have published about it. What is your opinion?


Julia Pohler: Yeah, thanks. Well, I also really enjoyed it. I'm a researcher, I work in academia, and we don't usually think about potential futures, right? We look at the past and the present. So for me, it was also a very interesting exercise. And I've done this three times now, also in internet governance before, in different contexts. And I thought it was very interesting. I also really enjoyed the interviews, because they really broadened my perspective, and I learned a lot from them. But when we got to the stage where we actually wrote the scenarios, and I looked at them with some distance after a while, I think what strikes me most is that the most important factor in almost all scenarios, maybe a little bit less in the last one, which is about a complete transformation, but in the first three, the key driving factor is actually the role of states and the role of governments. So in each scenario, the actions of states, in particular of important states, the US, China, but also Russia, and then, of course, we are from the EU, so we also looked at the EU, but the actions of particular states, including emerging powers, and the relationship between states and governance, was really the key defining factor in each of the scenarios. So geopolitics, and also geoeconomics in some way, were the main drivers for transformations in these scenarios. They are the main drivers of the futures and the key factors for the futures that we imagined. And, that is to say, we actually wrote these scenarios before President Trump took office again, and before we saw the increase of geopolitical tensions and economic competition that followed his taking office again.
So I think today we would have gone even further in emphasizing the role of geopolitics and geoeconomics in these scenarios, and written them even more around the tensions that we see. In some way I would even say that actual geopolitical developments have already overtaken the scenarios that we wrote only six or eight months ago. So reality is actually moving faster than we thought it would. And it's my assumption that, had we written these scenarios 10 years ago, we would have given a much less prominent role to states and governments and to the relationships between states. And I don't only think so, I'm actually sure of it, because I did such a foresight process in 2013, 2014, 2015, and there the key drivers were actually the corporate actors and civil society. So it has changed, and I think this is also a finding about how we see the world: states and geopolitical and geoeconomic actions have become more important again. As for multi-stakeholderism, I have to share Anriette's observation. Looking back at these scenarios with some distance, what is striking, and maybe even frightening, is that in none of these scenarios, except maybe the last one, which is very different, is there a bright future for multi-stakeholderism in internet governance. In all of these scenarios we ended up writing possible futures in which multi-stakeholder processes are either being hollowed out or completely undermined by corporate actors and state actors.
To some degree, the commitment to multi-stakeholderism in internet governance remains, at least at the discursive level, but we wrote scenarios where this commitment is either only lip service, or where multi-stakeholder processes are being so institutionalized and professionalized, and becoming so predictable, that they actually lose their meaningfulness: they lose their bottom-up character and the possibility to include voices that might diverge from the mainstream perspectives in these processes. So I would say that in all of these scenarios, multi-stakeholder governance has somehow outlived its promises. And since we ended up writing them that way, not really with that intention in mind, but this is how we ended up seeing the future, I think that's something that should give us some reflection on what we are doing and how we might transform our current model to make it more meaningful.


Philipp Schulte: I see a lot of nodding here. Do you want to react directly?


Gbenga Sesan: I mean, it’s, you know, I started nodding when you were talking about how fast the geopolitics have played out. And I think I remember part of our conversation at a time, we talked a bit about it, but I don’t think that anyone sort of could predict that things will move this fast. In fact, you know, by November, there were people who were doing scenarios, I mean, within organizations, we had to do some planning at Paradigm Initiative in November, you know, but there wasn’t that sense. And I think this is the relationship between insight and scenario planning. And this is why adjusting as you go is critical. So there are things that you sort of plan for and you dial them to a level seven and they get to a level nine and you have to tell yourself, listen, we can’t, I mean, it will be insanity for you to then take the actions you planned for level seven when you are at a level nine. I would say for multi-stakeholderism right now, not only is it not living up to some of the lofty, you know, definitions and branding it did, it’s also been threatened because of that reason, there are people who are then saying, yes, we’ve talked about the ideal multi-stakeholderism where everyone is an equal partner. Of course, we know not everyone is equal around the table. It hasn’t worked. So let’s try this other less perfect, but pragmatic model. And that itself is a challenge because I think, and two things to me, one is, yes, we must adjust, but we must also never lose that opportunity to dream of, to wish for a better scenario. It won’t be perfect. We have to adjust, we have to be realistic, but we shouldn’t move from optimism straight to pessimism. We should maintain a healthy dose of realism and say that some things may not be working now, but it is still possible to get things to become better.


Anriette Esterhuysen: Let me comment on what you said emerged about the role of states. I think absolutely, and that's not a surprise to me. In fact, what is a surprise to me is that there's still reluctance to talk about enhanced cooperation in this space, which is, you know, one of those words not to be named, words not to say. Because the reality is that how states engage or do not engage with one another has a profound impact on how inclusive governance is, how strong civil society can be, to what extent human rights are respected, or democratic institutions are able to grow and play their role. So much as we like it, there's this kind of fairytale notion of multi-stakeholder governance as this alternative dimension of perfect governance. I see it as a way, a way of arriving at more accountable, inclusive, effective governance. And states are a big part of that. I think what the multi-stakeholder approach gives us is a way of really putting on the table that states cannot do this on their own. And if they do it on their own, they're probably not going to do it very well. But that doesn't mean that states do not still have quite a profound role. And I think the other thing that the multi-stakeholder approach also gives us, or the way in which the IGF and Internet governance have evolved, is the fact that it's a diverse ecosystem. Internet governance has many types of decision-making processes, of development and standard-making processes. Some of them might be led by governments, some could be completely technical community-driven, and some might be more society-driven or private-sector-driven. I think what the multi-stakeholder approach gives us is the constant reminder that we need to connect these with one another and that they need to overlap and engage with one another. But it doesn't mean that there's this new sort of amorphous multi-stakeholder ideal which has to operate across the board.
So I do think it’s interesting that the role of states is important and I don’t think we should feel that that undermines the ideals that we are striving for in this space, which is to have inclusive and participative governance that achieves good results, public interest results.


Philipp Schulte: Yeah, I guess I couldn't agree more here, since, I mean, to the world of the digital, or to the internet, the state was maybe a foreign player for a long time. And now, I mean, I share the observation here that the state, and also the ministries, show up more and more. They show up more at the IGF, but they show up more at ICANN, they even show up more at the IETF right now. And they get involved, and I think that might be a natural process, since the state was reluctant to show up compared to other political areas or fields of politics. So that might be, I don't know, it can be healing, because as you said, the state is still important and can play a role. And this also triggered our thinking a bit about what our role is and what a good role for us would be. And when I think about it, I think the responsibility of the state is more like: there's a garden of multi-stakeholders, and maybe the responsibility of the state is to make sure that all flowers, all the different stakeholder groups, can perform in the role they want to perform and perform best. And maybe these scenarios can help the different groups; that was one idea behind it. So this leads me to my next question. I mean, the report is not published yet, but it will be published at some point. Is that something you could use in your daily work?


Gbenga Sesan: Absolutely. Not just because I contributed to it, but one of the things I was very keen to see was how all the ideas would come together to define what the scenarios would be. I think, well, at the risk of giving you more work, I think it needs to be updated very quickly with some new realities, maybe like an addendum or an annex or something like that. And I believe that that’s something that we will do, most likely. Anyone who picks that report, who looks at the scenarios, will be able to situate those scenarios in our current reality. So some of the geopolitics that we talked about at the time, it wasn’t as deep as some of the experiences we have right now. So I can imagine that it will be at least a starter for conversations. But absolutely, I think this is something that will be useful, not just the content of it, but also the principle behind it. The principle of creating possible futures and adjusting your strategy as you continue to see what has emerged and how close they are to the possible futures that you predicted. One key role was the role of the state, but another key role is the role of technology. And so, Anayat, you worked a lot on connectivity and worked with a lot of technologies here also in this area.


Philipp Schulte: What’s your assessment of the role of technology in these reports, and also in real life? And what can we learn from how technologies were implemented and introduced, for new technologies? And where do you see the dangers and opportunities?


Anriette Esterhuysen: You’re taking me away from scenarios now and foresight to reality, you know, in the present. I mean, I think one of the strengths of the report is that it does allow us to think of technology both as a force that has actual impact on its own, and as a sector that interacts with geopolitical conflicts and with different forms of societal change and organization. I mean, you were also going to ask me at one point, when I was so actively involved in trying to build Internet connectivity in Africa in the 80s, 90s and early 2000s, what our hopes were. And I think this is also a shift from WSIS and WSIS Plus 20. It was very much a belief, naive, obviously, that access to technology, and particularly access to communications technology, would be an equalizer. That it would be an equalizer between rich and poor, the center and the periphery, men, women, non-binary people. That it would be this set of tools and processes that creates engagement and cooperation. And of course, it didn’t quite pan out that way, but that is still part of what technology gives us. So I think the hard part about foresight, but also the interesting part, is to look at how this complex way in which individuals and societies engage with technology, and are changed by technology, will play out in different scenarios. And maybe that’s also one of the reasons why the role of states emerged as important, because I think when faced with unpredictability, there is also, I guess, a tendency to look at which institutions, in this context of unpredictability and insecurity, have the capacity and the responsibility to make sure things don’t go wrong. And I guess that’s also naive, because we also know that both corporations and states are unpredictable themselves.
So I’m not giving you a good answer here. And I’m going to actually answer the question you asked, Gbenga, which is: is this useful in my work? Is the report useful? I’m not sure if the report itself will be useful, though I think it could be. I think the exercise is enormously useful. I think participatory processes like this are very valuable to the people that are part of them. So to make the report useful, you’d have to find a way of using it in a context where people are actually able to discuss and think about it and engage with those scenarios. And then I think it could be very useful, because we do need to think more creatively. And I’m just going to give one example; we probably don’t have much time. This year, for those of you who don’t know, you’ve probably all heard so much about the World Summit on the Information Society (by the way, the action line on enabling environment is what governments are supposed to do: create an enabling environment). But when the WSIS was reviewed by the Commission on Science and Technology for Development, which is a UN body, part of ECOSOC, it was shortly after the US government had taken a position of not wanting to support the Sustainable Development Goals or use the concepts of development and sustainability. It was also shortly after the US had pronounced that gender is biological and that there are just two sexes. And so these featured in the negotiations around the WSIS, where people were talking about: have we got digital inclusion? Is there security? Are we achieving development goals? I was there as a civil society participant, and I saw the European states in particular shell-shocked, because it was so difficult for them to operate in this context, when a long-time partner in the Internet governance and World Summit process, the US, was moving outside or taking on a different position.
My first thought during that entire week was: I wish these governments had all done some foresight work. Maybe if they had, they’d actually be able to take advantage of this shift, be creative, form new alliances. And that’s why, certainly for diplomats, certainly for governments, for anyone who is involved in negotiation in a geopolitical or even in a multi-stakeholder context, I think it’s a very, very useful technique to use.


Philipp Schulte: You gave us a lot of homework here. Speaking of time, we still have some, and I’m happy to take questions from the audience. So if you have already prepared a question, please line up here; we have a mic. Otherwise, we are also able to take online questions, and we are more than happy if you join the discussion. If not, I will pick up on another point you made, about the level of abstraction, that the reports are abstract. Oh, we have one already. Yeah, please introduce yourself and…


Audience: Yes, good morning. My name is Professor Roberta Haar; I’m at Maastricht University, and I was on a panel on day zero. I’m also leading a Horizon project, Remit Research; I encourage you all to look at it. And I’m very excited about the work that you’re doing, because part of what we’re trying to develop is something called scenario testing workshops. In those workshops, we’ve also developed games, and we developed them with the Joint Research Centre at the EU Commission; they have this scenario exploration system. You’re shaking your head, so I guess you’re aware of it. So we have taken their system, used the data from our research, and developed scenario games. We’ve already played the first one, which we had on military AI at Erasmus University in Rotterdam, and we had extremely good results. We did exactly what Anriette said: we took the data and we brought it to people to play and discuss. We had four scenarios that we have been developing with our different data, and we still have four workshops to go. We will have one in Rome in April, so April of next year, obviously, one in Helsinki in September, and then we’ll also have a summit in Brussels. And I know Anriette is shaking her head because she’s on our supervisory board; she also suggested that I come today. So my first question was, is the report accessible? You already answered that. But my next one is: can we also adapt your data, and maybe collaborate in taking your data on to the next step, into a game, so that we can integrate it and then have policy ideas, invite policy stakeholders to our games, and play it? I’m hoping that I’ve noted all your details down; I want to write to Julia, and I’m hoping that we can maybe have some collaborative work there. Is that something that you would find interesting? Thank you.


Philipp Schulte: I guess I have to take the question first, but I’m happy to step in. So on the report, yes, it’s true, it’s not published yet. As you might know, we are in a government transition period in Germany, and we have set up a new ministry. But this new ministry will also be responsible for strategic foresight, so that’s a lucky coincidence in this case, and we are really optimistic that we can proceed at some level with this report, and also with the methodology, for sure, and with the work we have done. However, one idea behind it was that this report is not only for the government but also for all stakeholders, so I’m happy to reach out to everyone involved in the project. And I know that some civil society organizations in Germany, for example Wikimedia, are already taking the work and trying to work with the reports and with the methodology. So I’m happy to connect you. Julia, do you want to add anything?


Julia Pohler: No, you just mentioned what I would like to mention. I know about the Remit project; I went to one of your conferences last year discussing multilateralism and multi-stakeholderism, and I would be happy to also be involved. I think we have to go through the ministry to confirm this, since they probably have some kind of control over the material we produce, but I think it would be very helpful to take this further and develop it into a game, which would be fun. And I think that also connects to what I wanted to say about stakeholder engagement during this exercise; maybe I can come to that later, because it is also a challenge to keep people involved in these kinds of exercises. Bringing it into another format could be very helpful in the learning process for us all, in how we can do this differently, to maybe make it more fun for everybody involved, make the output more meaningful, and take the output to something that can then be used elsewhere. Yeah, Bertrand?


Audience: Good morning, my name is Bertrand de la Chapelle; I’m the executive director of the Internet & Jurisdiction Policy Network. Two things. First of all, congratulations to the German government for having undertaken this, because I think it’s a perfect place and venue for discussing also the meta level of where our institutions are going. The tool of scenarios, or foresight, is definitely a good one. I am extremely frustrated that you cannot present this, because it would have been perfect to build the session around it. So I’m waiting impatiently for the release of the scenarios of the foresight report. The second thing is precisely about these kinds of exercises, and they are extremely important. We know the limitations of those things, and we know the benefit of engaging people: it’s mostly the process of developing those things that is the most interesting, because it allows people to express what they see as the trends, what they see as the drivers, positive or negative. However, there are things that are always extremely difficult to anticipate in those environments; call them the black swans or the unexpected events. For those of us who are old enough, we can remember that when the World Wide Web emerged, everybody was talking about America Online and its domination, and how the future of electronic communications was going to be those mammoth companies or the telcos. And then something happened on the side. I want to keep faith in the fact that the multistakeholder spirit, not the model, because there is no such thing as a multistakeholder model, but the multistakeholder spirit, not only will be alive, but will ultimately permeate everything, because the reality today, with those geopolitical tensions, is that we are seeing more than ever that governments together cannot solve those problems.
I want to highlight, and I’ve said this in other sessions, that in the 20 years since the WSIS there hasn’t been one single agreement among all governments on digital issues, except a cybercrime convention sponsored by one of the countries that is most present behind cybercrime. That’s the ultimate irony of the limits of the multilateral system, which has to be preserved, don’t get me wrong; states are absolutely fundamental. But our inability so far to bring the different actors around the table, in environments like the IGF and other venues, is one of the reasons why we’re struggling to address those problems. At this juncture, in this exercise about scenarios, we need to also think a little bit more about what we want, not only what the trends are. And to finish, the WSIS Plus 20 process at the moment is entirely focused on producing another resolution in December. There’s one thing it should say and set the stage for, which is: what is going to be the future of the IGF? When and where, in 2026, do we discuss the evolution of the mandate and the evolution of the structure of the IGF? And as you’re discussing scenarios, thinking about the institutional arrangements internationally is a core follow-up, I think, for what you’ve been trying to achieve.


Philipp Schulte: So I couldn’t track a real question here, but it was provoking.


Gbenga Sesan: Thanks a lot for that, and I underline “want” here, when you were saying we should keep in mind what we want. That’s something I was speaking to earlier when I said there is reality, there is history, there is data, but there is also the desire that we have. We may be faced with challenges, but we need to come to the table with the ideal scenario that we want. What do we want? Because the challenge is, if you’re frustrated by history, by historical data, if you’re frustrated by some scenarios that paint a bleak future, then there’s no point; we might as well just throw up our hands and say, let’s sit down and watch TV. But if there is something that we want… What this brings to mind for me is that if you’re running or sailing or flying against the wind, you could either submit to the direction of the wind, which means you will go anywhere the wind takes you, or you could drive against the wind. My mathematics interest comes into play here: you think of the velocity to use, you think of the right angle of inclination, so that, worst case, you will not be pushed away and you will end up where you want to go. I think it’s really important that we know what we want, and knowing what we want has to come from everyone at the table. It cannot be only what the government wants; it cannot be what only one stakeholder wants. Of course, we all come with what we want, we have conversations, and in some cases we’ll have consensus and come together and agree on some things. But it is absolutely important, for want of another phrase, to just keep dreaming.


Anriette Esterhuysen: I’ve known Gbenga since he was very young, and sometimes he makes me feel very old and sometimes not, but today you make me feel very old. Because I think that, of course, we have to dream, but it’s not just about dreaming; it’s about concrete things. What is the WSIS all about? It’s about a people-centered, development-oriented, human rights-oriented information society, where people can use technology to improve their lives. To me, that’s more important than having an IGF, frankly, but I believe we need the IGF to get there. And I do agree with Bertrand that we have to renew the IGF, and I think that’s actually an interesting point about the foresight exercise as well. All of those scenarios, as Julia has said, depict a not very positive picture of multistakeholderism, which I think we should interpret as a real indicator that we need the IGF, and forums like the IGF. For me, the important thing, though, is that it is an IGF which allows the wind in and doesn’t close all the windows so that we can sit in our sort of safe, comfortable, multi-stakeholder space. Because I think, Gbenga, the reality is we don’t all want the same thing, and we’re not always going to have consensus. That doesn’t mean that we shouldn’t be in active, open conversation with one another. So for me, an IGF that actually allows us to tackle the big issues.


Julia Pohler: …to do this kind of process, or even take what we did now and move it forward, and to see why this matters to the members who, like me, wrote this report, and show them the real, clear benefits. I think one of the ways to make this more meaningful would be to actually see what, for example, the German government, which mandated this process, is now doing with the ideas we developed, and how these ideas actually help people within the ministry, people within the government, figure out what they want, as you just said, or figure out what they don’t want, and how this affects what the government should and should not be doing. That’s the opaque part for us, and this is also meant as a criticism of our task force itself, because we had difficulties picturing where we were going. It’s not meant as a criticism of the process itself, because I also know how it was organized: we had a government change in between, and the funding was also meant to engage stakeholders with each other. But what would be really helpful is to actually have people from the ministry in the task force next time you do this, or when you take this to the next step, because it would be extremely interesting to have people talking to each other about where this leads us and where we would want to go. I think it would also help the task force members better understand what their contributions are leading to, and it would help the government mandating this process better understand where these ideas are coming from and what kind of competing, or even compatible, visions and perspectives are behind them. So I think this is one of the ways to take this forward, or make it better in the next round.


Philipp Schulte: Yeah, those are valid points, and it’s good to hear that, because we were a bit reluctant to be on this task force: we didn’t want it to be a government-steered process, with scenarios written by stakeholders and then used as lip service. So we deliberately stayed out of the scenario writing. And I agree that it was a lot of work, and this was also a reason why the stakeholder group was mainly people from Germany or Europe, because otherwise it would have been even harder to bring them all together in Berlin two to three times to work on a scenario. So there were some restrictions, as I said. But our hope is that with the new government and the new responsibilities in the ministry, we can learn from this process and take it to the next level. But coming back to my original question about implementation and alternatives, what is your…


Gbenga Sesan: So I can go back to this word “want”, and of course I agree with you on starting, at times, from what we don’t want. In fact, knowing what you don’t want is in itself like knowing what you want: I want not to have what I don’t want. And we had this conversation on the leadership panel. We were inaugurated in August 2022, and we had a lot of conversations, and they almost always ended with: we don’t want this, we don’t want that. Many of you, I hope, have seen the Internet We Want paper that the leadership panel put out; that was the idea behind it. We have to, at some point, define certain things. There are certain things that we agree on. We don’t all agree on everything, but there are certain things that people will not feel too strongly against, and we could start with that. The Internet We Want paper talks about certain things that I’m sure some people will read and say, hey, rights online, maybe we don’t want that. But at least it is out there as something that certain stakeholders, and the majority of people, desire to have. So I think it’s absolutely important: yes, optimism, yes, dreaming, but also putting down in clear terms what we want. Because at times, what that does is that when you go into situations and you see reality, you can then say: this is the reality, this is what I want, and your task, your action, is then creating a pathway between where you are and where you want to be. If I only faced reality, I would definitely resign from my job right now. I mean, I work on a continent where I talk about digital rights and inclusion, and every other conversation I have with governments in the region is about clampdowns, or about them explaining away the clampdowns they have. But it helps that we know that this is the desired destination, this is where we’re at, and this is the tough work we have to do to get from point A, where we’re at, to point B, where we’d like to be.


Anriette Esterhuysen: You know, I think sometimes when we say what we want, particularly when we try to say it in a multi-stakeholder way, it sounds like some kind of watered-down set of wedding vows or whatever; I can’t think of a good analogy. I mean, I want fair tax payment by big tech, so that countries who need revenue can actually build a fiber-optic backbone, so that there is feasible, reasonable internet for institutions, for universities in a country, to be able to have some access to resources. I want data flows that are not based on an extractive, sort of colonial-type model. I also want competition within the private sector, and I want local private-sector operators in developing countries. There are lots of things I want that I think would create an enabling, open and inclusive internet, but it’s almost impossible to say those things in the context of so many multi-stakeholder fora, because you don’t want to offend the private sector. You don’t want to offend governments that shut down the internet. You don’t want to talk about the great firewall of this or that country. And I think we have to be willing to use this sort of multi-stakeholder modality with a little bit more courage. I think it will help us get there, but I do have a concrete suggestion for the IGF, because I think this methodology is so powerful. I think one of the things that has made multi-stakeholder fora, or the IGF as it has evolved, maybe also more difficult, is that it is now much more about institutions than about individuals. I mean, if Philipp had come to an IGF 15 years ago, he might have just been there as an individual, rather than as a representative of the German government.
Now, there are pros and cons both ways, but I think if you could maybe collaborate with Roberta and her team and come up with a game that we play at the next IGF, not in rooms like this, where we sit here and talk and you all sit there and listen, but actually engaging with one another in an interactive way, where everyone participates in thinking about foresight and change. And there’s no reason why you can’t do that in a room with 500 people, actually; there are methodologies that allow that. So I’d like to see a redesigned and braver IGF: redesigned in terms of making it much more participative and innovative in the methodologies we use for our sessions, and braver, more willing to actually ask difficult questions around which there is not going to be consensus.


Philipp Schulte: Absolutely. I think that’s a really good proposal: to have not only workshops and lightning talks, but also games. That might be a really good new session format for the next IGF. Are there any other questions in the audience or online? I’m happy to take them now. Otherwise, I would invite my panelists to give their final remarks. Partly you have answered them already, but you might summarize and make it a bit more precise, so I can write it down. So, you articulated wishes for the IGF, but you might also articulate wishes for the German government, or other governments, if we were now to get funding for another process. What are your three main wishes? What should the outcome be? And would you support us? Gbenga Sesan, you want me to start this time? Why don’t we let Julia start? As you want. You want me to start? Yes, please.


Julia Pohler: Okay. That’s a tough question when I have to think about what the German government should be doing. I was actually thinking about what I would like to have this kind of exercise on. But yeah, as Anriette just said, maybe we have to be a bit more courageous in tackling the elephants in the room. So one of the things I would like to see a foresight exercise on is the practices of big technology companies in creating digital barriers and closed ecosystems. There has been a lot of talk recently about the potential fragmentation of our digital space due to governments, government regulation and digital sovereignty, and a lot of fear related to this. Much of it comes out of a particular idea that we need a certain kind of digital space that is even free from governments; I think we just had this discussion. But I would like to see more attention being paid to how the dominant business models of our current platform economy fragment our online spaces and lead to many of the phenomena that Anriette just mentioned. So I think that would be one of the issues that should be tackled. Whether the German government is in a position to do this, I don’t know. But maybe we have to be courageous.


Anriette Esterhuysen: I support what Julia just said. And because the role of states emerged as so important in the exercise, maybe some activity to look at the role of states, but in a more creative way, not just looking at, you know, digital services and digital markets. Often I feel governments think that in their toolbox there is basically repression and regulation. In fact, governments have a huge toolbox that they can use; they can do so many good and engaging things, and exactly, as Philipp is saying, this kind of thing. But maybe to use this in the IGF context, perhaps to work with other governments on what it really is that governments can do to help enable what the multistakeholder ideal represents, which is inclusion, accountability and creativity. In other words, instead of governments always being the silent partner, or sometimes the problematic partner, in this multistakeholder journey.



Julia Pohler

Speech speed

175 words per minute

Speech length

2908 words

Speech time

991 seconds

Strategic foresight creates plausible future scenarios rather than predictions to help prepare for uncertainties

Explanation

Julia explains that strategic foresight is not about predicting the exact future but rather a process that helps deal with uncertainties by exploring possible futures. It’s about thinking about how to prepare for different scenarios rather than trying to guess what will actually happen.


Evidence

The German task force created four distinct scenarios for internet governance in 2040, ranging from continued geoeconomic competition to complete systemic collapse and fragmentation


Major discussion point

Strategic Foresight Methodology and Process


Topics

Legal and regulatory


Agreed with

– Anriette Esterhuysen
– Gbenga Sesan

Agreed on

Strategic foresight methodology is valuable for participatory processes and creative thinking


Disagreed with

– Anriette Esterhuysen

Disagreed on

Level of abstraction in foresight methodology and its practical utility


The German task force developed four distinct scenarios for internet governance in 2040 through structured participatory discussions

Explanation

Julia describes how 15 task force members from diverse German communities (academia, business, civil society, technical community) worked together to develop scenarios. The process involved collecting influential factors, discussing impacts, and drafting four possible futures for the next 15 years.


Evidence

The four scenarios covered: continuation of geoeconomic competition trends, complete systemic collapse and internet fragmentation, total regulation and control of the digital world, and complete transformation away from economic competitive logic toward public goods


Major discussion point

Strategic Foresight Methodology and Process


Topics

Legal and regulatory


Geopolitics and state actions emerged as the primary driving factors across most scenarios, more than anticipated

Explanation

Julia notes that the most important factor in almost all scenarios was the role of states and governments, particularly major powers like the US, China, Russia, and the EU. Geopolitics and geoeconomics were the main drivers for transformations in the scenarios they developed.


Evidence

The scenarios were written before President Trump took office again and before increased geopolitical tensions, yet state actions still emerged as key factors. Julia believes they would have emphasized geopolitics even more if writing today


Major discussion point

Role of States in Internet Governance


Topics

Legal and regulatory


Agreed with

– Anriette Esterhuysen
– Gbenga Sesan

Agreed on

Geopolitical developments and state actions are increasingly important drivers in internet governance


Current geopolitical developments are moving faster than the scenarios predicted, with increased state involvement

Explanation

Julia observes that actual geopolitical developments have already overtaken the scenarios written only 6-8 months ago, with reality moving faster than anticipated. She contrasts this with a 2013-15 foresight process where corporate actors and civil society were the key drivers instead of states.


Evidence

The scenarios were developed before recent geopolitical tensions escalated, and Julia notes that 10 years ago in a similar process, corporate actors and civil society were the main focus rather than states


Major discussion point

Role of States in Internet Governance


Topics

Legal and regulatory


Agreed with

– Anriette Esterhuysen
– Gbenga Sesan

Agreed on

Geopolitical developments and state actions are increasingly important drivers in internet governance


All scenarios except one showed a bleak future for multistakeholderism, with processes being hollowed out or institutionalized beyond meaning

Explanation

Julia explains that in none of the scenarios except the complete transformation one was there a bright future for multistakeholder governance. The scenarios depicted futures where multistakeholder processes are either undermined by corporate and state actors or become so institutionalized that they lose their bottom-up character and meaningful participation.


Evidence

In the scenarios, commitment to multistakeholderism either becomes lip service or processes become so predictable and professionalized that they lose the ability to include divergent voices from mainstream perspectives


Major discussion point

Future of Multistakeholder Governance


Topics

Legal and regulatory


Agreed with

– Anriette Esterhuysen
– Gbenga Sesan

Agreed on

Current multistakeholder governance faces significant challenges and needs renewal


Future foresight processes should include government representatives directly in task forces for better implementation

Explanation

Julia suggests that having people from the ministry directly in the task force would be extremely helpful for future exercises. This would allow better dialogue about where the scenarios lead and help both task force members understand their contributions and help government understand the competing perspectives behind the ideas.


Evidence

Julia notes the current process had difficulties with task force members picturing where they were going, and there was an opaque part regarding how the government would use the developed ideas


Major discussion point

Strategic Foresight Methodology and Process


Topics

Legal and regulatory


Agreed with

– Anriette Esterhuysen
– Audience

Agreed on

Future processes should be more interactive and participatory


Disagreed with

– Philipp Schulte

Disagreed on

Government participation in foresight task forces


Digital barriers created by big tech companies fragment online spaces as much as government regulation

Explanation

Julia argues that there should be more attention paid to how dominant business models of the current platform economy fragment online spaces. She suggests this creates barriers similar to those created by government regulation, challenging the idea that digital spaces need to be free from all government involvement.


Evidence

Julia notes there’s much discussion about potential fragmentation due to governments and digital sovereignty, but less attention to how big tech business models create similar fragmentation effects


Major discussion point

Technology’s Impact and Implementation


Topics

Economic | Legal and regulatory


Future exercises should examine big tech business models and government enabling roles more courageously

Explanation

Julia suggests that future foresight exercises should more courageously tackle issues like the practices of big technology companies in creating digital barriers and closed ecosystems. She acknowledges uncertainty about whether the German government is positioned to do this but emphasizes the need for courage in addressing these topics.


Major discussion point

Practical Applications and Future Directions


Topics

Economic | Legal and regulatory



Anriette Esterhuysen

Speech speed

155 words per minute

Speech length

2510 words

Speech time

969 seconds

Foresight exercises are valuable for the participatory process itself, allowing creative thinking beyond current constraints

Explanation

Anriette explains that while she initially found foresight methodology frustrating in the 1990s, she now sees its value for encouraging creative and innovative thinking in internet governance. She believes the abstraction helps move beyond current boring approaches and enables critical thinking about governance evolution.


Evidence

Anriette contrasts her earlier frustrating experience with foresight in post-apartheid South Africa with her current appreciation, noting that internet governance has become ‘boring’ and needs more creativity


Major discussion point

Strategic Foresight Methodology and Process


Topics

Legal and regulatory


Agreed with

– Julia Pohler
– Gbenga Sesan

Agreed on

Strategic foresight methodology is valuable for participatory processes and creative thinking


Disagreed with

– Julia Pohler

Disagreed on

Level of abstraction in foresight methodology and its practical utility


States have always had profound impact on governance inclusivity and should not be seen as undermining multistakeholder ideals

Explanation

Anriette argues that how states engage with one another has a profound impact on inclusive governance, the strength of civil society, respect for human rights, and democratic institutions. She sees the multistakeholder approach as a reminder that states cannot govern alone, but this does not diminish states' important role.


Evidence

She points to the reluctance to discuss ‘enhanced cooperation’ and notes that multistakeholder governance should connect diverse decision-making processes rather than create a uniform alternative dimension


Major discussion point

Role of States in Internet Governance


Topics

Legal and regulatory | Human rights


Agreed with

– Julia Pohler
– Gbenga Sesan

Agreed on

Geopolitical developments and state actions are increasingly important drivers in internet governance


The multistakeholder approach should focus on connecting diverse decision-making processes rather than creating uniform governance

Explanation

Anriette explains that internet governance is a diverse ecosystem with many types of decision-making processes, some led by governments, others by technical communities or private sector. The multistakeholder approach should remind us to connect these processes and ensure they overlap and engage with one another.


Evidence

She contrasts this with the ‘fairytale notion’ of multistakeholder governance as a perfect alternative dimension, arguing instead for practical connection of existing diverse governance processes


Major discussion point

Future of Multistakeholder Governance


Topics

Legal and regulatory


Early hopes that technology would be an equalizer between rich and poor have not fully materialized

Explanation

Anriette reflects on the naive belief from the WSIS era that access to communications technology would be an equalizer between rich and poor, center and periphery, and different genders. While technology still provides some of these benefits, the hoped-for equalization has not materialized as expected.


Evidence

She references her work building internet connectivity in Africa in the 1980s-2000s and the shift from WSIS to WSIS Plus 20, noting the belief that technology would create engagement and cooperation


Major discussion point

Technology’s Impact and Implementation


Topics

Development | Human rights


Technology interacts complexly with geopolitical conflicts and societal changes rather than operating independently

Explanation

Anriette emphasizes that technology should be viewed both as a force with its own impact and as a sector that interacts with geopolitical conflicts and different forms of societal change. The challenge is understanding how individuals and societies engage with and are changed by technology in different scenarios.


Evidence

She notes that when faced with unpredictability, there’s a tendency to look for institutions with capacity to prevent things from going wrong, but corporations and states are also unpredictable


Major discussion point

Technology’s Impact and Implementation


Topics

Legal and regulatory | Sociocultural


The IGF needs renewal and redesign to tackle difficult questions without seeking consensus on everything

Explanation

Anriette calls for a redesigned and braver IGF that is more participative and innovative in its methodologies, and more willing to ask difficult questions where there won’t be consensus. She argues that the reality is stakeholders don’t all want the same thing and won’t always have consensus.


Evidence

She suggests using scenario games with 500 people in interactive formats rather than traditional panel discussions, and notes the need for an IGF that ‘allows the wind in’ rather than staying in a safe multistakeholder space


Major discussion point

Future of Multistakeholder Governance


Topics

Legal and regulatory


Agreed with

– Julia Pohler
– Gbenga Sesan

Agreed on

Current multistakeholder governance faces significant challenges and needs renewal


Disagreed with

– Gbenga Sesan

Disagreed on

Approach to defining stakeholder goals – dreaming vs. concrete specificity


The IGF should adopt more participative methodologies including scenario games rather than traditional panel formats

Explanation

Anriette proposes collaborating to create games for the next IGF that would engage participants interactively rather than having traditional sessions where panelists talk and audiences listen. She suggests there are methodologies that allow participative engagement even with 500 people.


Evidence

She contrasts current IGF format where people sit and talk while others listen with proposed interactive methodologies that would allow everyone to participate in foresight and change discussions


Major discussion point

Practical Applications and Future Directions


Topics

Legal and regulatory


Agreed with

– Julia Pohler
– Audience

Agreed on

Future processes should be more interactive and participatory



Gbenga Sesan

Speech speed

163 words per minute

Speech length

1762 words

Speech time

646 seconds

The methodology helps organizations adjust strategies as reality unfolds compared to predicted scenarios

Explanation

Gbenga explains that one of the beautiful things about possible futures is that when something happens close to a discussed scenario, organizations have the opportunity to either align with or move away from certain developments. He emphasizes the importance of adjusting strategies as scenarios unfold rather than rigidly following original plans.


Evidence

He gives the example of planning for ‘level seven’ scenarios but needing to adjust when reality reaches ‘level nine,’ noting it would be insanity to take actions planned for level seven when at level nine


Major discussion point

Strategic Foresight Methodology and Process


Topics

Legal and regulatory


Agreed with

– Julia Pohler
– Anriette Esterhuysen

Agreed on

Strategic foresight methodology is valuable for participatory processes and creative thinking


Multistakeholder governance faces threats from those seeking more pragmatic but less inclusive models

Explanation

Gbenga notes that because multistakeholder governance hasn’t lived up to its ideals of equal partnership, some people are advocating for less perfect but more pragmatic models. He sees this as a challenge because while adjustment is necessary, there’s a risk of moving from optimism straight to pessimism without maintaining realistic hope.


Evidence

He acknowledges that not everyone is equal around the multistakeholder table and that the ideal hasn’t worked perfectly, but argues for maintaining a healthy dose of realism while believing things can become better


Major discussion point

Future of Multistakeholder Governance


Topics

Legal and regulatory


Agreed with

– Julia Pohler
– Anriette Esterhuysen

Agreed on

Current multistakeholder governance faces significant challenges and needs renewal


The scenarios can serve as conversation starters but need updating with current geopolitical realities

Explanation

Gbenga believes the report will be useful as a starter for conversations and appreciates both its content and underlying principles. However, he suggests it needs quick updating with new realities, possibly through an addendum, because geopolitical developments have moved faster than anticipated in the scenarios.


Evidence

He notes that geopolitics discussed in the scenarios wasn’t as deep as current experiences, and emphasizes the principle of creating possible futures and adjusting strategies as developments emerge


Major discussion point

Practical Applications and Future Directions


Topics

Legal and regulatory


Stakeholders must maintain optimism and define what they want while being realistic about challenges

Explanation

Gbenga emphasizes the importance of coming to the table with desired scenarios and ideals, even when faced with frustrating realities. He uses the analogy of sailing against the wind – you can either submit to the wind’s direction or calculate the right approach to reach your desired destination despite opposition.


Evidence

He references his work on digital rights in Africa where conversations with governments often involve clampdowns, but having a clear desired destination helps create pathways from current reality to goals. He also mentions the ‘Internet We Want’ paper from the leadership panel


Major discussion point

Practical Applications and Future Directions


Topics

Human rights | Legal and regulatory


Disagreed with

– Anriette Esterhuysen

Disagreed on

Approach to defining stakeholder goals – dreaming vs. concrete specificity



Audience

Speech speed

152 words per minute

Speech length

890 words

Speech time

351 seconds

Reports should be made accessible and used in interactive formats like games for broader stakeholder engagement

Explanation

Professor Roberta Haar from Maastricht University proposes collaboration to adapt the scenario data into games for policy stakeholder engagement. She describes their successful experience with scenario testing workshops and games developed with the EU Commission’s Joint Research Center, suggesting this approach could make the German scenarios more interactive and useful.


Evidence

She provides examples of their scenario games on military AI played at Erasmus University with extremely good results, and mentions upcoming workshops in Rome, Helsinki, and Brussels


Major discussion point

Practical Applications and Future Directions


Topics

Legal and regulatory


Agreed with

– Julia Pohler
– Anriette Esterhuysen

Agreed on

Future processes should be more interactive and participatory


Governments need enhanced cooperation mechanisms as they cannot solve digital issues alone

Explanation

Bertrand de la Chapelle argues that geopolitical tensions show that governments alone cannot solve digital problems. He notes the irony that the only agreement reached among all governments on digital issues in the 20 years since WSIS was a cybercrime convention sponsored by a country heavily involved in cybercrime, and he emphasizes the need for multistakeholder environments that bring different actors together.


Evidence

He points to the lack of government agreements on digital issues except the cybercrime convention, and notes the limitations of the multilateral system while emphasizing the fundamental importance of preserving states’ role


Major discussion point

Role of States in Internet Governance


Topics

Legal and regulatory | Cybersecurity



Philipp Schulte

Speech speed

150 words per minute

Speech length

1668 words

Speech time

663 seconds

States should act as enablers ensuring all stakeholder groups can perform their roles effectively

Explanation

Philipp compares the state's responsibility in the multistakeholder environment to tending a garden: ensuring that each stakeholder group can perform the role it wants to play and performs best. He sees the state's role as facilitative rather than directive in multistakeholder governance.


Evidence

He uses the metaphor of the state as a gardener in a garden of multistakeholders, responsible for creating conditions where all groups can flourish in their respective roles


Major discussion point

Role of States in Internet Governance


Topics

Legal and regulatory


Disagreed with

– Julia Pohler

Disagreed on

Government participation in foresight task forces


Agreements

Agreement points

Strategic foresight methodology is valuable for participatory processes and creative thinking

Speakers

– Julia Pohler
– Anriette Esterhuysen
– Gbenga Sesan

Arguments

Strategic foresight creates plausible future scenarios rather than predictions to help prepare for uncertainties


Foresight exercises are valuable for the participatory process itself, allowing creative thinking beyond current constraints


The methodology helps organizations adjust strategies as reality unfolds compared to predicted scenarios


Summary

All speakers agree that strategic foresight is a valuable methodology that helps stakeholders think creatively about possible futures and prepare for uncertainties, with the process itself being as important as the outcomes


Topics

Legal and regulatory


Geopolitical developments and state actions are increasingly important drivers in internet governance

Speakers

– Julia Pohler
– Anriette Esterhuysen
– Gbenga Sesan

Arguments

Geopolitics and state actions emerged as the primary driving factors across most scenarios, more than anticipated


States have always had profound impact on governance inclusivity and should not be seen as undermining multistakeholder ideals


Current geopolitical developments are moving faster than the scenarios predicted, with increased state involvement


Summary

There is consensus that states and geopolitical factors have become more prominent in internet governance than previously anticipated, with this trend accelerating faster than expected


Topics

Legal and regulatory


Current multistakeholder governance faces significant challenges and needs renewal

Speakers

– Julia Pohler
– Anriette Esterhuysen
– Gbenga Sesan

Arguments

All scenarios except one showed a bleak future for multistakeholderism, with processes being hollowed out or institutionalized beyond meaning


The IGF needs renewal and redesign to tackle difficult questions without seeking consensus on everything


Multistakeholder governance faces threats from those seeking more pragmatic but less inclusive models


Summary

All speakers acknowledge that multistakeholder governance is facing serious challenges and requires significant reform to remain relevant and effective


Topics

Legal and regulatory


Future processes should be more interactive and participatory

Speakers

– Julia Pohler
– Anriette Esterhuysen
– Audience

Arguments

Future foresight processes should include government representatives directly in task forces for better implementation


The IGF should adopt more participative methodologies including scenario games rather than traditional panel formats


Reports should be made accessible and used in interactive formats like games for broader stakeholder engagement


Summary

There is agreement that future governance processes and forums should move beyond traditional formats to more interactive, participatory approaches that engage all stakeholders more meaningfully


Topics

Legal and regulatory


Similar viewpoints

Both speakers believe that future governance discussions need to be more courageous in addressing difficult topics, including the role of big tech companies and challenging existing assumptions about multistakeholder processes

Speakers

– Julia Pohler
– Anriette Esterhuysen

Arguments

Future exercises should examine big tech business models and government enabling roles more courageously


The IGF needs renewal and redesign to tackle difficult questions without seeking consensus on everything


Topics

Legal and regulatory | Economic


Both speakers acknowledge that early optimistic visions about technology’s impact haven’t fully materialized, but emphasize the importance of maintaining hope and clear goals while being realistic about current challenges

Speakers

– Anriette Esterhuysen
– Gbenga Sesan

Arguments

Early hopes that technology would be an equalizer between rich and poor have not fully materialized


Stakeholders must maintain optimism and define what they want while being realistic about challenges


Topics

Development | Human rights


Both speakers view the role of states and multistakeholder governance as facilitative and connecting, rather than controlling or replacing existing governance mechanisms

Speakers

– Anriette Esterhuysen
– Philipp Schulte

Arguments

The multistakeholder approach should focus on connecting diverse decision-making processes rather than creating uniform governance


States should act as enablers ensuring all stakeholder groups can perform their roles effectively


Topics

Legal and regulatory


Unexpected consensus

States playing a more prominent role in internet governance is not necessarily negative

Speakers

– Julia Pohler
– Anriette Esterhuysen
– Philipp Schulte

Arguments

Geopolitics and state actions emerged as the primary driving factors across most scenarios, more than anticipated


States have always had profound impact on governance inclusivity and should not be seen as undermining multistakeholder ideals


States should act as enablers ensuring all stakeholder groups can perform their roles effectively


Explanation

Despite the traditional internet governance community’s wariness of state involvement, there was unexpected consensus that increased state engagement could be positive if states act as enablers rather than controllers, and that their involvement was perhaps inevitable and necessary


Topics

Legal and regulatory


The need for more courageous and direct discussions in multistakeholder forums

Speakers

– Julia Pohler
– Anriette Esterhuysen
– Gbenga Sesan

Arguments

Future exercises should examine big tech business models and government enabling roles more courageously


The IGF needs renewal and redesign to tackle difficult questions without seeking consensus on everything


Stakeholders must maintain optimism and define what they want while being realistic about challenges


Explanation

There was unexpected consensus that the multistakeholder community needs to move away from seeking consensus on everything and instead engage in more direct, potentially confrontational discussions about difficult issues like big tech power and government overreach


Topics

Legal and regulatory | Economic


Overall assessment

Summary

The speakers demonstrated strong consensus on several key issues: the value of strategic foresight methodology, the increasing importance of geopolitical factors in internet governance, the need for multistakeholder governance reform, and the importance of more participatory processes. There was also unexpected agreement that increased state involvement isn’t necessarily negative if properly channeled, and that the community needs more courageous discussions about difficult topics.


Consensus level

High level of consensus with constructive disagreement mainly on implementation details rather than fundamental principles. This suggests the internet governance community is ready for significant reforms and new approaches, with broad agreement on the direction of needed changes. The consensus around the need for renewal and more direct engagement indicates potential for meaningful evolution of governance processes.


Differences

Different viewpoints

Level of abstraction in foresight methodology and its practical utility

Speakers

– Anriette Esterhuysen
– Julia Pohler

Arguments

Foresight exercises are valuable for the participatory process itself, allowing creative thinking beyond current constraints


Strategic foresight creates plausible future scenarios rather than predictions to help prepare for uncertainties


Summary

Anriette acknowledges that abstraction is still an issue with foresight methodology and questions whether the report itself will be particularly useful, emphasizing that the exercise process is more valuable than the output. Julia focuses more on the methodology’s value in creating plausible scenarios for preparation, suggesting the reports can be practically useful for decision-making.


Topics

Legal and regulatory


Approach to defining stakeholder goals – dreaming vs. concrete specificity

Speakers

– Gbenga Sesan
– Anriette Esterhuysen

Arguments

Stakeholders must maintain optimism and define what they want while being realistic about challenges


The IGF needs renewal and redesign to tackle difficult questions without seeking consensus on everything


Summary

Gbenga emphasizes the importance of maintaining optimism and ‘dreaming’ about desired outcomes even when facing challenges, while Anriette argues that it’s not just about dreaming but about concrete things, and criticizes that multistakeholder fora often produce watered-down statements to avoid offending stakeholders. She wants more specific, potentially controversial positions.


Topics

Legal and regulatory | Human rights


Government participation in foresight task forces

Speakers

– Julia Pohler
– Philipp Schulte

Arguments

Future foresight processes should include government representatives directly in task forces for better implementation


States should act as enablers ensuring all stakeholder groups can perform their roles effectively


Summary

Julia advocates for direct government participation in task forces to improve dialogue and implementation, while Philipp expresses reluctance about government involvement, stating they were hesitant to be on the task force to avoid having a government-steered process that would use stakeholder scenarios as lip service.


Topics

Legal and regulatory


Unexpected differences

Value and utility of the foresight report output versus process

Speakers

– Anriette Esterhuysen
– Julia Pohler
– Gbenga Sesan

Arguments

Foresight exercises are valuable for the participatory process itself, allowing creative thinking beyond current constraints


Strategic foresight creates plausible future scenarios rather than predictions to help prepare for uncertainties


The scenarios can serve as conversation starters but need updating with current geopolitical realities


Explanation

Unexpectedly, the panelists who participated in the foresight exercise disagreed on its practical utility. Anriette, despite finding the process valuable, questioned whether the report itself would be useful and emphasized that participatory processes like this are mainly valuable to participants. Gbenga was more optimistic about the report’s utility as conversation starters, while Julia focused on the methodology’s value. This disagreement is unexpected because all three were involved in the same process but came away with different assessments of its practical value.


Topics

Legal and regulatory


Overall assessment

Summary

The main areas of disagreement centered on methodological approaches to foresight exercises, the balance between idealism and pragmatism in multistakeholder governance, and the appropriate level of government involvement in stakeholder processes. Despite participating in the same foresight exercise, speakers had different views on its practical utility and implementation.


Disagreement level

The level of disagreement was moderate and constructive rather than fundamental. Speakers shared common concerns about the future of multistakeholder governance and the increasing role of states, but differed on approaches and solutions. The disagreements reflect different perspectives on how to strengthen and evolve internet governance rather than fundamental opposition to shared goals. This suggests a healthy debate within the community about methods and strategies rather than irreconcilable differences on core principles.


Partial agreements



Takeaways

Key takeaways

Strategic foresight is a valuable methodology for exploring plausible futures rather than making predictions, helping stakeholders prepare for uncertainties and disruptions


Geopolitics and state actions have emerged as the primary driving factors in internet governance scenarios, with current developments moving faster than anticipated


All developed scenarios except one showed a bleak future for multistakeholder governance, with processes being either hollowed out or over-institutionalized


The multistakeholder approach should focus on connecting diverse decision-making processes rather than creating uniform governance structures


States play a crucial enabling role in internet governance and should not be viewed as undermining multistakeholder ideals


The participatory process of developing scenarios is often more valuable than the final report itself


Technology’s role as an equalizer has not materialized as hoped, and big tech business models create digital barriers that fragment online spaces


The IGF needs renewal and redesign to become more participative, innovative, and willing to tackle difficult questions without requiring consensus


Resolutions and action items

The German government will proceed with publishing the strategic foresight report under the new ministry responsible for strategic foresight


Collaboration proposed between the German foresight project and the Remit Research project to develop scenario testing workshops and games


Future foresight processes should include government representatives directly in task forces for better implementation and understanding


The IGF should consider adopting new session formats including scenario games rather than traditional panel discussions


The scenarios need updating with current geopolitical realities through addendums or annexes


Stakeholders should work together to define ‘what we want’ in concrete terms rather than abstract ideals


Unresolved issues

How to make foresight reports more accessible and useful for daily work beyond the participatory process


The challenge of maintaining stakeholder engagement throughout lengthy foresight exercises


How to balance the need for government involvement with maintaining genuine multistakeholder processes


The tension between being realistic about current challenges while maintaining optimism for desired outcomes


How to address the role of big technology companies in fragmenting digital spaces


The future mandate and structure of the IGF, particularly regarding the 2026 discussions


How to make multistakeholder forums more willing to tackle controversial issues without losing participants


Suggested compromises

Governments should act as enablers ensuring all stakeholder groups can perform their roles effectively, rather than dominating processes


Multistakeholder governance should embrace diverse decision-making processes that overlap and engage with each other rather than seeking uniform approaches


Future scenario exercises should balance German/European perspectives with global and diverse viewpoints through targeted interviews and validation


The IGF should become ‘braver’ in asking difficult questions while maintaining its inclusive character


Foresight exercises should examine both government regulation and big tech business models as sources of digital fragmentation


Strategic foresight should be used as an ongoing adjustment tool rather than a one-time prediction exercise


Thought provoking comments

I think my only sort of one, I would have liked to be part of a focus group or a group at some point. I think I found it, I would have found it more interesting in some ways to have a group dynamic. And then I think my only other question about it as well is the way in which you treat multi-stakeholder in how you are approaching the future of Internet governance. And I think in that sense, the study itself, I think, perhaps did not unpack or deconstruct what multi-stakeholder means.

Speaker

Anriette Esterhuysen


Reason

This comment was insightful because it identified a fundamental methodological limitation and conceptual weakness in the foresight exercise. Esterhuysen pointed out that the study treated ‘multi-stakeholder’ as a one-dimensional concept without deconstructing its complexity, which is crucial given that multi-stakeholderism is central to internet governance discussions.


Impact

This critique shifted the conversation toward examining the limitations of current multi-stakeholder approaches and sparked deeper reflection on whether the focus should be on ‘multi-stakeholder governance’ or simply ‘effective, accountable governance.’ It also influenced Julia’s later admission that multi-stakeholderism appeared to have a bleak future in most scenarios.


So I think when kind of I we get to the stage where we really wrote the scenarios, and I looked at them with some distance after a while, I think what strikes me most is that the most important factor in almost all scenarios… is actually the role of states and the role of governments… And we wrote these scenarios before President Trump took office again. And before we kind of saw this increase of geopolitical tensions… So I think Today we would have gone even further in emphasizing the role of geopolitics and geoeconomics… the reality is actually moving faster than we thought it would.

Speaker

Julia Pohler


Reason

This observation was particularly thought-provoking because it revealed how rapidly geopolitical realities were outpacing even recent foresight exercises. It highlighted the dominance of state actors over other stakeholders in shaping internet governance futures, which challenges traditional multi-stakeholder ideals.


Impact

This comment fundamentally reframed the discussion around the central role of states in internet governance, leading other panelists to acknowledge this reality rather than resist it. It sparked a conversation about how to work with, rather than around, state power in multi-stakeholder processes.


I think in all of these scenarios we ended up writing possible futures in where multi-stakeholder processes are either being hollowed out or kind of completely undermined by corporate actors and state actors… So I would say that in all of these scenarios, somehow multi-stakeholderism and governance has outlived its promises.

Speaker

Julia Pohler


Reason

This was a stark and honest assessment that challenged the fundamental assumptions of the internet governance community. The admission that their scenarios showed multi-stakeholderism failing across different futures was a sobering reality check for the field.


Impact

This comment created a turning point in the discussion, moving from abstract scenario planning to concrete concerns about the viability of current governance models. It prompted other speakers to defend and redefine multi-stakeholderism, leading to more nuanced discussions about what effective governance actually means.


So much as we like this, I don’t know, there’s this kind of fairytale notion of multi-stakeholder governance as this alternative dimension of perfect governance. I mean, I see it as a way, a way of arriving at more accountable, inclusive, effective governance. And states are a big part of that.

Speaker

Anriette Esterhuysen


Reason

This comment was insightful because it reframed multi-stakeholder governance from an idealistic end goal to a pragmatic methodology. It challenged the community’s tendency to romanticize multi-stakeholderism while acknowledging the legitimate and necessary role of states.


Impact

This reframing helped move the conversation away from defending an idealized model toward discussing practical approaches to inclusive governance. It provided a more mature perspective that influenced subsequent discussions about how different stakeholders can work together effectively.


I think we do need to think more creatively. And I’m just going to give one example… to see the European states in particular, shell-shocked, because it was so difficult for them to operate in this context, when a long-time partner in the Internet governance and World Summit process, the US, was moving outside or taking on a different position. My first thought during that entire week was, I wish these governments had all done some foresight work.

Speaker

Anriette Esterhuysen


Reason

This concrete example powerfully illustrated the practical value of foresight exercises. By describing how unprepared governments were for geopolitical shifts, it demonstrated why scenario planning is essential for effective governance and diplomacy.


Impact

This example shifted the discussion from abstract methodology to concrete applications, helping participants understand the real-world value of foresight work. It reinforced the argument for more widespread adoption of these techniques in government and international relations.


I think sometimes we say what we want and particularly when we try and say it in a multi-stakeholder way, you know, it sounds like some kind of sort of watered down set of wedding vows… I want fair tax payment by big tech so that countries who need revenue to actually build a fiber optic backbone… I want data flows that are not based on an extractive sort of colonial type model… but it’s almost impossible to say those things in the context of so many multi-stakeholder fora because you don’t want to offend the private sector.

Speaker

Anriette Esterhuysen


Reason

This comment was particularly provocative because it exposed the tendency of multi-stakeholder processes to avoid difficult topics in favor of consensus-building, resulting in bland, ineffective outcomes. The specific examples made abstract governance discussions concrete and political.


Impact

This critique sparked a broader conversation about the need for ‘braver’ multi-stakeholder processes that can tackle controversial issues. It influenced the final recommendations about redesigning the IGF to be more participatory and willing to address difficult questions without requiring consensus.


Overall assessment

These key comments fundamentally shifted the discussion from a celebratory presentation of a foresight exercise to a critical examination of the current state and future viability of internet governance models. The conversation evolved through several phases: initial methodological critiques led to acknowledgment of the dominant role of states, which prompted honest assessment of multi-stakeholderism’s limitations, ultimately resulting in calls for more pragmatic, brave, and creative approaches to governance. The panelists’ willingness to challenge sacred assumptions about multi-stakeholder governance created space for more mature and realistic discussions about how to achieve effective, inclusive governance in a rapidly changing geopolitical landscape. The discussion demonstrated how foresight exercises can serve not just as planning tools, but as catalysts for fundamental reconsideration of existing approaches and assumptions.


Follow-up questions

How can the scenarios be updated to reflect rapidly changing geopolitical realities, particularly after recent political developments?

Speaker

Gbenga Sesan


Explanation

The scenarios were developed before recent geopolitical changes and may need updating as reality is moving faster than anticipated, requiring an addendum or annex to maintain relevance


How can multi-stakeholder processes be transformed to make them more meaningful and avoid being hollowed out or institutionalized to the point of losing their bottom-up character?

Speaker

Julia Pohler


Explanation

All scenarios except one showed a bleak future for multi-stakeholderism, suggesting current models may be outliving their promises and need transformation


How can strategic foresight exercises be made more participatory and engaging, potentially through gaming methodologies?

Speaker

Professor Roberta Haar


Explanation

There’s interest in adapting the scenario data into interactive games and collaborative workshops to make policy discussions more engaging and meaningful


What concrete actions should the German government take based on the scenarios developed, and how can this be made more transparent to stakeholders?

Speaker

Julia Pohler


Explanation

There’s a need for clearer understanding of how the scenarios will be implemented and what specific policy actions will result from the foresight exercise


How can government representatives be better integrated into future foresight task forces to improve mutual understanding?

Speaker

Julia Pohler


Explanation

Having government officials directly participate in scenario development could help both stakeholders understand the impact of their contributions and help governments understand different perspectives


What is the future mandate and structure of the IGF, and when will this be discussed?

Speaker

Bertrand de la Chapelle


Explanation

The evolution of IGF’s institutional arrangements needs to be addressed, particularly in the context of WSIS+20 discussions and the need for more effective multi-stakeholder governance


How can the IGF be redesigned to be more participative, innovative, and willing to tackle difficult questions without consensus?

Speaker

Anriette Esterhuysen


Explanation

Current IGF formats may be too institutionalized and risk-averse, requiring new methodologies and greater courage to address contentious issues effectively


What role should governments play in creating enabling environments for multi-stakeholder governance beyond just regulation and repression?

Speaker

Anriette Esterhuysen


Explanation

Governments have broader toolkits available and could be more creative partners in enabling inclusive, accountable, and creative governance processes


How do dominant technology companies’ business models fragment digital spaces and create barriers, and what are the implications for internet governance?

Speaker

Julia Pohler


Explanation

There’s insufficient attention to how platform economy business models contribute to digital fragmentation, compared to focus on government-driven fragmentation


How can foresight methodologies better account for ‘black swan’ events and unexpected developments that are difficult to anticipate?

Speaker

Bertrand de la Chapelle


Explanation

Historical examples show that major technological and social changes often come from unexpected directions, challenging the predictive capacity of scenario planning


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

WS #395 Applying International Law Principles in the Digital Space


Session at a glance

Summary

This workshop session explored how existing international law frameworks can be applied to protect human rights in digital spaces, examining the intersection of international human rights law, international humanitarian law, and international criminal law. The discussion was moderated by Sanhawan Srisod and featured panelists from various organizations working on digital rights and international law.


Chantal Joris from Article 19 opened by explaining that while there is broad consensus that international law applies to cyberspace, the practical implementation remains controversial and fragmented. She highlighted how different legal frameworks—human rights law, humanitarian law, and criminal law—each address similar issues like incitement to violence and hate speech, but often operate separately without clear coordination. This fragmentation creates protection gaps for affected communities who may not receive adequate remedies regardless of which legal framework technically applies.


Tiesa Meccrewfy from 7amleh provided a concrete example from Gaza, describing how tech companies have failed to meet their obligations under the UN Guiding Principles on Business and Human Rights. She documented systematic censorship of Palestinian voices on social media platforms, disproportionate content moderation, and the role of online platforms in potentially facilitating genocide through hate speech and dehumanization campaigns.


Nieves Molina from the Danish Institute for Human Rights emphasized the growing accountability gap in digital spaces, noting that crimes enabled by digital technology are often treated with “exceptionalism” rather than being addressed through existing international law frameworks. She pointed to the blurred relationship between states and corporations as creating difficulties in determining responsibility levels, and suggested that fragmentation enables forum shopping and avenues for impunity.


Francisco Brito Cruz, participating online from Brazil, discussed the challenge of translating corporate responsibility principles into practical platform accountability. He emphasized that both action and inaction by platforms can produce human rights violations, and highlighted Brazil’s recent experience with blocking Platform X as an example of how complex these accountability questions become in practice. He stressed the importance of building methodologies for human rights due diligence that include transparency, monitoring, and proper expertise.


Mikiko Otani, former chair of the UN Committee on the Rights of the Child, brought attention to children’s specific vulnerabilities online while emphasizing that children want to use digital spaces safely rather than be excluded from them entirely. She highlighted the Committee’s 2021 General Comment requiring states and businesses to integrate child rights impact assessments into digital product design and emphasized the importance of hearing directly from children about their experiences and needs.


The discussion revealed several key challenges: the fragmentation of international law creates protection gaps and accountability loopholes; the close relationship between states and corporations complicates responsibility attribution; existing legal frameworks may be sufficient but lack proper implementation and enforcement mechanisms; and there is tension between protecting rights and avoiding overregulation that could harm freedom of expression. Participants concluded that while new legislation may not be the answer, better coordination between existing legal frameworks and stronger implementation of current obligations is essential for protecting human rights in digital spaces.


Keypoints

## Major Discussion Points:


– **Fragmentation of International Law in Digital Spaces**: The discussion highlighted how different bodies of international law (human rights law, humanitarian law, criminal law) operate separately when addressing digital issues, creating protection gaps and accountability challenges. While there’s consensus that international law applies online, the practical implementation remains fragmented and uncoordinated.


– **Platform Accountability and Corporate Responsibility**: Panelists examined the challenges of holding tech companies accountable under international frameworks like the UN Guiding Principles on Business and Human Rights, particularly regarding content moderation, censorship, and their role in enabling human rights violations during conflicts like in Gaza.


– **State-Corporate Collaboration and Double Standards**: The discussion addressed how states and platforms often work together in ways that can violate human rights, such as government requests for content takedowns that silence dissent while allowing harmful content to proliferate, creating a blurred line of responsibility.


– **Protection of Vulnerable Groups, Especially Children**: The conversation emphasized the disproportionate impact of digital harms on specific populations, particularly children, and the need for comprehensive approaches that consider evolving capacities and the seamless nature of online/offline experiences for young users.


– **Gaps in Access to Justice and Remedies**: Panelists discussed the growing accountability gap for victims of digital human rights violations, noting that crimes enabled by digital technology are often treated with “exceptionalism” rather than being addressed through existing international legal frameworks.


## Overall Purpose:


The workshop aimed to explore how existing international law can provide a regulatory framework to protect individuals and communities from human rights abuses in digital spaces. The session was part of a broader Digital Democracy Initiative project to clarify how international law should be progressively interpreted to address ambiguities and contradictions in the digital context.


## Overall Tone:


The discussion maintained a serious, academic tone throughout, with participants demonstrating deep expertise and concern about the challenges presented. The tone was collaborative and constructive, with panelists building on each other’s points rather than disagreeing. There was an underlying sense of urgency about addressing these issues, particularly when discussing real-world examples like the situation in Gaza and conflicts in India-Pakistan. The conversation remained professional and solution-oriented, even when addressing complex and sensitive topics involving state accountability and corporate responsibility.


Speakers

**Speakers from the provided list:**


– **Sanhawan Srisod** – Moderator of the session, Senior legal advisor at the International Commission of Jurists


– **Chantal Joris** – Senior legal advisor at Article 19


– **Tiesa Meccrewfy** – EU advocacy officer of 7amleh (Palestinian digital rights group)


– **Francisco Brito Cruz** – Law professor at Fundação Getulio Vargas in Brazil, consultant at the OHCHR B-Tech project (formerly Executive Director of Internet Lab)


– **Nieves Molina** – Chief advisor of tech business and human rights at the Danish Institute for Human Rights


– **Mikiko Otani** – Former chair of the UN committee on the right of the child and ICJ commissioner


– **Audience** – Various audience members asking questions during the Q&A session


**Additional speakers:**


– **Nadim Nashif** – Founder and director of 7amleh (Palestinian digital rights group) – mentioned as originally scheduled panelist but unable to travel due to ongoing war between Israel and Iran


Full session report

# International Law in Digital Spaces: Addressing Fragmentation and Accountability Challenges


## Executive Summary


This workshop session, moderated by Sanhawan Srisod from the International Commission of Jurists, examined the critical intersection of international law and digital rights protection. The discussion brought together legal experts, digital rights advocates, and human rights practitioners to explore how existing international legal frameworks can be applied to protect individuals and communities from human rights abuses in digital spaces. The session was part of a broader Digital Democracy Initiative project co-implemented by ICJ and the Danish Institute for Human Rights, aimed at clarifying how international law should be progressively interpreted to address ambiguities and contradictions in the digital context.


The conversation revealed ongoing challenges in translating legal principles into effective protection mechanisms, particularly given the rapid pace of technological advancement and the complex relationships between states and corporations in digital governance. Panelists identified significant gaps between theoretical legal frameworks and practical implementation, with particular attention to fragmentation across different bodies of international law.


## Opening Framework and Context


Moderator Sanhawan Srisod opened the session by explaining that the workshop was part of the Digital Democracy Initiative, a project co-implemented by ICJ and the Danish Institute for Human Rights. She noted that Nadim Nashif from 7amleh was originally scheduled to participate but could not travel due to the Israel-Iran war, and was replaced by Tiesa Meccrewfy from the same organization.


The session followed a structured format with 25-50 minutes of presentations from five panelists, followed by 25 minutes of Q&A and a brief wrap-up. Participants included both in-person attendees and online participants, with Francisco Brito Cruz joining remotely from São Paulo.


## Fragmentation of International Legal Frameworks


Chantal Joris from Article 19 established the foundational framework for the discussion by addressing the fragmentation challenge in applying international law to digital spaces. She explained that while there is consensus that international law applies to cyberspace, different bodies of international law—including international human rights law, international humanitarian law, and international criminal law—each address similar issues such as incitement to violence and hate speech, but operate separately without clear coordination.


Joris emphasized that international human rights law has developed the most advanced understanding of digital application, but noted the challenge that technology advances much more quickly than the ability of international bodies and domestic parliaments to create appropriate rules. She observed that we are “not in the golden age of treaty making” and must therefore rely on existing international law rules and their interpretation.


From the perspective of impacted communities, Joris noted, “it is not that relevant whether it’s humanitarian law or human rights law and which one is lex specialis.” The academic distinctions between legal frameworks become less meaningful when victims cannot access adequate remedies regardless of which legal framework technically applies to their situation.


## Palestinian Digital Rights and Platform Accountability


Tiesa Meccrewfy from 7amleh, a Palestinian digital rights organization, provided concrete examples of systematic censorship of Palestinian voices on social media platforms. She referenced a report published by 7amleh on “digital rights, genocide and big tech accountability in Gaza,” documenting discriminatory content moderation policies that suppress Palestinian narratives while allowing harmful content to proliferate.


Meccrewfy highlighted how government requests for content takedowns complicate transparency and introduce bias into content moderation processes. She described how online platforms can play a role in potentially facilitating serious human rights violations through hate speech and dehumanization campaigns during conflicts, arguing that tech companies have clear obligations under the UN Guiding Principles on Business and Human Rights that are not being effectively implemented.


## State-Corporate Relationships and Accountability Gaps


Nieves Molina from the Danish Institute for Human Rights addressed the growing accountability gap in digital spaces, noting that crimes enabled by digital technology are often treated with “exceptionalism” rather than being addressed through existing international law frameworks. She identified the blurred relationship between states and corporations as creating difficulties in determining responsibility levels.


Molina highlighted that this close relationship creates a “blurred reality regarding levels of responsibility,” making it challenging to attribute accountability for digital harms. She suggested that fragmentation enables forum shopping and creates avenues for impunity, allowing both state and corporate actors to evade responsibility.


Significantly, Molina raised the possibility that international law itself may have become a victim of misinformation campaigns, observing that international law and human rights content may have been targeted by campaigns that relativize international law and question its usefulness. She also noted that some scholars are discussing expanding international criminal responsibility to include legal personalities, potentially bringing corporations under frameworks like the ICC-Rome Statute.


## Corporate Responsibility and Platform Governance


Francisco Brito Cruz, participating online from São Paulo, disclosed that he is no longer Executive Director of Internet Lab but is now a law professor and consultant for B-Tech (Business, Technology and Human Rights), a project within the Office of the High Commissioner for Human Rights. He emphasized a crucial paradox in digital governance: “inaction can produce violations, but action can produce violations as well.”


Brito Cruz stressed the importance of building specific methodologies for human rights due diligence that go beyond general principles, arguing for transparency, monitoring, and proper expertise in these processes. He highlighted Brazil’s recent experience with blocking Platform X and the arrest of Telegram’s CEO in Paris as examples of how complex these accountability questions become in practice, demonstrating the need for stakeholder engagement, participation, and transparency tools in human rights due diligence processes.


## Children’s Rights in Digital Spaces


Mikiko Otani, former chair of the UN Committee on the Rights of the Child, brought attention to children’s specific vulnerabilities online while emphasizing that children want to use digital spaces safely rather than be excluded from them entirely. She cited a 2017 UNICEF statistic that one in three internet users worldwide is a child, making the digital environment crucial for children’s rights realization.


Otani explained that the Committee’s 2021 General Comment, adopted after a two-year consultation process that included hearing directly from children, requires states and businesses to integrate child rights impact assessments into digital product design, development, and operation. She challenged traditional protective approaches by noting that children themselves reject overprotection in favor of safe access.


Crucially, Otani observed that for children, “offline and online is seamless. It’s not so easy to differentiate what is the online and offline for the children.” Children, she reported, “want to use the online space safely” rather than being totally protected or excluded from digital spaces.


## Audience Questions and Discussion


The Q&A session included several specific questions that highlighted practical challenges:


**India-Pakistan Example**: Ulvia from London Story described how during tensions between India and Pakistan, platforms amplified violent content while 8,000 accounts with alternative narratives were taken down, illustrating how states can use content moderation to silence dissent while failing to stop harmful content.


**Domestic vs International Law Coordination**: An audience member asked about coordination between domestic and international legal frameworks, highlighting the complexity of multi-level governance in digital spaces.


**Separate Digital Rights Legislation**: Ana Galate from UC Berkeley questioned whether separate comprehensive legislation for digital rights is needed or if existing international law frameworks are sufficient with better implementation.


**Binding Frameworks**: Christian Fazili from DRC asked about the need for new binding frameworks and due diligence obligations, reflecting ongoing debates about whether current voluntary approaches are adequate.


## Key Challenges Identified


The discussion revealed several persistent challenges:


**Legal Fragmentation**: Different bodies of international law address similar digital issues separately, creating protection gaps and coordination difficulties.


**Technology-Law Gap**: Technology advances faster than legal frameworks can adapt, creating ongoing governance challenges.


**Attribution Problems**: The blurred relationship between states and corporations makes it difficult to determine responsibility for digital harms.


**Implementation Gaps**: While legal principles exist, translating them into effective protection mechanisms remains challenging.


**Remedies and Access**: Ensuring access to justice for victims of digital rights violations across fragmented legal systems remains problematic.


## Emerging Approaches and Suggestions


Panelists suggested several potential approaches:


**Focus on Existing Law**: Rather than creating entirely new treaty frameworks, emphasis should be placed on operationalizing and clarifying existing international law.


**Iterative Approach**: Adopting an approach of “intervention and testing” for digital legislation, acknowledging that some regulatory innovations may require adjustment.


**UN Guiding Principles**: Using the UN Guiding Principles on Business and Human Rights as an operative layer to navigate tensions between different international law frameworks.


**Transparency Requirements**: Developing transparency requirements for both state and corporate actions in digital spaces to enable proper scrutiny.


**Inclusive Consultation**: Ensuring consultation processes include diverse voices, including children, marginalized communities, and regional perspectives.


## Conclusion


The workshop session highlighted both the complexity and urgency of applying international law to digital spaces. While existing international legal frameworks provide a foundation for protecting human rights online, practical implementation faces significant challenges due to fragmentation, rapid technological change, and complex state-corporate relationships.


The discussion demonstrated that current approaches to international law in digital spaces face implementation gaps that affect real people and communities. The panelists’ insights suggest that while new legislation may not be the primary solution, better coordination between existing legal frameworks and stronger implementation of current obligations is essential.


The conversation emphasized that protecting human rights in digital spaces requires moving beyond technical legal analysis to acknowledge the human impact of these challenges. The path forward requires not only legal innovation but also political will to address the accountability gaps that currently allow both state and corporate actors to evade responsibility for digital human rights violations.


The session concluded with recognition that effective digital governance requires concrete mechanisms for accountability that can keep pace with technological advancement while preserving fundamental rights and freedoms, and that this work must center the voices and experiences of affected communities.


Session transcript

Sanhawan Srisod: Good afternoon. For those who join us here both in person and online, welcome to the workshop session Applying International Law Principles in the Digital Space. My name is Sanhawan Srisod, I’m a senior legal advisor at the International Commission of Jurists, and I’m going to moderate the session today. So let me begin with the purpose of today’s session. The title is Applying International Law Principles in the Digital Space, and what we’re going to discuss today is how existing international law provides a regulatory framework to protect individuals and communities from human rights abuses in the digital space. We’re also going to discuss how different bodies of law interact in this increasingly complex digital world. We will also touch upon state obligations, corporate responsibility, and evolving directions of accountability for both state and corporate actors under international law, and discuss recent developments and remaining gaps as well. This discussion is also part of a broader project co-implemented by the International Commission of Jurists and the Danish Institute for Human Rights, who is our co-partner today, under the Digital Democracy Initiative. Under this initiative, we, together with a group of experts, some of whom are also among the panelists today, drafted a set of principles that seek to clarify how international law should be progressively interpreted and to address some ambiguities and contradictions where they exist. As for the format of the session, we’ll spend the first 25 to 50 minutes on presentations by panelists, then another 25 minutes for Q&A and comments in an interactive dialogue with participants, and we will conclude the session with brief wrap-up remarks from the panelists. So let me begin with a brief introduction of today’s panelists.
Before introducing all the panelists, I would like to acknowledge one panelist who cannot be with us: Nadim Nashif, the founder and director of 7amleh, a Palestinian digital rights group. He was originally on the list of panelists but is unfortunately unable to travel today due to the ongoing war between Israel and Iran. However, we have his colleague with us today, Tiesa Meccrewfy; she’s an EU advocacy officer of 7amleh. Another panelist who joins us online is Francisco Brito Cruz; he’s a law professor at Fundação Getulio Vargas in Brazil, also executive director of Internet Lab and consultant at the OHCHR B-Tech project. Another panelist with us here in the room is Chantal Joris; she’s a senior legal advisor at Article 19, and next to her is Nieves Molina, chief advisor of tech business and human rights at the Danish Institute for Human Rights. And last on that side is Mikiko Otani, the former chair of the UN Committee on the Rights of the Child and also an ICJ commissioner. So let’s begin the discussion. For the first issue, I would like to begin with Chantal. The topic we are discussing today, international law in the digital space, begins with the concept that international law applies online as it does offline; there is a certain consensus on this. But in reality, when we talk about international law, people ask which one we are talking about: international human rights law, international humanitarian law, international criminal law, all of them, or the framework on corporate responsibility? So probably you can help us set the scene: how do these frameworks interact, or do they even interact with each other? Is there any harmonization across this patchwork of international law, or not? And are there any gaps that exist or remain? The floor is yours.


Chantal Joris: Thank you very much, and thanks everyone for joining us. Yes, as you said, there is broad consensus that international law applies to cyberspace and that cyberspace is not a lawless space, but what that means in practice is still very much subject to discussion, and to controversial discussions, for example, on the extent to which cyber conduct might constitute an illegal use of force or a violation of the principle of non-intervention. These questions are being discussed within different initiatives. Under international humanitarian law, for example, the ICRC has been working extensively on understanding how those norms, in particular the binding rules that were established many decades ago, can still be interpreted in a manner that keeps them relevant in cyberspace. International human rights law probably has the most advanced understanding of how it applies: as you mentioned, the same rights are understood to apply offline as much as they apply online. There are extensive reports, for example, by mandate holders and by the Human Rights Committee, explaining what it means, for example, to be able to enjoy freedom of expression online, the role of online platforms and so on. International criminal law is also catching up to the realities of how cyber conduct can contribute to, or even constitute, atrocity crimes. Tomorrow, for those who will be here, there will be another session discussing the initiative of the International Criminal Court's Office of the Prosecutor to establish a policy on how cyber-enabled crimes might fall under the scope of the Rome Statute. And of course, there are also the well-known Tallinn Manual and Oxford Statements. So, there are a lot of initiatives recognizing that it is important for international law to remain relevant, and that we also understand how it applies in cyberspace.
And certain questions, I would say, are controversial questions under international law anyway, and they also manifest in cyberspace, one example being the extraterritorial application of human rights. Information operations by a certain state, whether they target the home population or the population in a foreign country, does that shape the obligations that the state has? Does it mean there could be a protection gap when it comes to the rights holders who might be impacted by certain operations? So, what I would say is that those initiatives are very relevant, but there could probably be more effort to avoid fragmentation of the responses to how international law applies to cyberspace, to harmonize it more, and to avoid a protection gap. For example, coming back to a potential disinformation campaign that is inciting and dehumanizing and targets people in another country: potentially, you could have the prohibition of propaganda for war under Article 20 of the ICCPR, or the prohibition of hate speech under Article 20, Paragraph 2 of the ICCPR. Certain of those operations could also be prohibited under international humanitarian law, for example, under the obligation to respect international humanitarian law. It could potentially be a violation of the prohibition of direct and public incitement to genocide. So we have all these different frameworks that have something to say about this type of content, this type of state conduct, if that's what we want to focus on. But how they interrelate is not always so established. And from the perspective of the impacted communities and the rights holders who might be harmed by those types of information operations, it is not that relevant whether it's humanitarian law or human rights law and which one is lex specialis, even though those can make for very interesting legal discussions.
But I think it is important that we find a way to, again, operationalize it, clarify it and make clear what each actor’s obligations are and what, again, the rights of the impacted communities really are with respect to all these legal frameworks.


Sanhawan Srisod: Thank you so much. I think you point out very important issues, and one of the most concerning is indeed the fragmentation of law. For example, when we talk about incitement to hatred or violence, or propaganda for war, it is there in almost every body of law, in every form, I mean in ICL, in IHL, as well as under the human rights framework, but it has been discussed and interpreted separately. Under international human rights law, for example, we talk about the Rabat Plan of Action, in which we try to interpret these provisions, but without taking IHL or ICL into consideration at all. So I want to link this to our next speaker, Tiesa, because you have been working in a context in which these three bodies of law have collided and interacted, including on the issues that Chantal gave as examples: propaganda for war and incitement to hatred or violence. I understand that at the end of last year, Hamle also published a report about digital rights, genocide and big tech accountability in Gaza, in which you record a rise in online hate speech and dehumanization targeting Palestinians. So perhaps you can explain this a little bit, provide your first-hand observations on how this law operates in practice, and tell us whether you see gaps when it comes to fragmentation of the law in practice, especially in the context of Gaza.


Tiesa Meccrewfy: Thank you so much. Non-state actors, including big tech companies, have obligations under international frameworks, such as the UN Guiding Principles on Business and Human Rights to respect and protect digital and human rights. These principles mandate that businesses must conduct due diligence to identify, prevent, mitigate, and account for how they address their impacts on human rights. The right to access the Internet, freedom of expression, freedom of opinion, and privacy are all essential for individuals to share their experiences, seek justice, and advocate for their rights. Violations of these rights during such critical times, such as what’s happening in the Gaza Strip right now, not only silence marginalized voices, but also hinder efforts to address and prevent atrocities. So, in the context of the war on Gaza, the protection of digital rights is paramount. Tech companies and online platforms play a real critical role in documenting human rights abuses, sharing information, and mobilizing support as well. Systematic censorship and discriminatory content moderation policies by these platforms, as seen in the suppression of Palestinian voices, undermine these digital rights. And the disproportionate over-moderation leads to restrictions limiting the reach of Palestinian content at the international level. In some cases, it can completely suspend users. Palestinian and international news outlets, as well as journalists, have all experienced content takedowns and account restrictions on Instagram and Facebook specifically. Another contributing factor to censorship is obviously government requests for content takedowns on social media platforms, which complicates the issue of transparency and bias in content moderation. The United Nations Committee on the Elimination of Racial Discrimination already expressed serious concern about the sharp increase in racist hate speech and dehumanization directed at Palestinians since October 7th. 
particularly on the internet and in social media. And the ICJ order on the plausibility of genocide highlights the gravity of the situation, as the Court is considering in this case the documented use of online platforms to incite genocide against Palestinians in Gaza. All social media services operating in Israel and Palestine need to prioritize a comprehensive approach that really mainstreams and safeguards human rights and addresses the root causes of discrimination against the community and its narratives, in full transparency and in line with the United Nations Guiding Principles on Business and Human Rights. And to finish: because there is an intersection of digital rights and genocide in Gaza, we should highlight the urgent need for robust protections and accountability to ensure that digital spaces remain open and equitable for all. Thank you.


Sanhawan Srisod: Thank you so much, Tiesa. In the context of Gaza, I think you already listed out the many studies, as well as statements made by Special Rapporteurs, UN mechanisms and others, on the use of platforms to commit prohibited acts that may amount to crimes under international law, including genocide, and also on the protection gaps, which have not yet been closed. This touches upon another layer, because in the situation of Gaza now it's not just about protection; there is an issue about accountability as well, if we talk about the possibility of crimes under international law being committed there. So I would like to move to the next speaker to touch upon the issue of accountability when it comes to human rights violations or abuses committed in the digital space, including those that may amount to crimes under international law. So, Nieves, perhaps you can talk about the accountability framework, as well as the challenges in accessing justice and remedies for victims of human rights violations or abuses committed online, particularly those who have to navigate a fragmented legal framework.


Nieves Molina: Thank you. Well, this is a quite impressive room. Building on what my colleagues here have been talking about, I would like to talk about the growing gap in accountability and, as a consequence, the growing gap in victims' access to remedies. International law has a well-established framework of obligations in relation to access to reparations, access to justice, prevention of impunity, procedural remedies, compensation and rehabilitation, including taking all possible measures so that violations do not occur again. Yet we see that crimes, violations or wrongdoings enabled by digital or cyber technology are treated with a level of exceptionalism. So there is a paradox: while most legal scholars think that international law has most of the principles that would help us provide remedies or regulation, there is a certain exceptionalism in producing new legislation and attempting to create new laws, which take time to create, but at the same time create a danger of fragmentation of the law. So, from my point of view, there are two or three things, among others, that appear as barriers, which I want to talk about in these five minutes. One of them is this idea of the fragmentation of the law: the idea that a specialist in environmental law, a specialist in IHL, in ICL, in cyber law, seem to be operating separately and uncoordinated, not knowing what the other specialties are doing. That creates a lack of certainty, it enables forum shopping, where different actors seek the most favourable regulation for their conduct, and it creates difficulties for cooperation. Ultimately, the result is avenues for impunity.
The human rights system is a coherent system where all human rights are interrelated, we say, but there has been an attempt to analyse right by right, even though, from my point of view, there is no right that is not affected by digital, cyber and new technologies as we advance at quite a fast pace. Finally, the close relationship between states and corporations has created a blurred reality in which it is difficult to know or define what levels of responsibility different actors have in a given situation. A number of situations have also raised questions where international crimes have been committed with the help or assistance of cyber or digital technologies. Some scholars are starting to call for the international criminal responsibility of companies as well, and there are a number of initiatives on whether it would be possible to expand the ICC Rome Statute to include legal persons too. Eventually, there are two ideas that I would like to put forward. One is: do we have all the tools in international law that we require? Law is, by definition, a reaction to social changes, and human rights are the safeguards that we put up at every turn of the advancement of societies. And the second question I would like to put to you is whether, in the middle of all this information overload, international law and the content of human rights have also been victims of, or targeted by, misinformation campaigns that relativize international law and question its usefulness, creating gaps of accountability and gaps of regulation. I want to stop there so that we can have time for an interaction.


Sanhawan Srisod: Thank you so much, Nieves. I think you put forward a very important point: that there is still a blurred line between states and corporations when it comes to who shall be held accountable for certain actions. And it seems the current international law framework has still not caught up to that challenge, although there is an ongoing effort to address it. So, moving next to Francisco, who can probably fill in the gaps on the corporate accountability side. Francisco, under international law, corporations are expected to conduct human rights due diligence, and they are also expected to be held accountable when certain conduct is committed on their platforms, for example. But in reality, we have not seen many examples where a platform is held accountable when it is suspected of involvement in certain kinds of actions. So perhaps you can shed light on these issues: what is the status of international law now when it comes to platform accountability? And at the same time, because under international law we have to avoid disproportionate restrictions on the human rights of online users, how can we strike the balance between accountability and protection of rights? Perhaps you can also give examples from the recent rulings in Brazil on this issue.


Francisco Brito Cruz: Thank you. I hope you are all listening to me. Hello from Sao Paulo. I wish I could be with all of you in Norway. I'm happy that my colleagues laid the ground first in terms of international law and these discussions. Just a disclosure: I'm not the Executive Director of InternetLab anymore. Now I am a law professor and an expert consultant for the B-Tech project within the Office of the High Commissioner for Human Rights at the United Nations. So in these five minutes I will deal with three questions. The first one: it seems, as we enter this discussion, and as our moderator was asking me, that we have some tensions within international human rights law. We have prohibitions on a number of different discourses of incitement, for example, but also protections for freedom of expression and the need to strike a balance. And we also have tools like the United Nations Guiding Principles on Business and Human Rights. So the first question is how to see this: are these tensions, or are these layers? I think it is interesting to see how the Guiding Principles can act as a more operative layer here. And maybe a good way to start this discussion is the assertion that inaction can produce violations, but action can also produce violations; and that corporate actors, corporate power, and state actors as well can participate in violations. This makes it difficult to navigate, but I think the Guiding Principles can at least provide us a toolbox of different approaches to how we see corporate responsibility. So the second question is: can we turn corporate responsibility into platform accountability? I think this is the main challenge when we look at the Guiding Principles. What will make this happen is not only asserting that the principles and human rights law are valid.
I think this is very important, but also, as the Guiding Principles show us, we need to build method around it, thinking about what human rights due diligence means: not any form of due diligence, but a specific one in which we embed method, embed transparency, and raise the bar. On that, I would like to mention the resources that the B-Tech project has. B-Tech is a project within the Office of the High Commissioner that is trying to make this translation: what does it mean to take all of the principles and human rights that are in different documents and different tools of international human rights law, and how can we build method to bring companies into this kind of compliance? In terms of AI, for example, we have a different set of tools that we need to apply, which combine not only transparency but also building expertise on red teaming, for example, and on content regulation. And that puts us in a position where we need not only to point to methods, but also to think of ways to monitor how they are being deployed and what results we are getting over time. And to end my first contribution here: landing this discussion in context is very difficult. In Brazil, for example, we are seeing the judiciary trying to build a platform accountability field. It's important to say, as a Brazilian, that we have a judiciary with a number of peculiarities that can be very proactive. The judiciary is trying to play an important role after an attempted coup, and also after challenges from tech sector leadership defying the rule of law. And in that, we are seeing how difficult it is to build this field, and to build an idea of platform accountability and even human rights due diligence, without, for example, state capacities or without a regulator.
But of course, as I said at the beginning of my contribution, not only can inaction produce violations, action can produce violations as well. So taking those steps is very difficult. And I just want to share one thing to make us think about how important the discussion about incitement is, as my colleagues were commenting before me. We had an interesting episode, the blocking of the platform X in Brazil. We saw how much this was spoken about on the international stage, and how much disinformation was said to be the key motive for the Supreme Court to block this platform in Brazil. And this is not true: the core case that the Supreme Court relied on was an incitement case against a law enforcement officer. So advancing this translation is really, really important, not only for setting up standards for different contexts, but also for grounding in those contexts the capacity that we need to prevent violations, not only by state power, but also violations that can be facilitated by the private sector. So I'll leave it here, and I'm eager to hear all of you in our interactions.


Sanhawan Srisod: Thank you so much, Francisco. So we have the last panelist with us, and indeed there is one topic we haven't yet fully covered today, which is the disproportionate impact of online harm on specific groups, especially children, whose lives are increasingly shaped by the digital environment. So, Mikiko, under international law there is clear recognition of children's rights online. However, are there any specific obligations imposed on states and corporations when it comes to the protection of children in the online space? Perhaps you can share those obligations with us, and also how the protection of this group could be strengthened. The floor is yours.


Mikiko Otani: Thank you very much. So in the five minutes I have, I'd like to bring in the perspective of child rights; there are many other groups of persons who need special attention, but in my case, children. In 2017, a UNICEF report said that worldwide, one in three internet users is a child. That is the reality we learned in 2017, and when I joined the Committee on the Rights of the Child, actually in 2017, we started learning more and more how important the digital environment is to children. What we learned is that the digital is really part of children's daily lives. Of course there is still some digital divide; however, children are living with the digital, and children's rights are impacted, I have to emphasize, both positively and negatively. We very often emphasize the negative impact of the digital on children's rights; however, children's rights are also promoted and enhanced by the digital, so this is the reality. So in 2019, the Committee on the Rights of the Child decided that we needed to work on children's rights in relation to the digital environment. The Convention on the Rights of the Child was adopted by the General Assembly in 1989. We were convinced by the reality that the Convention needs to be read, understood, applied and implemented in relation to the digital environment, because this is the life children are living nowadays. We cannot ignore this reality, and we have to show how the Convention on the Rights of the Child is relevant to children's rights. But I also want to emphasize something I learned through the process of the Committee's work to develop this general comment on children's rights in relation to the digital environment. It took two years of drafting, because we had a wide public consultation, including consultation with the children. So what I'm going to share with you is what I learned from the children.
So children are living in the digital world; however, online and offline are seamless. It's not so easy to differentiate what is online and what is offline for children. Secondly, almost all the children's rights under the Convention are impacted by the digital environment. Of course freedom of expression and privacy, but also many other things like the right to play, education and health. In particular these days, mental health is such a serious issue for children's rights in relation to the digital. And also how to develop personal relationships with others, starting from the relationship between children and parents. All those things are impacted. And I have to emphasize the important role of parents. We talk a lot about privacy, for example, but if parents are not aware of how they are actually exposing their children to risks online, then children's rights are not protected. Those are the things I learned. But the most important message from the children is that they want to use the digital. They told us, the Committee, that they don't want to be totally protected or excluded from the digital space; they want to use the online space safely. This is a very strong message from the children. So, of what the Committee said in this general comment, adopted in 2021 after the two-year consultation, I'd like to bring up three issues. One, the Committee said that states and businesses should integrate child rights impact assessments. And particularly, I'd like to emphasize one thing: the Committee said that privacy and safety in relation to the design, engineering, development, operation, distribution and marketing of products and services is very, very important to protect children's safety and privacy in the digital world.
Second, what I want to share today is that it's very important to understand children's rights comprehensively if we want to address children's rights in the digital space. In particular, children are all persons under 18 years old, so you can imagine how differently the digital space will impact younger children and adolescents. So evolving capacities is a very important concept. Thirdly, remedies. For children, what does access to justice and remedies for online harm mean? We need to integrate those perspectives, and to do that, we need to hear from the children; their lens and their experiences, what they think about how to protect themselves, are very key. Thank you very much.


Sanhawan Srisod: Thank you so much, Mikiko. Next, I would like to invite all participants to ask questions and share perspectives. I know that after five speakers, most of what we shared are challenges due to the fragmentation of international law, especially in terms of providing protection to individuals and communities, and also on accountability. Of course, there are still gaps, but there is still a global effort to address them. If anyone would like to share or ask any questions, please.


Audience: Hello, my name is Ulvia. I'm from The London Story. We recently published a report called Escalate, where we describe how, during the India-Pakistan tensions and military conflict in April-May 2025, platforms like X and Meta amplified violent and hateful content, while the Indian government pushed for takedowns of critical voices like journalists and human rights defenders. Around 8,000 accounts that voiced alternative narratives were taken down. For us, this raises a serious issue: when states use content moderation to silence dissent, but don't stop harmful content that can fuel violence, the population is left more vulnerable online, especially during conflict. So my question would be: when it's the state itself contributing to the erosion of online civic space, who is responsible for protecting civilians in these digital spaces? And how should international law respond to these double standards, where both the states and the platforms fail to act in the public interest? I would just like to hear your perspective on this, because we also work on this issue. Thank you so much. Are you taking questions in turn?


Sanhawan Srisod: I think we probably can take one more question and then we answer.


Audience: Yeah, sounds good. I think my question relates to what was just mentioned about the role of the state. We've heard from the panel the idea of international law placing duties and responsibilities in terms of due diligence on corporations. And I think this is the language, or the idea, that we are increasingly seeing in domestic legislation as well: platform regulation also requires companies to exercise some kind of due diligence, to assess risk, with risk mitigation, risk identification, or a duty of care. The UK Online Safety Act, for example, embodies this concept. So my question is: how do you see the two of them fitting together, or not? Are international law and these domestic efforts to hold platforms to account friends or foes? How can we make them friends and not foes? Because you also hear from some voices, and I think some of the previous comments went in that direction, that states can also violate international human rights law. So how can regulation, both international and domestic, drive change in accountability in the same direction and not be conflicting? Yeah, that's my question. Thank you so much.


Sanhawan Srisod: I think we'll probably take a first round of questions.


Audience: My audio device isn't working so well, so I'm not sure if I'm being heard right now. Okay, great. Hi, I'm Ana Galate. I represent UC Berkeley. I work as a public interest cybersecurity researcher and practitioner, and I have a background in software development, psychology, philosophy, and so on. I want to say thank you, first of all, for this overview of international law and its application. During our time here, the focus largely appears to be how current international law, human rights law and criminal law could be applied or amended to better intersect with the rights needing protection within the digital sphere. But the nuances are far more complex, given how intertwined and interdisciplinary the journey of our digital selves truly is: complex domains such as political infrastructure, behavioral psychology, economic power and tech design are all purposely interwoven into a digital user's daily systems. My question: are there any efforts underway for separate legislation for our digital selves, one that is far more holistic, to protect, for example, cognition and mind and our thoughts? There is new technology deployed day by day at an exponential rate that is attempting to scrape and source by any means possible to better build our digital profile and use us as pawns. So I'm just wondering if there are any separate efforts, given all that. Thank you.


Sanhawan Srisod: Can I proceed? Yes, please.


Audience: Okay, thank you. My name is Christian Fazili. I'm from the Democratic Republic of Congo, and I work as a lecturer and researcher at the University of Goma. I'm also a civil magistrate. Thank you for the wonderful presentation. I have a few questions related to this topic. Beyond voluntary commitments, what binding framework would ensure equitable digital governance, such as a global treaty on tech accountability? Another one, related to due diligence: to what extent are states obligated to prevent malicious cyber activity originating from their territory? And the last one: how can this be enforced without infringing sovereignty? Thank you very much.


Sanhawan Srisod: Thank you so much for all the questions. I think we have a question on who is going to protect civilians, and how to respond to double standards, when the state itself contributes to the erosion of civic space. Who wants to respond? Chantal, do you want to respond to the first question? And we can give the second question to Francisco. Perhaps you want to answer the third question. Okay, the first question first.


Chantal Joris: Let me perhaps make some overarching observations on a number of the questions. I think one of the challenges is that there are massive challenges as to conduct in the digital space. There are new digital harms, and technologies are evolving fast; how can we keep up? And we know that those advancements are often much quicker than the ability of international bodies, and of domestic parliaments as well, to come up with good rules. So one of the problems we have right now is that I don't think we're exactly in the golden age of treaty-making. That means we will often try to rely on existing international law rules and see how we can operate with them, how we can interpret them so they remain relevant, because it seems reasonably unlikely that a new Geneva Convention will be adopted, ratified and widely implemented anytime soon. The second challenge is this: I understand the desire to regulate some of these issues, but the devil is in the detail. Following up on what Francisco and other panelists said, it's a very complex space, and, as has been mentioned, inaction can be problematic and action can also be problematic. There was a question about a global treaty on digital accountability, I think, or again, lawmaking to address some of these technologies; some of the legislation we see is very reactive and not based on human rights. So you have this disconnect between the international law obligations of states, particularly their human rights obligations, and what they do domestically: they don't bring their domestic legislation in line with human rights, and they don't legislate based on proper expertise, so it's not necessarily good lawmaking. Again, you might have to bring in a psychologist, and you want to hear from children's rights groups, when you regulate those issues. So the response can sometimes be almost as problematic as the lack of response. Perhaps also one point on the role of online platforms in armed conflicts, India-Pakistan for example: one thing to mention, and I think Nieves has mentioned this as well, is that platforms often operate on, or respond to, government demands. So it's not that states do one thing and platforms do another; more and more, depending on the government and its power, there is very close cooperation, and that engages both states' obligations and companies' obligations. And I think there is also a big responsibility for us to understand whether the right measures have been taken. I'll wrap up. The devil is in the details with these measures, be it, as Francisco mentioned, the blocking of X in Brazil or the arrest of the Telegram CEO in Paris. We really need to have the details and understand why exactly it has been done. There needs to be transparency as to the real reasons, so we can really scrutinise and assess whether those measures are in line with international law. I tried to connect a few things.


Nieves Molina: There are a couple of interesting points. The issue of state actors' behaviour on platforms is one that needs to be addressed. And I'm not sure, I agree with Chantal, I'm not sure it's the golden age of new legislation.


Chantal Joris: It’s not.


Nieves Molina: It’s not.


Chantal Joris: No, no, no.


Nieves Molina: That's what I said, that I agree. It's not the golden age of it, because we have seen it at the international level, but also at the national level. We have a couple of projects where we try to compare national legislation with international obligations, and every single time we face this reaction of legislating more, what we get is a more repressive kind of legislation. So that's one thing. The other thing is that there is legislation in relation to incitement, to justification of war, to justification of crimes. We have examples of that: we have the Media Trial, and before that, Nuremberg. So the concepts in law may exist; it is the way we implement them, and the willingness to enforce them, where I feel we are lagging behind. And I want to address the issue that the second question brought up, about technology advancing so much. As a friend of mine wrote, the issue of freedom of thought now takes on a different connotation, because technology may advance to the point where what you think becomes reachable even if you don't announce it. As technology advances, we will have to take decisions on whether the legislation we have covers it. But what I would warn against is assuming we have a real vacuum, when what we may have instead is an unwillingness to enforce international obligations that already exist. I think I'll leave it there.


Sanhawan Srisod: Yes, probably also on the second question. Francisco, perhaps you can help answer on domestic legislation on platform regulation, and also on whether, beyond voluntary commitments, there are any binding frameworks, especially on tech accountability and due diligence.


Francisco Brito Cruz: I'll make some comments. Can you hear me? Yes? Great. So the question is whether platform regulation for accountability, and even international human rights law, are friends or foes. I think they can be both, depending on the situation. As Chantal was saying, action and inaction can both be problematic if you look at what is happening, and there is no recipe for this. But maybe a few points should be made. First, with or without legislation, with or without some kind of regulatory framework, I believe that human rights frameworks, looking to the United Nations Guiding Principles on Business and Human Rights, can be fruitful, at least in laying the ground for interventions that help us know more about what is happening, about what is working or not, and that help us work out the right way to intervene or to demand corporate responsibility in a very concrete way. But these are baby steps; it is not something you can sketch out from one day to the next. And all of that should involve stakeholder engagement, participation, and transparency tools for civil society. That is part of a human rights due diligence process, and it can also help ensure that everyone is participating and everyone is on the same page. Also, with legislation, when we are looking at regulatory frameworks, it is key to monitor how the legislation is performing: working out what the checks and balances on state action and state power are, and recognising that there is no finish line. There is intervention and testing, then more intervention and testing. Some things can go wrong, and we need to acknowledge that; some legislative innovations can go wrong as well. I don't see this as a recipe for paralysis; I see it as a way of learning what is working or not.

And to end, remedies are also key, not only remedies for corporate power and corporate decisions, but also remedies for state decisions. To really end, just a question here, and I would love to exchange more on it. We have been talking a lot about sovereignty and how important it can be for ensuring corporate responsibility. But an important question should be asked: sovereignty can be a path to many things, and the rule of law can be a path to many things. I think that we, as human rights defenders and people who believe in human rights and international law, should see sovereignty and the rule of law in all their complexity, to understand in what ways they can serve as a path for the protection of human rights as well. But this is more of a question than an answer. Thank you.


Sanhawan Srisod: Thank you so much, Francisco. Probably the last question, as I think we are running out of time. Mikiko, there is a question about whether, given everything we have been discussing today, there is any effort towards separate legislation that tries to holistically address these issues, especially as the threat from technology grows day by day. Perhaps you can speak to it from the child rights perspective as well, and give a brief update on the efforts you are engaged in.


Mikiko Otani: Thank you so much. So, for the first question to the panel, we started by talking about what international law means: international criminal law, international human rights law, international humanitarian law. Then I brought in the child rights perspective, and I emphasised that children are not the only group that requires specific attention. I also heard Nieves talking about the risk of fragmentation, so how can we approach all of these issues holistically? By sharing the children's perspective, I do not mean to go back into a silo or a fragmented approach. I think we really need to bring in various perspectives and diverse voices when we talk about this big question of how international law applies, and what role it should play, in the digital space. So there is no single answer; however, a consultation process that hears from various voices, not only from different groups but also regionally, because there are many different initiatives and challenges, is the way we should go. Thank you.


Sanhawan Srisod: Thank you so much. I think our time is up, so thank you all for your time, and I hope today's session was helpful for you. This is just the beginning of the conversation on this topic, and we hope to engage more with all of you on it in the coming years. Thank you so much.


C

Chantal Joris

Speech speed

137 words per minute

Speech length

1225 words

Speech time

533 seconds

International law applies to cyberspace but practical implementation remains controversial

Explanation

While there is broad consensus that international law applies to cyberspace and that it is not a lawless space, what this means in practice is still very much subject to discussion and controversial debates. Questions about cyber conduct constituting illegal use of force or violations of non-intervention principles are being actively discussed.


Evidence

Examples include discussions about whether cyber conduct might constitute an illegal use of force or a violation of the principle of non-intervention


Major discussion point

Application of International Law in Digital Spaces


Topics

Legal and regulatory | Cybersecurity


Agreed with

– Nieves Molina
– Francisco Brito Cruz

Agreed on

International law applies to cyberspace but implementation remains challenging


International human rights law has the most advanced understanding of digital application

Explanation

Among different bodies of international law, human rights law has developed the most comprehensive framework for understanding how legal principles apply in digital spaces. The same rights that apply offline are understood to apply online as well.


Evidence

Extensive reports by mandate holders and the Human Rights Committee explain what it means to enjoy freedom of expression online and the role of online platforms


Major discussion point

Application of International Law in Digital Spaces


Topics

Human rights | Legal and regulatory


Different legal frameworks (IHL, ICL, human rights law) address similar issues separately, creating fragmentation

Explanation

Various international legal frameworks including international humanitarian law, international criminal law, and human rights law all have something to say about similar digital conduct, but they operate separately without clear coordination. This creates confusion about how these frameworks interrelate and which takes precedence.


Evidence

Example of disinformation campaigns that could potentially violate Article 20 of ICCPR (propaganda for war/hate speech), international humanitarian law obligations, or constitute incitement to genocide


Major discussion point

Application of International Law in Digital Spaces


Topics

Legal and regulatory | Human rights | Cybersecurity


Agreed with

– Nieves Molina
– Sanhawan Srisod

Agreed on

Fragmentation of legal frameworks creates protection gaps and accountability challenges


Technology advances much quicker than ability of international bodies and domestic parliaments to create good rules

Explanation

One of the major challenges in digital governance is that technological advancements occur at a much faster pace than the ability of international organizations and domestic legislative bodies to develop appropriate regulatory responses. This creates a gap between technological capabilities and legal frameworks.


Major discussion point

Challenges in Digital Governance and Regulation


Topics

Legal and regulatory | Development


Agreed with

– Nieves Molina
– Audience

Agreed on

Technology advances faster than legal frameworks can adapt


Not in the golden age of treaty making, so must rely on existing international law rules and interpretation

Explanation

Given the current international political climate, it seems unlikely that new comprehensive international treaties will be adopted and widely ratified soon. Therefore, the focus must be on interpreting and applying existing international law rules to remain relevant in the digital age.


Evidence

Mentions that it seems reasonably unlikely that there will be a new Geneva Convention adopted, ratified, and widely implemented anytime soon


Major discussion point

Challenges in Digital Governance and Regulation


Topics

Legal and regulatory | Human rights


Disagreed with

– Nieves Molina
– Francisco Brito Cruz
– Audience

Disagreed on

Approach to new legislation vs. existing law interpretation


Reactive legislation often not based on human rights creates disconnect between international obligations and domestic implementation

Explanation

Much of the current digital legislation is reactive rather than proactive and often fails to incorporate human rights principles. This creates a disconnect between states’ international human rights obligations and their domestic legal frameworks, resulting in poor lawmaking that lacks proper expertise.


Evidence

Notes that domestic legislation often doesn’t align with human rights obligations and isn’t based on proper expertise, mentioning the need to involve psychologists and children’s rights groups when regulating digital issues


Major discussion point

Challenges in Digital Governance and Regulation


Topics

Legal and regulatory | Human rights


Disagreed with

– Nieves Molina
– Francisco Brito Cruz
– Audience

Disagreed on

Approach to new legislation vs. existing law interpretation


N

Nieves Molina

Speech speed

111 words per minute

Speech length

931 words

Speech time

499 seconds

Need for harmonization to avoid protection gaps and forum shopping

Explanation

The fragmentation of legal approaches creates situations where different actors can seek the most favorable regulation for their conduct, leading to forum shopping. This fragmentation also creates lack of certainty and difficulties for cooperation, ultimately resulting in avenues for impunity.


Evidence

Mentions that fragmentation creates lack of certainty, enables forum shopping where different actors seek the most favorable regulation, and creates difficulties for cooperation


Major discussion point

Application of International Law in Digital Spaces


Topics

Legal and regulatory | Human rights


Agreed with

– Chantal Joris
– Sanhawan Srisod

Agreed on

Fragmentation of legal frameworks creates protection gaps and accountability challenges


Law is by definition a reaction to social changes, and human rights are safeguards for societal advancement

Explanation

Legal frameworks naturally evolve in response to social changes and technological developments. Human rights serve as essential safeguards that must be maintained and adapted as societies advance and face new challenges.


Major discussion point

Application of International Law in Digital Spaces


Topics

Human rights | Legal and regulatory


Close relationship between state and corporations creates blurred reality regarding levels of responsibility

Explanation

The increasingly close relationship between state actors and corporate entities has created a complex situation where it becomes difficult to clearly define and distinguish the levels of responsibility that different actors have in given situations. This blurring of lines complicates accountability mechanisms.


Major discussion point

Corporate Accountability and Platform Responsibility


Topics

Legal and regulatory | Economic


Some scholars are calling for international criminal responsibility of companies and expanding the ICC’s Rome Statute to include legal personalities

Explanation

In situations where international crimes have been committed with the assistance of cyber or digital technologies, some legal scholars are advocating for extending international criminal responsibility to corporate entities. This would involve expanding the International Criminal Court’s Rome Statute to include legal personalities beyond individuals.


Major discussion point

Corporate Accountability and Platform Responsibility


Topics

Legal and regulatory | Cybersecurity


Growing gap in accountability and victims’ access to remedies for digital crimes

Explanation

Despite well-established international legal frameworks for access to reparations, justice, and remedies, there is an increasing gap in accountability for digital crimes. Victims of digital violations face significant challenges in accessing justice and obtaining appropriate remedies.


Evidence

Notes that international law has well-established frameworks for access to reparations, justice, prevention of impunity, procedural remedies, compensation, and rehabilitation


Major discussion point

Accountability Gaps and Access to Justice


Topics

Legal and regulatory | Human rights


Crimes enabled by digital technology are treated with exceptionalism despite existing legal frameworks

Explanation

There is a paradoxical situation where crimes committed or enabled by digital or cyber technology are treated as exceptional cases requiring new laws, even though existing international legal frameworks contain most of the principles needed for regulation. This exceptionalism creates delays and fragmentation.


Evidence

Mentions the paradox where legal scholars believe international law has most principles needed for regulation, yet there’s exceptionalism in creating new legislation


Major discussion point

Accountability Gaps and Access to Justice


Topics

Legal and regulatory | Cybersecurity


Agreed with

– Chantal Joris
– Francisco Brito Cruz

Agreed on

International law applies to cyberspace but implementation remains challenging


Disagreed with

– Chantal Joris
– Francisco Brito Cruz
– Audience

Disagreed on

Approach to new legislation vs. existing law interpretation


Fragmentation creates lack of certainty, enables forum shopping, and creates avenues for impunity

Explanation

The fragmented approach to digital governance, where different legal specialties operate separately without coordination, creates legal uncertainty. This allows different actors to seek the most favorable regulatory environment for their conduct and ultimately creates opportunities for avoiding accountability.


Evidence

Mentions that specialists in environmental law, IHL, ICL, and cyber law seem to operate separately without coordination


Major discussion point

Accountability Gaps and Access to Justice


Topics

Legal and regulatory | Human rights


Technology may advance to the point where freedom of thought takes different connotation as thoughts become reachable

Explanation

As technology continues to advance rapidly, there may come a point where even human thoughts become accessible through technological means, even if not explicitly announced. This would fundamentally change the concept of freedom of thought and require new legal considerations.


Major discussion point

Challenges in Digital Governance and Regulation


Topics

Human rights | Cybersecurity


Agreed with

– Chantal Joris
– Audience

Agreed on

Technology advances faster than legal frameworks can adapt


T

Tiesa Meccrewfy

Speech speed

131 words per minute

Speech length

417 words

Speech time

189 seconds

Tech companies have obligations under UN Guiding Principles on Business and Human Rights to conduct due diligence

Explanation

Non-state actors, including big tech companies, have clear obligations under international frameworks such as the UN Guiding Principles on Business and Human Rights. These principles mandate that businesses must conduct due diligence to identify, prevent, mitigate, and account for how they address their impacts on human rights.


Major discussion point

Corporate Accountability and Platform Responsibility


Topics

Human rights | Legal and regulatory


Agreed with

– Francisco Brito Cruz
– Audience

Agreed on

Corporate accountability requires moving beyond voluntary commitments to concrete methodologies


Systematic censorship and discriminatory content moderation policies suppress Palestinian voices and undermine digital rights

Explanation

Tech companies and online platforms engage in systematic censorship and discriminatory content moderation that specifically targets and suppresses Palestinian voices. This disproportionate over-moderation leads to restrictions that limit the reach of Palestinian content internationally and can result in complete user suspensions.


Evidence

Palestinian and international news outlets, as well as journalists, have experienced content takedowns and account restrictions on Instagram and Facebook specifically


Major discussion point

Digital Rights Violations and Censorship


Topics

Human rights | Sociocultural


Government requests for content takedowns complicate transparency and bias in content moderation

Explanation

Government requests for content takedowns on social media platforms add another layer of complexity to the issue of censorship and bias in content moderation. These requests compromise transparency and can contribute to discriminatory enforcement of platform policies.


Major discussion point

Digital Rights Violations and Censorship


Topics

Human rights | Legal and regulatory


F

Francisco Brito Cruz

Speech speed

119 words per minute

Speech length

1382 words

Speech time

696 seconds

Need to translate corporate responsibility into platform accountability through specific methodologies

Explanation

The main challenge with the UN Guiding Principles on Business and Human Rights is moving beyond simply asserting that principles and human rights law are valid to actually building concrete methods for implementation. This requires developing specific methodologies that can effectively translate corporate responsibility into measurable platform accountability.


Evidence

Mentions the BTEC project within the Office of the High Commissioner that is trying to make this translation from principles to practical methods


Major discussion point

Corporate Accountability and Platform Responsibility


Topics

Human rights | Legal and regulatory


Agreed with

– Tiesa Meccrewfy
– Audience

Agreed on

Corporate accountability requires moving beyond voluntary commitments to concrete methodologies


Human rights due diligence requires specific methods, transparency, and raising the bar beyond general due diligence

Explanation

Effective human rights due diligence is not just any form of due diligence but requires specific methodologies, enhanced transparency, and higher standards. This includes building expertise in areas like red teaming for AI and content regulation, along with methods to monitor deployment and results over time.


Evidence

Examples include building expertise on red teaming for AI and content regulation, with methods to monitor how they are being deployed and their results


Major discussion point

Corporate Accountability and Platform Responsibility


Topics

Human rights | Cybersecurity


Building platform accountability is difficult without state capacities or proper regulation

Explanation

The experience in Brazil shows how challenging it is to establish platform accountability and human rights due diligence without adequate state capacities or proper regulatory frameworks. Even proactive judicial systems face significant difficulties in creating effective accountability mechanisms.


Evidence

Brazil’s judiciary trying to build platform accountability after an attempted coup and challenges from tech sector leadership defying rule of law


Major discussion point

Accountability Gaps and Access to Justice


Topics

Legal and regulatory | Economic


Both inaction and action can produce violations, making navigation complex

Explanation

The complexity of digital governance lies in the fact that both failing to act and taking action can result in human rights violations. This creates a challenging environment where decision-makers must carefully balance interventions, as even well-intentioned actions can have negative consequences.


Evidence

Example of the blocking of Platform X in Brazil, where the core case was actually an incitement case against a law enforcement officer, not just misinformation as internationally reported


Major discussion point

Accountability Gaps and Access to Justice


Topics

Human rights | Legal and regulatory


Agreed with

– Chantal Joris
– Nieves Molina

Agreed on

International law applies to cyberspace but implementation remains challenging


Need for stakeholder engagement, participation, and transparency tools in human rights due diligence processes

Explanation

Effective human rights due diligence processes must include meaningful stakeholder engagement, broad participation from affected communities, and robust transparency tools for civil society. This participatory approach is essential to ensure that everyone is involved and aligned in the process of corporate accountability.


Major discussion point

Challenges in Digital Governance and Regulation


Topics

Human rights | Legal and regulatory


M

Mikiko Otani

Speech speed

137 words per minute

Speech length

956 words

Speech time

418 seconds

One in three internet users worldwide is a child, making digital environment crucial for children’s rights

Explanation

According to a 2017 UNICEF report, children represent a significant portion of internet users globally, with one in three users being under 18. This reality makes the digital environment a crucial space for children’s rights protection and promotion.


Evidence

2017 UNICEF report stating that worldwide one in three internet users is a child


Major discussion point

Children’s Rights in Digital Environment


Topics

Human rights | Sociocultural


Digital impacts children’s rights both positively and negatively across all rights under the Convention

Explanation

The digital environment affects virtually all children’s rights under the Convention on the Rights of the Child, not just obvious ones like freedom of expression and privacy. Rights such as play, education, health (particularly mental health), and the development of personal relationships are all significantly impacted by digital technologies.


Evidence

Examples include right to play, education, health, mental health issues, and development of relationships including children-parent relationships


Major discussion point

Children’s Rights in Digital Environment


Topics

Human rights | Sociocultural


Children want to use digital spaces safely rather than be totally protected or excluded

Explanation

Through consultations with children during the development of the General Comment, the Committee learned that children do not want to be completely protected from or excluded from digital spaces. Instead, they want to be able to use online spaces safely while maintaining access to digital opportunities.


Evidence

Strong message from children during the Committee’s consultation process for the General Comment on children’s rights in digital environment


Major discussion point

Children’s Rights in Digital Environment


Topics

Human rights | Cybersecurity


States and businesses should integrate child rights impact assessment in design and development of digital products

Explanation

The Committee’s General Comment adopted in 2021 emphasizes that both states and businesses must integrate child rights impact assessments into their processes. This is particularly important in the design, engineering, development, operation, distribution, and marketing of digital products and services to protect children’s safety and privacy.


Evidence

Committee’s General Comment adopted in 2021 after two-year consultation process


Major discussion point

Children’s Rights in Digital Environment


Topics

Human rights | Legal and regulatory


Need to understand children’s rights comprehensively considering evolving capacities from younger ages to adolescents

Explanation

Protecting children’s rights in digital spaces requires a comprehensive understanding that accounts for the wide age range of children (all persons under 18) and their evolving capacities. The impact of digital spaces varies significantly between younger children and adolescents, requiring nuanced approaches.


Evidence

Concept of evolving capacities is emphasized as very important, considering children are all persons under 18 years old


Major discussion point

Children’s Rights in Digital Environment


Topics

Human rights | Development


Need to bring various perspectives and diverse voices when discussing international law’s role in digital space

Explanation

To avoid fragmentation and silo approaches in digital governance, it’s essential to incorporate various perspectives and diverse voices from different groups and regions. This includes not only different vulnerable groups but also regional perspectives, as there are different initiatives and challenges across different contexts.


Major discussion point

Holistic Approaches and Future Directions


Topics

Human rights | Legal and regulatory


A

Audience

Speech speed

132 words per minute

Speech length

751 words

Speech time

340 seconds

States use content moderation to silence dissent while failing to stop harmful content that fuels violence

Explanation

During the India-Pakistan tensions and military conflict, there was a clear double standard where platforms took down accounts that provided alternative narratives (including journalists and human rights defenders) while amplifying violent and hateful content. This demonstrates how states can manipulate content moderation to suppress dissent while allowing harmful content to proliferate.


Evidence

During India-Pakistan tensions in April-May 2025, around 8,000 accounts that voiced alternative narratives were taken down while platforms like X and Meta amplified violent and hateful content


Major discussion point

Digital Rights Violations and Censorship


Topics

Human rights | Cybersecurity


Digital users’ daily systems involve complex domains like political infrastructure, behavioral psychology, and economic power

Explanation

The digital environment is far more complex than current legal frameworks account for, involving intricate interconnections between political infrastructure, behavioral psychology, economic power structures, and technical design. These domains are purposely interwoven to influence digital users’ daily experiences and decision-making processes.


Major discussion point

Holistic Approaches and Future Directions


Topics

Sociocultural | Economic


Question whether separate legislation for digital selves is needed given exponential rate of new technology deployment

Explanation

Given the exponential rate at which new technologies are being deployed to scrape and source data to build digital profiles and manipulate users, there’s a question about whether entirely separate legislation specifically for digital selves might be needed. This would be more holistic legislation that protects aspects like cognition, mind, and thoughts from technological manipulation.


Evidence

New technology deployed daily at exponential rate attempting to scrape and source data to build digital profiles and use users as pawns


Major discussion point

Holistic Approaches and Future Directions


Topics

Human rights | Legal and regulatory


Agreed with

– Chantal Joris
– Nieves Molina

Agreed on

Technology advances faster than legal frameworks can adapt


Need for binding frameworks like global treaty on tech accountability beyond voluntary commitments

Explanation

There is a need to move beyond voluntary commitments to establish binding international frameworks that would ensure equitable digital governance. This could include mechanisms like a global treaty on tech accountability that would create enforceable obligations rather than relying solely on voluntary corporate commitments.


Major discussion point

Holistic Approaches and Future Directions


Topics

Legal and regulatory | Human rights


Agreed with

– Tiesa Meccrewfy
– Francisco Brito Cruz

Agreed on

Corporate accountability requires moving beyond voluntary commitments to concrete methodologies


Disagreed with

– Chantal Joris
– Nieves Molina
– Francisco Brito Cruz

Disagreed on

Approach to new legislation vs. existing law interpretation


Sanhawan Srisod

Speech speed

144 words per minute

Speech length

1888 words

Speech time

781 seconds

International law framework faces fragmentation challenges with different bodies addressing similar issues separately

Explanation

The moderator highlights that when discussing international law in digital spaces, there are multiple frameworks (international human rights law, international humanitarian law, international criminal law, corporate frameworks) that may address similar issues but lack harmonization. This creates confusion about which framework applies and how they interact with each other.


Evidence

Examples include incitement to hatred or violence and propaganda for war being addressed in ICL, IHL, and human rights frameworks but interpreted separately; the Rabat Plan of Action, for instance, interprets these provisions without considering IHL or ICL


Major discussion point

Application of International Law in Digital Spaces


Topics

Legal and regulatory | Human rights


Agreed with

– Chantal Joris
– Nieves Molina

Agreed on

Fragmentation of legal frameworks creates protection gaps and accountability challenges


Digital rights violations during conflicts require intersection of multiple legal frameworks

Explanation

In contexts like Gaza, there is a collision and interaction of three bodies of law (human rights law, international humanitarian law, and international criminal law) particularly around issues of propaganda of war, incitement of hate, and violence. This demonstrates the practical need for coordinated legal approaches in conflict situations involving digital spaces.


Evidence

Reference to Hamle’s report on digital rights, genocide and big tech accountability in Gaza documenting rise in online hate speech and dehumanization targeting Palestinians


Major discussion point

Digital Rights Violations and Censorship


Topics

Human rights | Legal and regulatory | Cybersecurity


Accountability gaps exist between state and corporate responsibility in digital spaces

Explanation

The moderator identifies that current international law frameworks have not adequately addressed the challenge of determining accountability when there are blurred lines between state and corporate actions. This creates situations where it’s unclear who should be held responsible for certain digital violations or abuses.


Major discussion point

Corporate Accountability and Platform Responsibility


Topics

Legal and regulatory | Human rights


Children face disproportionate impact from online harm requiring specific legal protections

Explanation

The moderator emphasizes that children represent a specific group whose lives are increasingly shaped by the digital environment and who face disproportionate impacts from online harm. This requires examination of specific obligations imposed on states and corporations for protecting children in online spaces.


Evidence

Recognition that children’s rights online are clearly established under international law but implementation of specific obligations remains challenging


Major discussion point

Children’s Rights in Digital Environment


Topics

Human rights | Cybersecurity


Platform accountability challenges arise in balancing corporate responsibility with user rights protection

Explanation

The moderator highlights the complex challenge of holding platforms accountable for their role in facilitating harmful content while simultaneously avoiding disproportionate restrictions on human rights of online users. This requires striking a careful balance between accountability measures and rights protection.


Evidence

Reference to recent rulings in Brazil as examples of attempts to address platform accountability


Major discussion point

Corporate Accountability and Platform Responsibility


Topics

Human rights | Legal and regulatory


Agreements

Agreement points

International law applies to cyberspace but implementation remains challenging

Speakers

– Chantal Joris
– Nieves Molina
– Francisco Brito Cruz

Arguments

International law applies to cyberspace but practical implementation remains controversial


Crimes enabled by digital technology are treated with exceptionalism despite existing legal frameworks


Both inaction and action can produce violations, making navigation complex


Summary

All speakers agree that while international law clearly applies to digital spaces, translating this into practical implementation faces significant challenges due to the complexity of digital governance and the rapid pace of technological change.


Topics

Legal and regulatory | Human rights | Cybersecurity


Fragmentation of legal frameworks creates protection gaps and accountability challenges

Speakers

– Chantal Joris
– Nieves Molina
– Sanhawan Srisod

Arguments

Different legal frameworks (IHL, ICL, human rights law) address similar issues separately, creating fragmentation


Need for harmonization to avoid protection gaps and forum shopping


International law framework faces fragmentation challenges with different bodies addressing similar issues separately


Summary

There is strong consensus that the current fragmented approach to international law in digital spaces, where different legal specialties operate separately, creates significant gaps in protection and accountability mechanisms.


Topics

Legal and regulatory | Human rights


Corporate accountability requires moving beyond voluntary commitments to concrete methodologies

Speakers

– Tiesa Meccrewfy
– Francisco Brito Cruz
– Audience

Arguments

Tech companies have obligations under UN Guiding Principles on Business and Human Rights to conduct due diligence


Need to translate corporate responsibility into platform accountability through specific methodologies


Need for binding frameworks like global treaty on tech accountability beyond voluntary commitments


Summary

Speakers agree that while corporate obligations exist under current frameworks like the UN Guiding Principles, there is a critical need to develop concrete methodologies and potentially binding frameworks to ensure effective platform accountability.


Topics

Human rights | Legal and regulatory


Technology advances faster than legal frameworks can adapt

Speakers

– Chantal Joris
– Nieves Molina
– Audience

Arguments

Technology advances much quicker than ability of international bodies and domestic parliaments to create good rules


Technology may advance to the point where freedom of thought takes different connotation as thoughts become reachable


Question whether separate legislation for digital selves is needed given exponential rate of new technology deployment


Summary

There is consensus that the rapid pace of technological advancement significantly outpaces the ability of legal and regulatory systems to develop appropriate responses, creating ongoing challenges for digital governance.


Topics

Legal and regulatory | Human rights | Cybersecurity


Similar viewpoints

Both speakers recognize that current political realities make new comprehensive international treaties unlikely, requiring focus on interpreting existing law while acknowledging the practical difficulties of implementation without adequate institutional capacity.

Speakers

– Chantal Joris
– Francisco Brito Cruz

Arguments

Not in the golden age of treaty making, so must rely on existing international law rules and interpretation


Building platform accountability is difficult without state capacities or proper regulation


Topics

Legal and regulatory | Human rights


Both highlight how content moderation is being weaponized to suppress marginalized voices and dissent while allowing harmful content to proliferate, demonstrating systematic bias in platform governance.

Speakers

– Tiesa Meccrewfy
– Audience

Arguments

Systematic censorship and discriminatory content moderation policies suppress Palestinian voices and undermine digital rights


States use content moderation to silence dissent while failing to stop harmful content that fuels violence


Topics

Human rights | Cybersecurity


Both speakers emphasize the complex interconnection between state and corporate actors in digital spaces and the need for more transparent, participatory approaches to accountability that involve multiple stakeholders.

Speakers

– Nieves Molina
– Francisco Brito Cruz

Arguments

Close relationship between state and corporations creates blurred reality regarding levels of responsibility


Need for stakeholder engagement, participation, and transparency tools in human rights due diligence processes


Topics

Legal and regulatory | Human rights


Unexpected consensus

Children’s agency in digital spaces should be respected rather than imposing total protection

Speakers

– Mikiko Otani
– Audience

Arguments

Children want to use digital spaces safely rather than be totally protected or excluded


Digital users’ daily experiences involve complex domains like political infrastructure, behavioral psychology, and economic power


Explanation

There is unexpected consensus that rather than completely protecting children from digital spaces, the focus should be on enabling safe participation. This challenges traditional protective approaches and recognizes children’s agency while acknowledging the complex manipulative systems they navigate.


Topics

Human rights | Sociocultural


International criminal law may need to expand to include corporate entities

Speakers

– Nieves Molina
– Sanhawan Srisod

Arguments

Some scholars are calling for international criminal responsibility of companies and for expanding the ICC Rome Statute to include legal personalities


Accountability gaps exist between state and corporate responsibility in digital spaces


Explanation

There is emerging consensus on the potentially radical idea of extending international criminal responsibility to corporations, particularly in cases involving digital-enabled atrocity crimes. This represents a significant departure from traditional individual-focused international criminal law.


Topics

Legal and regulatory | Cybersecurity


Overall assessment

Summary

The speakers demonstrate strong consensus on the fundamental challenges facing international law in digital spaces: fragmentation of legal frameworks, the gap between technological advancement and legal adaptation, the need for concrete corporate accountability mechanisms, and the complexity of balancing protection with rights preservation. There is also agreement on the inadequacy of current voluntary approaches and the need for more systematic, coordinated responses.


Consensus level

High level of consensus on problem identification and challenges, with emerging agreement on some innovative solutions like expanding international criminal law to corporations and respecting children’s agency in digital spaces. The consensus suggests a mature understanding of the issues but highlights the urgent need for coordinated international action to address the identified gaps and fragmentation in digital governance.


Differences

Different viewpoints

Approach to new legislation vs. existing law interpretation

Speakers

– Chantal Joris
– Nieves Molina
– Francisco Brito Cruz
– Audience

Arguments

Not in the golden age of treaty making, so must rely on existing international law rules and interpretation


Reactive legislation often not based on human rights creates disconnect between international obligations and domestic implementation


Crimes enabled by digital technology are treated with exceptionalism despite existing legal frameworks


Need for binding frameworks like global treaty on tech accountability beyond voluntary commitments


Summary

Speakers disagreed on whether to focus on interpreting existing international law or creating new binding frameworks. Chantal and Nieves emphasized working with existing laws due to challenges in creating new treaties, while audience members called for new binding frameworks like global treaties on tech accountability.


Topics

Legal and regulatory | Human rights


Unexpected differences

Role of state capacity in platform accountability

Speakers

– Francisco Brito Cruz
– Chantal Joris

Arguments

Building platform accountability is difficult without state capacities or proper regulation


Reactive legislation often not based on human rights creates disconnect between international obligations and domestic implementation


Explanation

While both speakers acknowledged challenges in platform accountability, Francisco emphasized the necessity of state capacity and regulation (citing Brazil’s experience), while Chantal warned against reactive state legislation that violates human rights. This created an unexpected tension between the need for state intervention and concerns about state overreach.


Topics

Legal and regulatory | Economic | Human rights


Overall assessment

Summary

The main areas of disagreement centered on approaches to legal frameworks (new vs. existing), the role of state intervention in platform accountability, and the balance between protection and access in digital rights. However, there was broad consensus on core problems: fragmentation of legal approaches, accountability gaps, and the complexity of digital governance.


Disagreement level

The level of disagreement was moderate and primarily methodological rather than fundamental. Speakers shared common goals of protecting human rights and ensuring accountability in digital spaces, but differed on implementation strategies. This suggests that while there are different approaches being pursued, there is potential for convergence around shared principles and coordinated efforts.




Takeaways

Key takeaways

International law applies to cyberspace but practical implementation remains fragmented across different legal frameworks (international human rights law, international humanitarian law, international criminal law)


There is broad consensus that existing international law provides regulatory framework for digital spaces, but harmonization between different bodies of law is lacking, creating protection gaps


Corporate accountability under UN Guiding Principles on Business and Human Rights exists but translating this into effective platform accountability requires specific methodologies, transparency, and stronger enforcement mechanisms


Digital rights violations disproportionately affect marginalized communities, with systematic censorship and discriminatory content moderation suppressing voices during conflicts


Children’s rights in digital environments require comprehensive protection considering their evolving capacities, with emphasis on safe access rather than exclusion from digital spaces


Technology advances faster than legal frameworks can adapt, creating accountability gaps where both state and corporate actors can evade responsibility


The blurred relationship between states and corporations in digital governance complicates attribution of responsibility and enables forum shopping for favorable regulations


Resolutions and action items

Need for more coordinated efforts to avoid fragmentation of international law responses in cyberspace


States and businesses should integrate child rights impact assessment in design, development, and operation of digital products and services


Requirement for human rights due diligence processes that include stakeholder engagement, participation, and transparency tools for civil society


Development of specific methodologies to translate corporate responsibility principles into operational platform accountability measures


Need for monitoring and evaluation mechanisms to assess how digital legislation and regulations are performing in practice


Unresolved issues

How to effectively harmonize different bodies of international law (human rights, humanitarian, criminal) when they address similar digital issues separately


Who is responsible for protecting civilians in digital spaces when states themselves contribute to erosion of online civic space


How to balance platform accountability with protection of user rights without creating disproportionate restrictions


Whether separate comprehensive legislation for digital rights is needed or if existing international law frameworks are sufficient


How to enforce state obligations to prevent malicious cyber activity from their territory without infringing sovereignty


What binding frameworks beyond voluntary commitments could ensure equitable digital governance


How to address the exponential rate of new technology deployment that attempts to access and profile users’ thoughts and cognition


How to ensure remedies and access to justice for victims of digital rights violations across fragmented legal systems


Suggested compromises

Focus on operationalizing and clarifying existing international law rather than creating entirely new treaty frameworks, given the current challenges in international treaty-making


Adopt iterative approach of ‘intervention and testing’ for digital legislation, acknowledging that some regulatory innovations may fail and require adjustment


Use UN Guiding Principles on Business and Human Rights as an operative layer to navigate tensions between different international law frameworks


Emphasize consultation processes and hearing diverse voices (including children, marginalized communities, regional perspectives) when developing digital governance approaches


Balance children’s desire for safe digital access rather than complete protection or exclusion from digital spaces


Develop transparency requirements for both state and corporate actions in digital spaces to enable proper scrutiny of measures taken


Thought provoking comments

Those initiatives are very relevant, but I think there could be more effort probably to avoid the fragmentation of the responses to how international law applies to cyberspace and to harmonize it more, to avoid a protection gap… from the perspective of the impacted communities and the rights holders who might be harmed by those types of information operations, it is not that relevant whether it’s humanitarian law or human rights law and which one is lex specialis.

Speaker

Chantal Joris


Reason

This comment is deeply insightful because it shifts the focus from academic legal distinctions to the practical reality faced by victims. It highlights a fundamental problem in international law – that legal fragmentation creates protection gaps for those who need help most. The observation that victims don’t care about which legal framework applies, they just need protection, is both profound and practical.


Impact

This comment established the central theme of the entire discussion – fragmentation as a barrier to justice. It set up the framework that subsequent speakers built upon, with each panelist addressing different aspects of this fragmentation problem. It moved the conversation from theoretical legal analysis to victim-centered practical concerns.


There is this paradox: while most legal scholars think that international law has most of the principles that would help us to provide remedies or regulation, there is a certain exceptionalism in producing new legislation, attempting to create new laws that take time to create but at the same time create a danger of fragmentation of the law.

Speaker

Nieves Molina


Reason

This observation reveals a critical paradox in how the international community responds to digital challenges. It’s thought-provoking because it suggests that the very attempt to solve problems through new legislation may be creating bigger problems through fragmentation. The concept of ‘exceptionalism’ in treating digital crimes differently is particularly insightful.


Impact

This comment deepened the discussion by introducing the paradox that efforts to create solutions might be creating new problems. It influenced the conversation by making participants question whether new legislation is always the answer, and it connected to later discussions about the challenges of treaty-making in the current global context.


Children are living in a digital world; however, online and offline are seamless. It’s not so easy to differentiate what is online and what is offline for children… children want to use the digital space. So they claim to us, the committee, that they don’t want to be totally protected or excluded from the digital space, but they want to use the online space safely.

Speaker

Mikiko Otani


Reason

This comment is profoundly insightful because it challenges the traditional binary thinking about online/offline spaces and protection/access. The revelation that children themselves reject overprotection in favor of safe access represents a sophisticated understanding of digital rights that many adults lack. It reframes the entire approach to digital protection.


Impact

This comment shifted the discussion from a paternalistic approach to digital protection to one that recognizes agency and voice of affected communities. It influenced how other participants thought about balancing protection with access, and it reinforced the theme that those affected by digital harms should be centered in policy discussions.


Inaction can produce violations, but action can produce violations as well.

Speaker

Francisco Brito Cruz


Reason

This seemingly simple observation captures one of the most complex challenges in digital governance. It’s thought-provoking because it acknowledges that there are no easy solutions – both regulating and not regulating can cause harm. This paradox is at the heart of many digital policy dilemmas.


Impact

This comment introduced a crucial nuance to the discussion about accountability and regulation. It prevented the conversation from becoming overly simplistic about solutions and forced participants to grapple with the complexity of digital governance. It influenced later discussions about the need for careful, monitored approaches to regulation.


In the middle of all this information overload, whether or not international law and the content of human rights have also been a victim of, or targeted by, misinformation campaigns, relativizing international law and questioning its usefulness.

Speaker

Nieves Molina


Reason

This is a meta-level insight that’s particularly thought-provoking because it suggests that the very framework being discussed (international law) may itself be under attack through digital means. It raises the possibility that misinformation campaigns are deliberately undermining faith in international legal frameworks, creating a recursive problem.


Impact

This comment added a new dimension to the discussion by suggesting that the challenges aren’t just about applying international law to digital spaces, but about protecting international law itself from digital attacks. It introduced the concept that the erosion of trust in international law might be a deliberate strategy, adding urgency to the discussion.


Overall assessment

These key comments fundamentally shaped the discussion by establishing fragmentation as the central challenge, introducing crucial paradoxes about regulation and protection, and centering the voices of affected communities. The conversation evolved from a technical legal discussion to a more nuanced exploration of the tensions between protection and access, action and inaction, and the need for victim-centered approaches. The comments collectively moved the discussion away from seeking simple solutions toward acknowledging complexity and the need for holistic, coordinated responses. Most importantly, they established that the current fragmented approach to international law in digital spaces is failing those it’s meant to protect, creating a compelling case for more integrated, community-centered approaches to digital governance.


Follow-up questions

How can different bodies of international law (human rights law, humanitarian law, criminal law) be better harmonized to avoid fragmentation and protection gaps in digital spaces?

Speaker

Chantal Joris


Explanation

This addresses the core challenge of legal fragmentation where the same conduct (like incitement to violence) is covered by multiple legal frameworks but interpreted separately, creating confusion and potential gaps in protection


How do we operationalize and clarify what each actor’s obligations are across different legal frameworks when dealing with digital harms?

Speaker

Chantal Joris


Explanation

There’s a need for practical guidance on how to apply overlapping legal obligations from different international law frameworks in real-world digital scenarios


How can we build effective methods for human rights due diligence that go beyond general principles to create specific, measurable corporate accountability?

Speaker

Francisco Brito Cruz


Explanation

While principles exist, there’s a gap in translating them into concrete methods that can be monitored and enforced against tech companies


Should international criminal responsibility be expanded to include legal personalities (corporations) under frameworks like the ICC-Rome Statute?

Speaker

Nieves Molina


Explanation

Given the blurred lines between state and corporate responsibility in digital harms, there’s growing discussion about whether companies should face international criminal liability


Has international law itself become a victim of misinformation campaigns that relativize its importance and create accountability gaps?

Speaker

Nieves Molina


Explanation

This explores whether disinformation efforts are deliberately undermining trust in international legal frameworks to create spaces for impunity


When states themselves contribute to online civic space erosion, who is responsible for protecting civilians in digital spaces?

Speaker

Audience member (Ulvia)


Explanation

This addresses the accountability vacuum when the primary duty-bearer (the state) is itself the violator of digital rights


How can international law and domestic platform regulation work together rather than conflict, especially when states can also violate human rights?

Speaker

Audience member


Explanation

There’s tension between international obligations and domestic regulatory approaches that need to be resolved for effective platform accountability


Are separate legislative frameworks needed for digital rights that holistically address the interdisciplinary nature of digital harms, including protection of cognition and thought?

Speaker

Audience member (Ana Galate)


Explanation

Current frameworks may be inadequate for emerging technologies that can access and manipulate human thoughts and cognitive processes


What binding frameworks beyond voluntary commitments could ensure equitable digital governance, such as a global treaty on tech accountability?

Speaker

Audience member (Christian Fazili)


Explanation

There’s a question about whether voluntary corporate responsibility frameworks are sufficient or if binding international treaties are needed


To what extent are states obligated to prevent malicious cyber activity originating from their territory, and how can this be enforced without infringing sovereignty?

Speaker

Audience member (Christian Fazili)


Explanation

This addresses the balance between state sovereignty and international obligations to prevent cross-border digital harms


How can sovereignty and rule of law serve as paths for human rights protection rather than barriers in the digital context?

Speaker

Francisco Brito Cruz


Explanation

This explores how traditional concepts of sovereignty can be reframed to support rather than hinder digital rights protection


How can diverse voices and regional perspectives be systematically integrated into international law development for digital spaces to avoid fragmented approaches?

Speaker

Mikiko Otani


Explanation

There’s a need for inclusive processes that bring together different stakeholder groups and regional experiences in developing digital governance frameworks


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Open Forum #56 Shaping Africa’s Digital Future: A Forum on Data Governance

Session at a glance

Summary

This discussion focused on shaping Africa’s digital future through effective data governance, bringing together government officials, civil society representatives, academics, and private sector leaders to examine the challenges and opportunities in implementing data governance frameworks across the continent. The panel was moderated by Anriet Esterhuisen and featured participants including Sierra Leone’s Minister of Communications Salima Bah, Kenyan Ambassador Bitange Ndemo, and representatives from Research ICT Africa, CIPESA, and Meta.


Minister Bah emphasized the importance of political buy-in and strategic prioritization in moving from policy to practice, highlighting Sierra Leone’s journey over the past 25 years with significant progress in recent years through dedicated institutions and presidential support. Ambassador Ndemo stressed the need to demonstrate data’s value to citizens before implementing complex regulations, arguing that Africa should abandon overly restrictive frameworks and focus on showing practical benefits first. Wakabi Wairagala from CIPESA outlined the essential elements of rights-based data governance, including privacy protection, independent oversight mechanisms, and special attention to marginalized groups’ data rights.


Kojo Boakye from Meta discussed the challenges of operating across multiple African jurisdictions with varying data protection laws, advocating for greater harmonization through the African Union Data Policy Framework while warning against over-regulation based on lessons from other regions. Priya Chetty from Research ICT Africa highlighted critical implementation gaps, including fragmented legislation, weak enforcement mechanisms, and the need for greater user-centered approaches that ensure meaningful inclusion and data literacy.


The discussion revealed tensions between protection and innovation, with participants debating the balance between comprehensive data governance frameworks and practical implementation that delivers tangible benefits to African citizens and economies.


Key points

## Major Discussion Points:


– **Moving from Policy to Implementation**: A central theme focused on bridging the gap between data governance policies and their practical implementation, with Sierra Leone’s Minister highlighting the importance of political buy-in, strategic prioritization, and institutional capacity building as key success factors.


– **Rights-Based Data Governance and Digital Inclusion**: Discussion of how to ensure data governance frameworks protect privacy and human rights while promoting digital inclusion, with emphasis on the need for meaningful citizen participation, data literacy, and protection of marginalized groups’ rights.


– **Harmonization vs. National Sovereignty**: Debate over balancing the need for harmonized data governance frameworks across Africa (to facilitate cross-border trade and digital transformation) against individual countries’ sovereignty and specific national needs, with particular reference to the African Union Data Policy Framework.


– **Enforcement Challenges and Institutional Capacity**: Examination of weak enforcement mechanisms across African nations, despite having data protection laws, and the need for stronger regulatory oversight, adequate institutional mandates, and effective complaint mechanisms.


– **Multi-stakeholder Collaboration and Private Sector Engagement**: Discussion of the need for better collaboration between governments, civil society, and private sector (particularly big tech companies) in developing and implementing data governance frameworks, with calls for more meaningful participation in national policy processes.


## Overall Purpose:


The discussion aimed to examine Africa’s digital transformation journey through the lens of data governance, bringing together diverse stakeholders (government officials, civil society, private sector, and academia) to share insights on challenges and solutions for implementing effective, rights-based data governance that balances protection with innovation and economic development.


## Overall Tone:


The discussion maintained a constructive and collaborative tone throughout, characterized by mutual respect among panelists and a shared commitment to Africa’s digital development. While there were moments of gentle disagreement (particularly around harmonization approaches and regulatory frameworks), the conversation remained diplomatic and solution-oriented. The tone was notably optimistic about Africa’s potential as a leader in data governance innovation, with panelists expressing confidence in African-developed frameworks and encouraging continued partnership and investment in the continent’s digital future.


Speakers

**Speakers from the provided list:**


– **Moderator (Anriet Esterhuisen)** – Organiser of the African School on Internet Governance, past chair of the UN IGF MAG, associated with the Association for Progressive Communications


– **Salima Monorma Bah** – Honourable Minister of Communications, Technology and Information, Sierra Leone


– **Bitange Ndemo** – His Excellency Ambassador of Kenya to the European Union (based in Brussels), past collaborator, hosted IGF in Nairobi in 2011


– **Kojo Boakye** – Vice President, Public Policy for Africa at META, 19th IGF participant


– **Wakabi Wairagala** – Representative from CIPESA (Collaboration on International ICT Policy for East and Southern Africa), veteran of internet governance


– **Participant 1** – Role/expertise not clearly specified in transcript


– **Audience** – Various audience members asking questions


**Additional speakers:**


– **Priya Chetty** – Incoming Executive Director of Research ICT Africa (mentioned by moderator but appears to be speaking as “Participant 1” in transcript)


– **Kodjo Waji** – Vice President, Public Policy for Africa from META (mentioned by moderator, but appears the actual speaker was Kojo Boakye)


– **Guy Berger** – Convener of the African Alliance for Access to Data, working with the African Commission


– **Osei Keja** – From Ghana (audience member)


– **Amy** – From Nigeria (audience member)


– **Kuku** – Audience member working with African Union Commission, ISD division


Full session report

# Shaping Africa’s Digital Future: A Comprehensive Discussion on Data Governance Implementation


## Introduction and Context


This panel discussion, moderated by Anriet Esterhuisen from the Association for Progressive Communications, brought together a distinguished group of stakeholders to examine the critical challenges and opportunities in implementing data governance frameworks across Africa. The conversation featured high-level government officials, including Sierra Leone’s Minister of Communications Salima Monorma Bah and Kenyan Ambassador to the European Union Bitange Ndemo, alongside representatives from civil society, academia, and the private sector, including Wakabi Wairagala from CIPESA (Collaboration on International ICT Policy for East and Southern Africa), Priya Chetty, the incoming Executive Director of Research ICT Africa, and Kojo Boakye from Meta.


The discussion emerged against the backdrop of Africa’s rapidly evolving digital landscape, with the conversation aimed at moving beyond theoretical frameworks to examine practical pathways for effective data governance that balances protection with innovation whilst ensuring meaningful citizen participation.


## Government Perspectives: From Policy to Practice


### Sierra Leone’s Strategic Approach


Minister Salima Monorma Bah provided insights into Sierra Leone’s journey in data governance, emphasising the transformative shift in governmental attitudes towards data. She noted that data governance has evolved “from something that was an afterthought for a lot of African governments to now being just a central piece to what we’re trying to do.” This evolution reflects a broader recognition of data’s dual nature as both an economic asset and a sovereignty issue.


The Minister highlighted critical success factors for effective implementation: strategic prioritisation, political buy-in from the highest levels of government, and understanding both the economic value of data and data sovereignty issues. She stressed that moving from policy to practice requires dedicated institutions and sustained presidential support, drawing from Sierra Leone’s experience of significant progress in recent years through focused institutional development.


Addressing the challenges smaller economies face when dealing with large technology companies, Minister Bah acknowledged that power imbalances sometimes lead countries to resort to “more draconian approaches” in their regulatory responses. This highlighted the complex dynamics between national sovereignty and global technology governance.


### Ambassador Ndemo’s Perspective


Ambassador Bitange Ndemo presented a different perspective on Africa’s approach to data governance. He briefly suggested reconsidering the current African Union data protection framework, arguing that regulations were being implemented before fully understanding what data could do for people. He advocated for demonstrating data benefits to citizens through practical applications.


The Ambassador also stressed the fundamental importance of digital literacy and capacity building as prerequisites for maximising technology use and effective data management, arguing that without these foundations, regulatory frameworks remain less effective.


## Civil Society and Academic Perspectives


### Rights-Based Data Governance Framework


Wakabi Wairagala from CIPESA outlined elements of rights-based data governance, emphasising that data governance must uphold fundamental rights including privacy, access to information, and protection from surveillance. He highlighted the necessity of independent arbiters and remedial mechanisms to mediate disputes and enforce penalties for data rights violations.


Wairagala also emphasised the importance of protecting data rights of minorities and marginalised groups, ensuring that vulnerable populations receive specific attention in governance frameworks. He called for meaningful spaces for citizen participation and awareness-building about data rights.


Addressing harmonisation challenges, Wairagala suggested that working at regional economic community levels, such as through the East African Community, could facilitate harmonisation and domestication of African Union frameworks more effectively than attempting continent-wide coordination immediately.


### Research and Implementation Gaps


The incoming Executive Director of Research ICT Africa brought a critical user-centred perspective to the discussion, challenging participants to consider whether current frameworks adequately emphasise the user perspective. She posed a fundamental question about whether sufficient emphasis was being placed on users across different aspects of digital policy, from digital public infrastructure to artificial intelligence and data governance.


She highlighted several critical implementation gaps, including fragmented legislation, weak enforcement mechanisms, and insufficient attention to user demands and inequalities. She emphasised the need for sustainable mechanisms for data inclusion that enable users to participate meaningfully in and draw value from data systems.


Her analysis revealed that current data governance frameworks insufficiently address structural inequalities and fail to build adequate user capacity for meaningful participation in the data economy. She called for evidence-based policy making that uses data as a tool for better governance.


## Private Sector Perspectives


### Meta’s Continental View


Kojo Boakye from Meta, attending his 19th IGF, provided insights from the private sector perspective, highlighting both opportunities and challenges in Africa’s data governance landscape. He noted that whilst Africa has made progress with national data protection laws, the lack of harmonisation between national policies creates operational complexities for companies operating across multiple jurisdictions.


Boakye identified opportunities in the African Union Data Policy Framework for governments to continue creating legislation that addresses national priorities whilst creating opportunities for harmonisation. However, he also noted the institutional complexity that arises when multiple organisations claim jurisdiction over data governance issues.


Addressing enforcement concerns, Boakye advocated for approaches that focus on improving data governance for users rather than revenue generation. He also emphasised the importance of thoughtful communication about artificial intelligence benefits and risks.


Drawing from global experiences, Boakye referenced discussions about regulatory approaches in other regions, encouraging African policymakers to learn from these experiences whilst developing their own frameworks.


## Key Areas of Agreement and Consensus


### Political Will and Strategic Prioritisation


There was strong consensus among participants that political will and strategic prioritisation represent essential foundations for effective data governance implementation. Both Minister Bah and the Research ICT Africa representative emphasised that successful data governance requires strong political commitment at the highest levels of government.


### African Union Framework Recognition


Despite some disagreements about implementation approaches, there was notable recognition of the African Union Data Policy Framework. The moderator emphasised its importance alongside the African Commission on Human and Peoples’ Rights resolution as significant African instruments for data governance.


### Enforcement Challenges


Participants demonstrated clear consensus that enforcement mechanisms remain weak across African data governance systems. Multiple speakers emphasised the need for stronger enforcement capacity and regulatory cooperation.


### User-Centred Approaches


Both Wakabi Wairagala and the Research ICT Africa representative agreed on the crucial importance of user-centred approaches and meaningful participation in data governance, emphasising the need for citizen participation and awareness-building about data rights.


## Audience Engagement and Broader Concerns


### Enforcement Capacity Questions


Osei Keja from Ghana raised critical questions about strengthening enforcement mechanisms, particularly given the weak enforcement capacity in many African nations. This intervention highlighted the persistent gap between policy development and practical implementation across the continent.


### Learning from Global Experiences


Amy from Nigeria asked how Africa could learn from the successes and mistakes of other nations in data governance, seeking to identify best practices that could be adapted for African contexts.


### Private Sector Engagement


An audience member from the African Union Commission raised questions about how technology organisations could be better engaged in country-level policy development processes rather than only participating in high-level international platforms.


## Additional Contributions


### Multi-Stakeholder Initiatives


Guy Berger highlighted the African Alliance for Access to Data (dataalliance.africa) and the African Commission’s Resolution 620, which calls for practical guidelines on data governance. These initiatives represent ongoing efforts to create collaborative frameworks that bring together diverse stakeholders in developing practical approaches to data governance.


## Emerging Themes and Future Directions


### Capacity Building and Digital Literacy


Multiple speakers emphasised the fundamental importance of capacity building and digital literacy as prerequisites for effective data governance. This emerged as a foundational requirement for people to effectively participate in and benefit from data governance systems.


### Balancing Protection and Innovation


A recurring theme throughout the discussion was the need to balance data protection with innovation and economic development. This balance emerged as particularly challenging for African economies seeking to attract investment whilst protecting citizen rights and maintaining data sovereignty.


### Harmonisation Strategies


The discussion revealed ongoing debates about the most effective approaches to harmonising data governance frameworks across Africa, with suggestions ranging from continental approaches to regional economic community-level coordination.


## Conclusion and Implications


This discussion revealed both progress and persistent challenges in Africa’s data governance journey. Whilst there is clear recognition of data governance as central to Africa’s digital transformation, different perspectives remain about implementation approaches, timing, and institutional frameworks.


The conversation highlighted various viewpoints on data governance across different stakeholder groups, from government officials emphasising political will and sovereignty concerns, to civil society advocates focusing on rights-based approaches, to private sector representatives highlighting operational challenges and harmonisation opportunities.


The emphasis on the African Union Data Policy Framework and related continental instruments suggests a foundation for continued collaboration, whilst the call for enhanced multi-stakeholder engagement, strengthened enforcement mechanisms, and user-centred approaches provides direction for future efforts.


The discussion demonstrated that Africa’s data governance conversation continues to evolve, with stakeholders working to address implementation challenges whilst balancing protection with innovation and ensuring meaningful citizen participation in the continent’s digital transformation.


Session transcript

Moderator: Good afternoon everyone, welcome to this open forum on Shaping Africa’s digital future and looking at that future through data governance. We have a fantastic panel, so I’m going to start, we’ve lost a little bit of time, so I’m going to quickly introduce this panel. Firstly, we have Honourable Salima Bah from Sierra Leone and we’re really, really pleased to have her. She’s the Sierra Leonean Minister of Communications, Technology and Information. On the other side of the panel, we have His Excellency Ambassador Bitange Ndemo, past collaborator and in fact something they didn’t put in your bio is that you once hosted the IGF in Nairobi in 2011. Currently Bitange is the Ambassador of Kenya to the European Union, so he’s based in Brussels. On my right is Priya Chetty, the incoming Executive Director of Research ICT Africa, which I think is Africa’s foremost think tank and I think in general producer of data evidence and analysis in policy and digital development in Africa. And then on my left here, I have Wakabi. Wakabi is from CIPESA, the Collaboration on International ICT Policy for East and Southern Africa, and also I think a veteran of internet governance and I think one of the organisations that has done just such an immense job in building the capacity of civil society in Africa in trying to be active or being active voices in policy. And then on the far left, we have Kojo Boakye, who is Vice President, Public Policy for Africa from META. And I think Kojo, this is your second IGF? Third?


Kojo Boakye: No, this is my 19th IGF.


Moderator: Your ninth?


Kojo Boakye: Nineteenth.


Moderator: Nineteenth?


Kojo Boakye: Yeah, I feel like I’ve participated.


Moderator: Were you in the youth caucus at the first one?


Kojo Boakye: No, I am a seventies baby.


Moderator: Well, welcome. It’s really good to have all of you here. And my name is Anriet Esterhuisen. I am the organiser of the African School on Internet Governance. I’m a past chair of the UN IGF MAG and associated with the Association for Progressive Communications. I think just really thanks to the organisers for this session, because data governance I think is indeed really one of those important cross-cutting topics that we need to engage at the IGF. And it’s particularly important at the moment for Africa, looking at where Africa is in its journey on digital transformation. There’s a rollout of digital IDs, national data exchanges, AI strategies and cross-border trade initiatives. For example, the implementation of the African continental free trade area. And all of these efforts hold promise for Africa’s development and for increased digital equality in Africa. But behind every digital system and innovation lies a very important critical foundation. In fact, I think two critical foundations. Firstly, data governance, effective, accountable, inclusive, development-oriented data governance. But then also digital inequality, which is the context that still tends to undermine the implementation of so many innovative and forward-looking digital transformation strategies on the continent. But data governance is not just about regulation. I think it’s about systems, coordination and institutions that ensure that data is managed in ways that are secure, transparent and trusted. And I think, just adding this, this is particularly relevant to the IGF. Trusted across stakeholder groups and by different types of actors. Trusted by private sector operators, by privacy advocates, by trade justice advocates, by governments and by the technical community as well. And it’s only really when you have this trust that digital transformation can be implemented, but work and achieve benefits for people. 
For governments, there really is a challenge at this point in terms of how quickly to move from policy to practice, how to build systems that are technically sound, and I think also systems that are future-proof, but at the same time systems that recognize that the realities of the context that those governments are trying to work in are still so characterized, so deeply characterized by digital exclusion at multiple levels, at the level of institutional capacity, people’s access, the capacity to have the devices and the skills that are needed to really participate and benefit from these systems. So this session is trying to bring together these quite diverse, different stakeholders to talk about this. They’re all involved in this in one way or another, and so they’re here to share their insights on what the challenges are, what the possible solutions are, what we’ve learned from the data governance implementation and framework development that’s taken place so far. And to start us, I’m going to ask Minister Salimah Bah from Sierra Leone to just give us a sense in terms of what Sierra Leone has done and the experience that you’ve gained over the last few years. What do you think, what are the critical factors, the variables that really make that difference between not effectively implemented policy and strategy or incoherent or non-harmonized policy and strategy to policy and strategy that actually makes that leap from theory, from ideas, from strategy to practice to implementation?


Salima Monorma Bah: Thank you, thank you so much for organizing this, and definitely to the organizers at GIZ and everybody else who’s participated. I think, as you said, this is such a really critical conversation to have, specifically when we look at Africa’s digital transformation journey and agenda and how data is the essential part of that. I think I was having a conversation with somebody, I was like, it went from something that was an afterthought, I think, for a lot of African governments to now, I think, being just a central piece to what we’re trying to do. And I think, well, one, specifically we understand when we talk about our digital transformation agenda, how issues such as data sovereignty and how our data is managed and what that looks like. I think, you know, now the understanding of how critical that is, I think it’s even more established. But then also, I think the other reality is the economic value when it comes to data as well. I think that is also part of it, that everybody’s getting to the understanding of there’s a huge economic value and we want to ensure that how that is exploited or how that is managed or leveraged or tapped into, we want to be part of those conversations as well. But definitely key points in terms of how do you move from policy and strategy to practice, I think one of the first for Sierra Leone, what has really worked, because this has been a journey that we’ve been on for maybe over 20, 25 years, but really we’ve only started seeing effective outputs from that, I think, for the past six, seven years. And I think it’s been as a result of really strategic prioritization. So for example, from the very top level, by His Excellency the President, we get to see that political buy-in and interest. 
For example, when the President came in for the very first time, he appointed a Chief Innovation Officer, he established a dedicated agency that focuses on technology and how do we implement, he’s included it in making technology a big five.


Bitange Ndemo: With these few things that I have mentioned here plus digital literacy, we can begin to maximize the use of this technology and also manage our data in a in a better way that can move us forward.


Moderator: Thanks, thanks a lot for that Bitanga. And I now want to turn to you Wacabi and you know SPSA has done such pioneering work in terms of digital rights and the protection of privacy and recognizing how that and data protection is an enabler of other rights. I mean for you, what does meaningful rights-based data governance look like in practice at this point in time? And I think particularly now that we’re no longer just talking about GDPR or privacy, data governance is now so much more than the protection of personal information. Thank you very much for joining us. I’m going to turn it over to you, Dr. Wairagala, to talk about how the ecosystem involves so many different types of data. So how do you feel we can effectively apply this rights-based approach to this evolving landscape of data governance?


Wakabi Wairagala: Thanks, Henriette. So, yeah, from the outset, as you know, we have a lot of data. And we have a lot of data that is relevant to our lives and our participation in society. So if the right is not promoted, then the appetite for citizens to partake of public services, for them to participate in public affairs, etc., will all be undermined. So how then do we have rights-based approaches? So, for example, we should ensure that data protection laws uphold the right to privacy and related rights like access to information, and protection from surveillance. On the other hand, we should ensure that our laws are aligned with the high-level instruments such as the Constitution, and that they are not limited to the right to privacy and access to information, and that they govern a lot of rights around the continent, as well as with constitutional guarantees in the individual countries. But Dr. Ndemo has also mentioned the issue of balancing protection with innovation. As much as we want to protect rights, we also need the use of data. So, we are not saying that it is specifically for the public interest. A lot of the time, those principles of data protection, which we keep talking about, but if we were to do a sort of assessment, many of those in the data protection system, we would say that it is not for the public interest, it is for the private interest, and it becomes our role to do assessments of countries to see whether they are actually living up to those principles, or they are not. The other element I would like to speak to is that of the independent arbiters and remedial mechanisms, and the Minister from Sierra Leone actually spoke about them. So, we are going to be able to have the right to be able to mediate, to issue guidance, to issue penalties, that, again, is an area in which we are going to be able to have the rights respected.
A final one is clearly acknowledging and instituting measures to ensure that data rights of minorities and marginalised groups are clearly upheld, which brings in the question of the right to be able to mediate, to issue penalty, to issue penalties, namely, that data should not be collected, should not be used in any way that undermines the rights of the marginalised groups. As I sum up, what are some of the elements that need to be done to ensure that citizen voices are heard? One of them is clearly having meaningful spaces for citizens to be able to express their views, and for those views to be taken into practice, but there is also the area of privacy and data rights are fairly new rights, not many people know their data rights, or how to express them, so creating awareness of the rights, creating awareness of how to protect yourself in digital environments, and in policy-making are areas also that can go somewhere in this regard.


Moderator: Yes, I was actually going to ask you, if you think people know what data rights are, and if the concept of data justice is one that is well understood, and I think it is challenging, particularly because data rights also spans different types of data, but absolutely, that’s, you know, that’s exactly the role that civil society should be doing. Kojo, let me move to you. As Meta has to operate in this space, how do you, from your experience of a company that operates in multiple jurisdictions, a very powerful company, a company that’s a very data-intensive data, so central to your business model, how do you find the policy and regulatory environment in Africa from the perspective of a business model? How do you find the policy and regulatory environment in Africa from the perspective of complexity, suitability, harmonisation, compliance, and accountability and transparency on your part?


Kojo Boakye: Great question. Big question as well. The first thing I’ll say is a huge thanks to the government of Norway for putting this conference on, having spent so much time at a number of them, most recently Japan and Saudi and others, I know what a big undertaking it is, but also a huge thanks to my fellow panellists. I told Dr Ndemo unashamedly that he mentored me from afar, certainly in my time with the Alliance for Affordable Internet and still with the Global Digital Inclusion Partnership, et cetera, and some of the changes he made to the policy environment in Kenya were ground-breaking in terms of how they accelerated access to the mobile, et cetera, and Wakabi and Priya who I worked with, Alison and many others, and I told the minister that I worked in Sierra Leone for a while, so huge respect for the panellists. In terms of the policy and regulatory environment in Africa itself, and most of my work across Africa, Middle East and Turkey is with sub-Saharan Africa, although I do look after North Africa as well, but I’ll probably focus on sub-Saharan, I see a lot of development. We have 13 national laws that speak to this. Some of the most well-known are obviously POPIA in South Africa, Nigeria’s Data Protection Act, Kenya’s Data Protection Act as well, which highlight how thoughtful governments are being about this particular issue. Part of your question, Anriet, spoke to the level of harmonisation we see and the impact it has on companies like ours, and I will say smaller companies, if you want to describe it that way, who seek not only to operate in their national jurisdictions but also access African markets abroad, and I’m not going to go on about the Africa Continental Free Trade Agreement and the promise that it holds. What I’m probably seeing less of is harmonisation between those national policies.
Yes, there are similarities, but I think if you think about companies that are seeking, both big and small, that are seeking to take advantage of a one billion person opportunity on the continent itself, 1.3 billion if you want to look at North Africa and Sub-Saharan Africa, and growing and getting younger, some level of harmonisation is needed. I think the AU data policy framework offers a huge opportunity not only for governments to continue creating legislation that speaks to their national priorities, whatever those may be, but also creates opportunities to harmonise those frameworks and let organisations, whether they be purely private sector or public sector or NGO, CSO organisations, work across, have some certainty and predictability about how data can flow across the continent and travel. I just want to make a couple of quick comments on some of the great interventions from colleagues. My learned colleague from Sierra Leone, whom I’ve just met and has already impressed me with her policy stance, spoke about the creation of institutions. You asked, Ariane, I just want to link this to the question, you asked about how companies operate. I will say I would love to know more about the kind of institutions being made, because in our experience, sometimes when CODJO is asked to have a call with country X, I’m speaking to this head of an organisation that’s just been created, I’m speaking to the current DG of the regulator, who has been there for some time and the organisation has been there for some time, I’m speaking to the digital development organisation as well, all who understandably believe they have jurisdiction over this growing and important area. And sometimes, not in Sierra Leone, but in some countries, you get a complexity that creates challenges between the institutions themselves and the individual players, as well as for companies both large and small as well. So I do think that people should think carefully about that. 
I also thought Dr Ndemo’s intervention about the lessons learned was important. How are we communicating AI to people? Are we telling them that there’s a huge opportunity that they can feed into, that governments can feed into, that commercial companies can feed into to help improve their lives, as Dr Ndemo said we could have done in the past? Or are we suggesting that big companies like Meta, like Google, like Microsoft, or governments with nefarious intent, are going to use AI for whatever the nefarious purposes may be? I do think the communication about the promise of AI, the benefits as well as the risks, needs to be really thoughtfully approached and delivered to the people who will benefit most from it. I hope that answers your question a bit.


Moderator: No, you absolutely did. Thanks very much for that, Kojo. And I just want to emphasize: Kojo talked about the African Union Data Policy Framework. For those of you who are not from Africa, I really urge you to have a look at it. In fact, there are people around this table who helped develop it, and GIZ played a very supportive role. But it’s an African instrument that has been adopted by African states. And it’s a framework that I think very effectively balances the protection of data with the use of data, the making available and sharing of data for economic, social and public-interest benefits. And then there’s another African instrument. I’m going to let you speak, don’t worry. I know you. There’s a resolution that the African Commission on Human and Peoples’ Rights, an African body, passed last year. It develops soft norms, but they’re very powerful norms. And it passed a resolution on access to data and the role that access to data plays in realizing the right to access to information. Again, a very powerful, forward-looking, human rights-oriented instrument that looks at the use, the protection and the sharing of data. Kojo, you wanted to have a quick response.


Kojo Boakye: I want to be super quick, Ariane. Thank you so much for the additional time. The one other thing I would say about the framework, as a historian of Africa, and I’ve spoken about my respect for the panelists, but my heroes include Kwame Nkrumah and many of the post-imperial, decolonization heroes of our time, is that it also offers an opportunity for us to create a harmonized policy framework that is bespoke for Africa. And I say bespoke for Africa because I know data is a global thing, but one that fits not only our national needs but also our continental need, and helps us assume that position. I quickly did a search on Mario Draghi’s report, which I have read in full, but I asked Meta.ai, and you can use ChatGPT or DeepSeek if you like, what the main findings of the Draghi report were. I won’t read it all, but the key finding is that Europe has fallen behind other regions: lack of investment, a small number of big tech players, I’ll say diplomatically, or no big tech players, less diplomatically, and many other issues that have happened because of, in short, over-regulation. Okay, let’s get back to… No, there’s a pushback because…


Moderator: I’m going to push back too.


Kojo Boakye: You can, because there is…


Moderator: We’re not going to talk about European regulation.


Kojo Boakye: No, no, I’m not going to talk about that. I’m talking about the lessons learned from that and the lessons we can take as a continent from that. And for those of you who are interested, go and read the report. It’s not me saying it; it’s one of Europe’s foremost politicians who has said that lessons could be learned, and I would encourage our continent and others to learn from that.


Moderator: I’m teasing Kojo, because he’s from Meta and I think Meta deserves… I’m from Ghana. But that’s true. No, but I think your point about the data policy framework, in the context of data flows that are still very extractive, you’re absolutely correct about the power of the data policy framework. Priya, in Research ICT Africa’s experience, you’ve been working with governments in implementing and domesticating the principles in the African Union data policy framework. You also work with data and evidence-based policy as a tool. What do you see as the most common critical gaps in implementation? What are governments really struggling with? Is there a multi-stakeholder dimension to this that we might not have touched on enough? What do you think are the issues we really need to focus on if we want to utilize the power of data in a rights-oriented and development-oriented way?


Participant 1: Thank you so much, Chair, and thank you for the opportunity to be here today and speak on this panel. I love coming in at this point because I think that people have raised different gaps and challenges already, and this correlates very much with what we are seeing in our work. In our work, we work at the national level, we work at the institutional level, we work at the regional level, and more recently, we have worked as a knowledge partner to the G20 as well. In our work at the moment, data governance has evolved from a standalone concept to how it’s integrated into concepts such as DPI, the extent to which it can be used as a tool for AI transparency and accountability, and more recently, we’ve become a little bit obsessed with the data value chain and participation and agency and inclusion for all of society, including micro, small enterprises as well, in having access to some of those data benefits. So people have spoken about the different gaps and challenges in terms of the fragmented legislation, not necessarily speaking to each other, and this is a global phenomenon as well. It’s a very dynamic field and definitions of data are changing by the day, in fact, and we cannot resolve on a single definition, and even when we speak about the data value chain, it’s so dynamic, it’s moving very, very fast, and in fact, it’s in some ways unfair to us, the policy, to keep up with the pace of innovation. 
People have spoken about the weaknesses in the enforcement mechanisms and in the regulatory oversight models that we have, and about coherence and alignment in what we now speak of as a data ecosystem, where we realize everyone has a role to play: the different government departments, science, tech and innovation coming in, statistics being called upon to play a role. So coherence in the data ecosystem as well. And someone raised political will, and we have all had experiences with that. Investment, money and being resourced is one challenge in making these kinds of institutional shifts and capacitating the different institutions, but on the other hand you have to have real political will and commitment to see it through. And of course we remain with very structural inequalities, so when we speak as the global south, from a continental perspective, we stress that we need meaningful inclusion, and that means we also need the data literacy and digital skills that enable us to participate in the opportunities we are speaking about. The work we have been doing is in that evolution of the data governance framework: what do we want to see from a regulation perspective and an institutional perspective? And I don’t want to box us into an African perspective, because, as Ariane mentioned, we have a world-class African Union data policy framework, which is being consulted across the world as a novel approach to data governance, because it introduces novel principles on how we can bring more equity and agency for everyone into the benefits of the data economy. So what are the things we are looking for as we make these shifts? 
For those who work with us, our partners, we always speak about our situational analysis, where we want to understand the political economy and the kind of infrastructure set-up, and where we stand before we work on data governance. We also look at the rights framework: we want to know if there is a right to privacy and a right to access information. We also want to know if there is something that can be interpreted as a right to meaningful connectivity, or rights to benefit from the data value chain. We try to improve the adequacy of the privacy frameworks and the access to information frameworks, and more recently we have been placing emphasis on the access to information frameworks: how do we leverage the proactive and mandatory disclosures so that we start to open up data and have it available to those that need to use it? We also want to try to improve the adequacy of the enforcement mechanisms by looking at the mandates of the institutions and of the regulators. If we look at the consumer protection regulators, the information regulators, the ministries, the institutional mandates to carry out standards building: are the mandates sufficient to really give legitimacy to what they are going to input into the ecosystem? There is no point in developing any of these if it doesn’t have legitimacy coming from the institutional mandates, so there is often a need to change and expand the mandates. But what I wanted to place focus on here, which we haven’t spoken about, is the civil society perspective: in all the work we are doing, are we really putting emphasis on the user? Whether we’re speaking about DPI, AI or the data governance that informs those, are we putting the emphasis on the user? Have we considered their demands? 
Have we considered their inequalities, their baseline of where they’re coming from? And can we look at these different mechanisms, the laws and the policies and the institutional frameworks, and can we use that as a means of coalescing to build the user’s capacity to participate and draw value? And that, I think, is the most fascinating part of our work at the moment, is how do we build sustainable mechanisms for that data inclusion? It’s something personally that we’re very excited about.


Moderator: And do you feel that’s something we’re not doing enough of right now?


Participant 1: We’re not doing it. And if I even think about the conversations we’re having here, I would challenge us to ask: are we putting enough emphasis on that?


Moderator: Thanks. I know the panel want to respond, but we only have 10 minutes left. So before you respond, are there any questions online, Joshua? Are there any questions in the room? If you want to ask a question, go and stand by a microphone. Nothing online. Perfect. Please, you have to be super, super brief. Your name and your question. And make it a question, please. It’s active. You can just speak.


Audience: Okay. So mine is a comment related to Kojo’s submission. Currently, the African Union Commission, the ISD division to be precise, is working with African countries to implement the African Union Data Policy Framework, which GIZ is supporting. And the aim is harmonization. But there are 54, or some say 55, countries on the continent, so we should accept that this is not going to be a very easy task. The harmonization issue is on the agenda. But this is just a call to action for Kojo and other big tech organizations. What we have realized so far in implementing the African Union Data Policy Framework is that big tech usually tends to shy away from in-country processes, and I know it’s difficult; they are more interested in platforms like this one. I think where we need to do a bit of work is in the in-country processes, so that they support the policy development and the development of the data ecosystem. Thank you.


Moderator: Thanks. And sorry to cut you short, but we have seven minutes left. Please, quickly, and then we’ll take that last point.


Audience: Thank you to the panel, and thank you to the moderator for mentioning the African Commission on Human and Peoples’ Rights Resolution 620. Please take a look at it: Resolution 620, African Commission on Human and Peoples’ Rights. My name is Guy Berger. I’m convener of the African Alliance for Access to Data, which is working with the African Commission to develop guidelines on access to data following Resolution 620. So if you’re interested in the alliance and in shaping these guidelines on access to data, public sector data and private sector data, please visit dataalliance.africa. Thank you.


Moderator: Thanks for that, Guy. Please be quick.


Audience: Hello, everyone. I’m Osei Keja from Ghana. My question: many African nations do have data protection laws, largely modeled after the GDPR, yet enforcement capacity has been notoriously weak. What critical strategies or mechanisms can be put in place to make enforcement work? Thank you very much.


Moderator: Thanks very much. And the last question.


Audience: Hi, I’m Amy from Nigeria, and my question is how can we learn from the successes and mistakes from other nations who are currently trying to improve data governance so that we can actually make some successful data governance changes within Africa ourselves?


Moderator: Thanks very much for that. And Kojo, that’s exactly the opening you were looking for earlier. So, panel, we’ve got one minute each left. Kojo mentioned AI earlier; I’d like you to bring that in as you respond. Think about the questions. Think about what you don’t want to see as we continue on this journey, and what you do want to see. I am going to start over here with Emmanuel Manasseh, and then we’ll move all the way along, and we’re going to finish with the Minister.


Bitange Ndemo: Thank you. I’ll be very clear. What I want to see is for us to abandon the African Union data protection law and let each country go its own way. You mean the Malabo Convention? Yes. And if you recall, before we developed this, we had the Open Government Partnership, which had actually dealt with a lot of the management of resources and transparency. Once we begin to show what data can do, then we can put data regulations in place. Now we have put data regulation in place before we even know what data will do for the people. At the African level, it’s not doing anything. Nobody is implementing it. It can’t be implemented. It’s just something we keep waving around to say that we have it.


Moderator: Thanks, Bitange. Priya?


Participant 1: In closing, I’ll also respond quickly to some of the comments. The first is that we do have to continue the support for developing data governance frameworks at the national level, including in the detail of how institutions evolve. The second is that I love the civil society coalition, the African Alliance, because we must develop these standards from that perspective, with those needs in mind, and this complements what can be done at the government and regional levels. And the third, to respond on regulatory oversight mechanisms: one, we can practically strengthen the complaint mechanisms we have at the regulators; and two, we have had a lot of traction with regulatory cooperation, looking at how competition regulators interact with information regulators, so that you close the loop on different gaps and on where some of the challenges might be. So those are my closing shots and a quick response.


Moderator: Thank you, Priya. Wakabi?


Wakabi Wairagala: Thank you. We are working with GIZ to help the East African Community come up with a harmonized law on data governance. We’ve been doing a lot of trainings and so on, and we think that working at the level of regional economic communities can be one way in which countries can domesticate the African Union data policy framework. There are a lot of problems, different maturity levels, different laws, but we think harmonization can happen, especially in areas where there is already an economic community, so it can deepen the free flow of data.


Moderator: Thank you very much, Wakabi. One minute, Kojo.


Kojo Boakye: Quickly closing and responding to comments. Kuku, I think we’ve met; nice to meet you here. On the call to action: I would hope we’re already doing that. I think Meta has one of the larger policy teams, certainly in Africa. Each of the members, I’m proud to say, was born in the region, or their parents were, and they come from various backgrounds: government, private sector, civil society. And when we receive calls to action, we are responding to them. We want to be involved. For Meta’s benefit, and for the benefit of the continent and of users, we want to be involved in contributing to policy and regulatory development. So we’re happy to work on this particular process as well, especially if it lends itself to harmonisation. On my friend Osei’s comment about enforcement, he asked, in short, how do we enforce better? I think enforcement should take into consideration the aims and what you’re trying to do to improve data governance on behalf of users, as it always should; in my personal experience, that should be the primary reason for enforcement. And we should be thinking less about revenue generation. There are some excellent governments who aren’t thinking about that at all, but some governments are thinking about enforcement, in part, as revenue generation, and I think we need to be careful of that. The last thing I’ll say is on learning from others. The lady asked about learning from others; let’s do that. I’ve encouraged people to read the Mario Draghi report, which talks of fragmented regulation and over-regulation and the consequences of that. I would also encourage us to be even more confident in ourselves about developing policy and regulation. Ariane has spoken to how the world is now looking at the data policy framework; I would encourage us to build on that. And talk to each other, is my last point. 
I know the DPAs and regulators across the region talk to each other, but talk more broadly as well. Talk to India, to Brazil, to other places, to think about how they’re doing it. And seek to talk to us as part of a call to action, because I think tech wants to be involved. Sorry to take so long, Ariane.


Moderator: Thanks a lot, Kojo. Salima?


Salima Monorma Bah: Yes, thank you. Thank you very much. As a closing response, I think the question asked by the participant from Ghana raises such a critical conversation, and I assure you that at the very highest level within government, and across governments, that is something we are discussing: how do we really enforce better? I like what you mentioned about fear, and about how maybe each country needs to look at what it needs in terms of data protection; sometimes the harmonization doesn’t really take effect. I do think the conversation about enforcement could be a separate panel on its own, and a panel with the private sector. Part of it, I know you mentioned, is for some governments an issue of revenue. Maybe for some, but I assure you that a large part of it is sometimes an issue of: it doesn’t seem as if I can bring them to the table any other way. Especially when you’re dealing with countries of a smaller economic size going up against big tech, sometimes there’s a gap in getting what I want, so maybe I’m going to go the most draconian route I can. This is where big tech really needs to come to the table to see how we solve that. You actually see a lot of the movement going towards the GDPR, because it seems to be the best example outside of the Americas of getting an effective hold on how this is managed. So I think that’s a separate panel in itself, because you can dive into the causes, the different models, and how we get there. But I do think, as a closing, that for the continent’s sake, for the private sector’s sake and for the citizens’ sake, this is a really critical conversation that needs to happen as soon as possible.


Moderator: Thanks very much. We’re out of time, but I don’t think I need to make closing remarks either. So thanks for this fantastic panel, and thanks to the team that organized it. And to everyone: I work in data governance in many parts of the world, and I think Africa is the place to follow. Join us. Invest in our journey. Partner in our journey. Thank you so much for coming to the session. And a quick thanks to the online participants: I know you didn’t speak, but we saw you. You were visible; you were in the room with us. Thank you. Thank you.



Salima Monorma Bah

Speech speed

162 words per minute

Speech length

694 words

Speech time

255 seconds

Strategic prioritization with political buy-in from the highest levels of government is essential for effective implementation

Explanation

The Minister argues that Sierra Leone’s success in digital transformation over the past 6-7 years resulted from strategic prioritization at the highest government levels. She emphasizes that political buy-in and interest from the President, including appointing a Chief Innovation Officer and establishing dedicated technology agencies, was crucial for moving from policy to practice.


Evidence

Sierra Leone’s President appointed a Chief Innovation Officer, established a dedicated agency focusing on technology implementation, and included technology in the ‘big five’ priorities


Major discussion point

Data Governance Implementation and Policy-to-Practice Transition


Topics

Development | Legal and regulatory


Agreed with

– Participant 1

Agreed on

Political will and strategic prioritization are essential for effective data governance implementation


Moving from policy to practice requires understanding both the economic value of data and data sovereignty issues

Explanation

The Minister explains that data has evolved from being an afterthought for African governments to becoming central to digital transformation agendas. She emphasizes that governments now understand both the critical nature of data sovereignty and the huge economic value that data represents, wanting to be part of conversations about how this value is leveraged.


Evidence

Data went from being an afterthought to a central piece of African governments’ digital transformation agendas


Major discussion point

Data Governance Implementation and Policy-to-Practice Transition


Topics

Economic | Legal and regulatory


Disagreed with

– Bitange Ndemo

Disagreed on

Approach to Data Regulation Timing and Implementation


Smaller countries face challenges in enforcing regulations against large tech companies, leading to more draconian approaches

Explanation

The Minister acknowledges that when smaller economies deal with big tech companies, there’s often a gap in achieving desired outcomes through normal means. This power imbalance sometimes forces smaller countries to adopt more stringent or draconian regulatory approaches as the only way to bring large tech companies to the negotiating table.


Evidence

Countries of smaller economic size face gaps when going up against big tech, leading them to adopt the most draconian approaches they can


Major discussion point

Enforcement Challenges and Mechanisms


Topics

Legal and regulatory | Economic


Disagreed with

– Kojo Boakye

Disagreed on

Enforcement Approach and Revenue Generation



Bitange Ndemo

Speech speed

122 words per minute

Speech length

153 words

Speech time

74 seconds

Digital literacy and capacity building are fundamental prerequisites for maximizing technology use and better data management

Explanation

Ambassador Ndemo argues that digital literacy, along with other foundational elements, is essential for societies to effectively utilize technology and manage data properly. He suggests that without these basic capabilities, countries cannot move forward in their digital transformation journey.


Major discussion point

Data Governance Implementation and Policy-to-Practice Transition


Topics

Development | Sociocultural


The current African Union data protection approach should be reconsidered as it’s not being effectively implemented

Explanation

Ambassador Ndemo argues for abandoning the current African Union data protection law approach, stating that it cannot be implemented and is merely something that gets waved around without practical effect. He suggests that data regulations were put in place before understanding what data could do for people, and advocates for showing what data can accomplish before implementing regulations.


Evidence

The AU data protection framework is not being implemented by anyone and remains just symbolic


Major discussion point

Regional Harmonization and Frameworks


Topics

Legal and regulatory | Development


Disagreed with

– Salima Monorma Bah

Disagreed on

Approach to Data Regulation Timing and Implementation



Wakabi Wairagala

Speech speed

179 words per minute

Speech length

614 words

Speech time

205 seconds

Rights-based data governance must uphold privacy, access to information, and protection from surveillance while ensuring laws align with constitutional guarantees

Explanation

Wakabi argues that effective rights-based data governance requires protecting fundamental rights like privacy and access to information while preventing surveillance overreach. He emphasizes that these protections are essential for citizens to participate in public services and affairs, and that laws must be aligned with high-level constitutional instruments.


Evidence

Rights protection is necessary for citizens’ appetite to partake in public services and participate in public affairs


Major discussion point

Rights-Based Data Governance and Protection


Topics

Human rights | Legal and regulatory


Independent arbiters and remedial mechanisms are necessary to mediate, issue guidance, and enforce penalties for data rights violations

Explanation

Wakabi emphasizes the importance of having independent institutions that can serve as arbiters in data governance disputes. These institutions should have the authority to mediate conflicts, provide guidance on data rights issues, and impose penalties when violations occur, which is essential for ensuring that data rights are respected.


Evidence

The Minister from Sierra Leone spoke about these mechanisms in her presentation


Major discussion point

Rights-Based Data Governance and Protection


Topics

Human rights | Legal and regulatory


Agreed with

– Participant 1
– Audience

Agreed on

Enforcement mechanisms remain weak and need strengthening across African data governance systems


Data rights of minorities and marginalized groups must be specifically protected and upheld

Explanation

Wakabi argues for explicit measures to ensure that data collection and use do not undermine the rights of marginalized groups. He emphasizes that data governance frameworks must include specific protections for vulnerable populations to prevent discrimination and ensure equitable treatment in data-driven systems.


Evidence

Data should not be collected or used in ways that undermine the rights of marginalized groups


Major discussion point

Rights-Based Data Governance and Protection


Topics

Human rights | Development


Meaningful spaces for citizen participation and awareness-building about data rights are essential components

Explanation

Wakabi stresses the need for creating genuine opportunities for citizens to express their views on data governance and have those views incorporated into practice. He also emphasizes the importance of educating people about their data rights and how to protect themselves in digital environments, as these are relatively new concepts that many people don’t understand.


Evidence

Privacy and data rights are fairly new rights that not many people know about or understand how to express


Major discussion point

Rights-Based Data Governance and Protection


Topics

Human rights | Sociocultural


Agreed with

– Participant 1

Agreed on

User-centered approaches and meaningful participation are crucial for effective data governance


Working at regional economic community levels can facilitate harmonization and domestication of AU frameworks

Explanation

Wakabi explains that his organization is working with GIZ to help the East African Community develop harmonized data governance laws. He suggests that working through existing regional economic communities, despite different maturity levels and laws, can be an effective way for countries to domesticate the African Union data policy framework and achieve harmonization.


Evidence

Working with GIZ to help East African Community develop harmonized data governance law through trainings


Major discussion point

Regional Harmonization and Frameworks


Topics

Legal and regulatory | Development



Kojo Boakye

Speech speed

176 words per minute

Speech length

1517 words

Speech time

515 seconds

Africa shows significant development with 13 national data protection laws, but lacks harmonization between national policies

Explanation

Kojo acknowledges the progress Africa has made in data governance with 13 national laws, including well-known examples like South Africa’s POPIA, Nigeria’s Data Protection Act, and Kenya’s Data Protection Act. However, he points out that while there are similarities between these laws, there’s insufficient harmonization, which creates challenges for companies seeking to operate across African markets.


Evidence

Examples include POPIA in South Africa, Nigeria’s Data Protection Act, and Kenya’s Data Protection Act


Major discussion point

Business Perspective on African Data Governance


Topics

Legal and regulatory | Economic


The African Union data policy framework offers opportunities for harmonization while allowing countries to address national priorities

Explanation

Kojo argues that the AU data policy framework provides a significant opportunity for governments to create legislation that serves their national priorities while also harmonizing frameworks across the continent. He sees this as essential for organizations to have certainty and predictability about data flows across Africa’s 1.3 billion person market.


Evidence

The framework can help organizations work across a one billion person opportunity on the continent that is growing and getting younger


Major discussion point

Business Perspective on African Data Governance


Topics

Legal and regulatory | Economic


Agreed with

– Participant 1
– Moderator

Agreed on

The African Union Data Policy Framework represents a world-class approach that should be leveraged for harmonization


Disagreed with

– Bitange Ndemo
– Participant 1

Disagreed on

Effectiveness and Implementation of African Union Data Policy Framework


Institutional complexity creates challenges when multiple organizations claim jurisdiction over data governance

Explanation

Kojo describes the complexity that arises when multiple institutions within a country believe they have jurisdiction over data governance. He explains that companies often find themselves speaking to newly created organization heads, established regulators, and digital development organizations simultaneously, all of whom understandably believe they have authority over this important area.


Evidence

Companies may speak to heads of newly created organizations, current DGs of established regulators, and digital development organizations all claiming jurisdiction


Major discussion point

Business Perspective on African Data Governance


Topics

Legal and regulatory | Development


Communication about AI benefits and risks needs thoughtful approach to build public understanding and trust

Explanation

Kojo emphasizes the importance of how AI is communicated to the public, questioning whether the messaging focuses on opportunities for citizens to benefit and improve their lives, or whether it emphasizes fears about big companies and governments with nefarious purposes. He argues that the communication about both AI’s promise and risks needs to be carefully crafted and delivered to those who would benefit most.


Major discussion point

Business Perspective on African Data Governance


Topics

Sociocultural | Development


Enforcement should focus on improving data governance for users rather than revenue generation

Explanation

Kojo argues that enforcement of data governance regulations should aim primarily to improve data governance on behalf of users, which should always be the main reason for enforcement. He warns against enforcement approaches motivated by revenue generation, noting that while some governments avoid this, others do treat enforcement as a source of revenue.


Evidence

Some governments think about enforcement as revenue generation, which should be avoided


Major discussion point

Enforcement Challenges and Mechanisms


Topics

Legal and regulatory | Economic


Disagreed with

– Salima Monorma Bah

Disagreed on

Enforcement Approach and Revenue Generation


P

Participant 1

Speech speed

167 words per minute

Speech length

1176 words

Speech time

421 seconds

Evidence-based policy making using data as a tool is crucial for effective governance frameworks

Explanation

Participant 1 emphasizes Research ICT Africa’s work in using data and evidence-based policy as tools for effective governance. They work at multiple levels – national, institutional, regional, and with the G20 – to integrate data governance into broader concepts like Digital Public Infrastructure (DPI) and AI transparency and accountability.


Evidence

Research ICT Africa works at national, institutional, regional levels and as knowledge partner to G20


Major discussion point

Data Governance Implementation and Policy-to-Practice Transition


Topics

Legal and regulatory | Development


Agreed with

– Salima Monorma Bah

Agreed on

Political will and strategic prioritization are essential for effective data governance implementation


Current data governance frameworks insufficiently emphasize the user perspective and their demands and inequalities

Explanation

Participant 1 challenges the current approach to data governance, arguing that whether discussing DPI, AI, or data governance frameworks, insufficient emphasis is placed on the user perspective. They stress the need to consider users’ demands, inequalities, and baseline conditions when developing these systems.


Evidence

Challenge posed to the panel and audience about whether enough emphasis is being placed on the user perspective


Major discussion point

User-Centered Approach and Inclusion


Topics

Human rights | Development


Agreed with

– Wakabi Wairagala

Agreed on

User-centered approaches and meaningful participation are crucial for effective data governance


Building sustainable mechanisms for data inclusion and user capacity to participate and draw value is crucial

Explanation

Participant 1 describes this as the most fascinating part of their current work – developing sustainable mechanisms that enable data inclusion and build users’ capacity to participate in and derive value from data governance systems. They emphasize this as essential for meaningful participation in the data economy.


Evidence

This is described as the most fascinating and exciting part of their current work


Major discussion point

User-Centered Approach and Inclusion


Topics

Development | Human rights


Data literacy and digital skills are necessary for meaningful participation in data economy opportunities

Explanation

Participant 1 argues that meaningful inclusion in data governance benefits requires addressing structural inequalities and ensuring people have the data literacy and digital skills needed to participate in the opportunities being discussed. Without these foundational capabilities, the benefits of data governance frameworks cannot be realized by those who need them most.


Evidence

Structural inequalities remain a challenge that must be addressed for meaningful inclusion


Major discussion point

User-Centered Approach and Inclusion


Topics

Development | Sociocultural


The African Union Data Policy Framework represents a world-class approach being consulted globally for its novel equity principles

Explanation

Participant 1 emphasizes that the AU data policy framework is not just an African instrument but a world-class framework that is being consulted globally. They highlight its novel approach to data governance and its introduction of new principles for bringing more equity and agency to everyone in the benefits of the data economy.


Evidence

The framework is being consulted on as a novel approach to data governance across the world


Major discussion point

Regional Harmonization and Frameworks


Topics

Legal and regulatory | Development


Agreed with

– Kojo Boakye
– Moderator

Agreed on

The African Union Data Policy Framework represents a world-class approach that should be leveraged for harmonization


Disagreed with

– Bitange Ndemo
– Kojo Boakye

Disagreed on

Effectiveness and Implementation of African Union Data Policy Framework


Regulatory cooperation between different types of regulators can help close gaps in enforcement mechanisms

Explanation

Participant 1 suggests that strengthening complaint mechanisms at regulators and fostering cooperation between different types of regulators (such as competition regulators working with information regulators) can help address enforcement challenges. This approach helps close loops on different gaps and challenges in the regulatory system.


Evidence

They have gained traction with regulatory cooperation approaches


Major discussion point

Enforcement Challenges and Mechanisms


Topics

Legal and regulatory | Economic


Agreed with

– Wakabi Wairagala
– Audience

Agreed on

Enforcement mechanisms remain weak and need strengthening across African data governance systems


A

Audience

Speech speed

125 words per minute

Speech length

362 words

Speech time

173 seconds

Enforcement capacity remains notoriously weak despite existing data protection laws modeled after GDPR

Explanation

An audience member from Ghana points out that while many African nations have established data protection laws largely modeled after GDPR, the enforcement capacity has been consistently weak. They seek strategies and mechanisms that can make enforcement more effective in practice.


Evidence

Many African nations have data protection laws modeled after GDPR


Major discussion point

Enforcement Challenges and Mechanisms


Topics

Legal and regulatory | Development


Agreed with

– Wakabi Wairagala
– Participant 1

Agreed on

Enforcement mechanisms remain weak and need strengthening across African data governance systems


Harmonization across 54-55 African countries is challenging but necessary for continental data governance

Explanation

An audience member acknowledges the African Union Commission’s work with GIZ to implement the AU Data Policy Framework across African countries, noting that harmonization across 54-55 countries is not an easy task. They call for big tech organizations to participate more actively in country-level policy development processes rather than just platform discussions.


Evidence

African Union Commission ISD division is working with GIZ to support implementation across African countries


Major discussion point

Regional Harmonization and Frameworks


Topics

Legal and regulatory | Development


M

Moderator

Speech speed

133 words per minute

Speech length

1841 words

Speech time

827 seconds

Data governance is a critical cross-cutting foundation for Africa’s digital transformation alongside addressing digital inequality

Explanation

The moderator argues that data governance represents one of two critical foundations for digital transformation in Africa, emphasizing that it must be effective, accountable, inclusive, and development-oriented. She stresses that digital inequality remains a persistent condition that undermines the implementation of innovative digital transformation strategies across the continent.


Evidence

Examples include rollout of digital IDs, national data exchanges, AI strategies, and cross-border trade initiatives like the African continental free trade area implementation


Major discussion point

Data Governance Implementation and Policy-to-Practice Transition


Topics

Development | Legal and regulatory


Data governance requires trust across all stakeholder groups to enable effective digital transformation

Explanation

The moderator emphasizes that data governance is not just about regulation but about systems, coordination, and institutions that ensure data is managed securely, transparently, and with trust. She argues that this trust must exist across different stakeholder groups including private sector, privacy advocates, trade justice advocates, governments, and the technical community.


Evidence

Trust is needed across private sector operators, privacy advocates, trade justice advocates, governments, and the technical community


Major discussion point

Multi-stakeholder Collaboration and Trust


Topics

Legal and regulatory | Development


African data governance instruments represent world-class, forward-looking frameworks that balance protection with utilization

Explanation

The moderator highlights two key African instruments: the African Union Data Policy Framework and the African Commission on Human and People’s Rights Resolution 620. She argues these are powerful, forward-looking instruments developed by African bodies that effectively balance data protection with making data available for economic, social, and public interest benefits.


Evidence

The AU Data Policy Framework balances protection with use and sharing of data for benefits, and Resolution 620 addresses access to data for realizing right to information


Major discussion point

Regional Harmonization and Frameworks


Topics

Legal and regulatory | Human rights


Agreed with

– Kojo Boakye
– Participant 1

Agreed on

The African Union Data Policy Framework represents a world-class approach that should be leveraged for harmonization


Africa is leading in data governance innovation and should be followed and supported by global partners

Explanation

The moderator concludes by asserting that Africa is the place to follow in data governance work globally. She calls on international partners to join, invest in, and support Africa's data governance journey, positioning the continent as a leader rather than a follower in this space.


Evidence

Personal experience working in data governance in many parts of the world


Major discussion point

Africa’s Leadership in Data Governance


Topics

Development | Legal and regulatory


Agreements

Agreement points

Political will and strategic prioritization are essential for effective data governance implementation

Speakers

– Salima Monorma Bah
– Participant 1

Arguments

Strategic prioritization with political buy-in from the highest levels of government is essential for effective implementation


Evidence-based policy making using data as a tool is crucial for effective governance frameworks


Summary

Both speakers emphasize that successful data governance requires strong political commitment and strategic approach at the highest levels of government, combined with evidence-based policy making


Topics

Development | Legal and regulatory


The African Union Data Policy Framework represents a world-class approach that should be leveraged for harmonization

Speakers

– Kojo Boakye
– Participant 1
– Moderator

Arguments

The African Union data policy framework offers opportunities for harmonization while allowing countries to address national priorities


The African Union Data Policy Framework represents a world-class approach being consulted globally for its novel equity principles


African data governance instruments represent world-class, forward-looking frameworks that balance protection with utilization


Summary

All three speakers recognize the AU Data Policy Framework as an exceptional instrument that balances protection with utilization and offers opportunities for continental harmonization while being consulted globally


Topics

Legal and regulatory | Development


Enforcement mechanisms remain weak and need strengthening across African data governance systems

Speakers

– Wakabi Wairagala
– Participant 1
– Audience

Arguments

Independent arbiters and remedial mechanisms are necessary to mediate, issue guidance, and enforce penalties for data rights violations


Regulatory cooperation between different types of regulators can help close gaps in enforcement mechanisms


Enforcement capacity remains notoriously weak despite existing data protection laws modeled after GDPR


Summary

There is consensus that current enforcement mechanisms are inadequate and require strengthening through independent arbiters, regulatory cooperation, and improved capacity


Topics

Legal and regulatory | Development


User-centered approaches and meaningful participation are crucial for effective data governance

Speakers

– Wakabi Wairagala
– Participant 1

Arguments

Meaningful spaces for citizen participation and awareness-building about data rights are essential components


Current data governance frameworks insufficiently emphasize the user perspective and their demands and inequalities


Summary

Both speakers emphasize the need for genuine citizen participation and user-centered approaches in data governance, noting current frameworks don’t adequately address user perspectives and needs


Topics

Human rights | Development


Similar viewpoints

Both acknowledge the power imbalance between smaller countries and big tech companies, with the Minister explaining why countries resort to draconian measures and Kojo advocating for user-focused rather than revenue-focused enforcement

Speakers

– Salima Monorma Bah
– Kojo Boakye

Arguments

Smaller countries face challenges in enforcing regulations against large tech companies, leading to more draconian approaches


Enforcement should focus on improving data governance for users rather than revenue generation


Topics

Legal and regulatory | Economic


Both emphasize the importance of protecting vulnerable populations and ensuring equitable access to data governance benefits through specific protections and capacity building

Speakers

– Wakabi Wairagala
– Participant 1

Arguments

Data rights of minorities and marginalized groups must be specifically protected and upheld


Data literacy and digital skills are necessary for meaningful participation in data economy opportunities


Topics

Human rights | Development


Both stress that digital literacy and capacity building are foundational requirements for people to effectively participate in and benefit from data governance systems

Speakers

– Bitange Ndemo
– Participant 1

Arguments

Digital literacy and capacity building are fundamental prerequisites for maximizing technology use and better data management


Data literacy and digital skills are necessary for meaningful participation in data economy opportunities


Topics

Development | Sociocultural


Unexpected consensus

Criticism of current African Union data protection implementation approach

Speakers

– Bitange Ndemo
– Audience

Arguments

The current African Union data protection approach should be reconsidered as it’s not being effectively implemented


Harmonization across 54-55 African countries is challenging but necessary for continental data governance


Explanation

Despite general praise for the AU framework from other speakers, there’s unexpected consensus that the current implementation approach faces significant challenges, with Ambassador Ndemo calling for abandoning the current approach and audience members acknowledging the difficulty of harmonization across 54-55 countries


Topics

Legal and regulatory | Development


Big tech companies should be more actively involved in country-level policy processes

Speakers

– Kojo Boakye
– Audience

Arguments

Institutional complexity creates challenges when multiple organizations claim jurisdiction over data governance


Harmonization across 54-55 African countries is challenging but necessary for continental data governance


Explanation

Unexpectedly, both the Meta representative and audience members agree that big tech companies need to be more engaged in country-level policy development processes, with Kojo acknowledging the complexity companies face and audience calling for more active participation beyond platform discussions


Topics

Legal and regulatory | Development


Overall assessment

Summary

The speakers demonstrate strong consensus on several key areas: the importance of political will and strategic prioritization, the value of the AU Data Policy Framework as a world-class instrument, the need to strengthen enforcement mechanisms, and the importance of user-centered approaches. There’s also agreement on the challenges faced by smaller countries in dealing with big tech companies and the need for capacity building and digital literacy.


Consensus level

High level of consensus with constructive disagreement on implementation approaches. The consensus suggests a mature understanding of data governance challenges in Africa and points toward collaborative solutions that balance protection with innovation, emphasize user rights, and leverage African-developed frameworks. The disagreements are primarily about implementation strategies rather than fundamental principles, indicating a solid foundation for moving forward with continental data governance initiatives.


Differences

Different viewpoints

Effectiveness and Implementation of African Union Data Policy Framework

Speakers

– Bitange Ndemo
– Kojo Boakye
– Participant 1

Arguments

The current African Union data protection approach should be reconsidered as it’s not being effectively implemented


The African Union data policy framework offers opportunities for harmonization while allowing countries to address national priorities


The African Union Data Policy Framework represents a world-class approach being consulted globally for its novel equity principles


Summary

Bitange Ndemo argues for abandoning the AU data protection framework as it cannot be implemented and is merely symbolic, while Kojo Boakye sees it as offering harmonization opportunities, and Participant 1 views it as a world-class framework being consulted globally for its innovative approach to equity.


Topics

Legal and regulatory | Development


Approach to Data Regulation Timing and Implementation

Speakers

– Bitange Ndemo
– Salima Monorma Bah

Arguments

The current African Union data protection approach should be reconsidered as it’s not being effectively implemented


Moving from policy to practice requires understanding both the economic value of data and data sovereignty issues


Summary

Bitange argues that data regulations were put in place before understanding what data could do for people and advocates showing data benefits before implementing regulations, while Minister Bah emphasizes the importance of understanding both economic value and sovereignty issues as foundations for effective policy implementation.


Topics

Legal and regulatory | Development | Economic


Enforcement Approach and Revenue Generation

Speakers

– Kojo Boakye
– Salima Monorma Bah

Arguments

Enforcement should focus on improving data governance for users rather than revenue generation


Smaller countries face challenges in enforcing regulations against large tech companies, leading to more draconian approaches


Summary

Kojo warns against enforcement motivated by revenue generation and advocates for user-focused enforcement, while Minister Bah explains that smaller countries sometimes resort to draconian approaches because of power imbalances with big tech companies, suggesting enforcement challenges are more complex than revenue motivation alone.


Topics

Legal and regulatory | Economic


Unexpected differences

Fundamental Value of African Union Data Policy Framework

Speakers

– Bitange Ndemo
– Participant 1
– Moderator

Arguments

The current African Union data protection approach should be reconsidered as it’s not being effectively implemented


The African Union Data Policy Framework represents a world-class approach being consulted globally for its novel equity principles


African data governance instruments represent world-class, forward-looking frameworks that balance protection with utilization


Explanation

This disagreement is unexpected because Bitange Ndemo, as a former Kenyan government official and current Ambassador, takes a surprisingly critical stance against a major African continental framework, while other panelists and the moderator celebrate it as innovative and world-class. This suggests significant divergence in how African policy leaders view continental integration efforts in data governance.


Topics

Legal and regulatory | Development


Overall assessment

Summary

The main areas of disagreement center on the effectiveness and implementation approach of continental frameworks, the timing and sequencing of data regulation versus demonstrating data benefits, and the motivations behind enforcement mechanisms. There are also partial agreements on harmonization pathways and citizen participation approaches.


Disagreement level

Moderate disagreement with significant implications. The disagreements reveal fundamental tensions between continental integration approaches versus national/regional approaches, between regulation-first versus benefits-first strategies, and between different perspectives on enforcement challenges. These disagreements could impact the coherence and effectiveness of Africa’s data governance development, particularly regarding whether to strengthen existing continental frameworks or pursue alternative approaches.


Partial agreements

Similar viewpoints

Both acknowledge the power imbalance between smaller countries and big tech companies, with the Minister explaining why countries resort to draconian measures and Kojo advocating for user-focused rather than revenue-focused enforcement

Speakers

– Salima Monorma Bah
– Kojo Boakye

Arguments

Smaller countries face challenges in enforcing regulations against large tech companies, leading to more draconian approaches


Enforcement should focus on improving data governance for users rather than revenue generation


Topics

Legal and regulatory | Economic


Both emphasize the importance of protecting vulnerable populations and ensuring equitable access to data governance benefits through specific protections and capacity building

Speakers

– Wakabi Wairagala
– Participant 1

Arguments

Data rights of minorities and marginalized groups must be specifically protected and upheld


Data literacy and digital skills are necessary for meaningful participation in data economy opportunities


Topics

Human rights | Development


Both stress that digital literacy and capacity building are foundational requirements for people to effectively participate in and benefit from data governance systems

Speakers

– Bitange Ndemo
– Participant 1

Arguments

Digital literacy and capacity building are fundamental prerequisites for maximizing technology use and better data management


Data literacy and digital skills are necessary for meaningful participation in data economy opportunities


Topics

Development | Sociocultural


Takeaways

Key takeaways

Strategic political buy-in from the highest government levels is essential for successful data governance implementation, moving beyond policy to practice


Africa has developed 13 national data protection laws but lacks harmonization between countries, creating challenges for cross-border operations


The African Union Data Policy Framework is recognized as a world-class instrument that balances data protection with economic use and is being consulted globally


Current data governance frameworks insufficiently emphasize user perspectives and needs, requiring more focus on data inclusion and citizen participation


Enforcement mechanisms remain weak across African countries despite existing legal frameworks, with smaller nations struggling against large tech companies


Rights-based data governance must protect privacy and access to information while ensuring special protection for marginalized groups


Digital literacy and capacity building are fundamental prerequisites for effective data governance and citizen participation


Regional economic communities can serve as effective platforms for harmonizing data governance approaches across member states


Resolutions and action items

Call for big tech organizations to actively participate in country-level policy development processes rather than only engaging in international forums


Invitation to join the African Alliance for Access to Data and contribute to developing guidelines following African Commission Resolution 620


Recommendation for regulatory cooperation between different types of regulators (competition, information, consumer protection) to close enforcement gaps


Suggestion for African countries to learn from other regions’ experiences, including studying reports like the Mario Draghi report on regulatory approaches


Proposal for a separate dedicated panel discussion specifically focused on enforcement mechanisms and strategies


Unresolved issues

How to effectively enforce data protection laws given the current weak enforcement capacity across African countries


Whether the African Union data protection framework should be abandoned in favor of country-specific approaches or strengthened for better implementation


How to balance revenue generation motives with user protection objectives in data governance enforcement


How to achieve meaningful harmonization across 54-55 African countries with different maturity levels and legal frameworks


How to ensure big tech companies engage constructively in policy development without resorting to ‘draconian’ regulatory approaches


How to build sustainable mechanisms for data inclusion that address structural inequalities and ensure meaningful citizen participation


Suggested compromises

Balancing data protection with innovation needs by applying data protection principles while allowing beneficial data use for public interest


Using regional economic communities as intermediate platforms for harmonization rather than attempting continent-wide harmonization immediately


Focusing enforcement efforts on improving data governance for users rather than primarily on revenue generation


Developing bespoke African solutions that learn from global experiences while addressing specific continental needs and contexts


Creating meaningful spaces for multi-stakeholder engagement that includes government, private sector, and civil society perspectives


Thought provoking comments

I think it went from something that was an afterthought, I think, for a lot of African governments to now, I think, being just a central piece to what we’re trying to do… But then also, I think the other reality is the economic value when it comes to data as well.

Speaker

Salima Monorma Bah


Reason

This comment reframes data governance from a compliance burden to a strategic economic opportunity, highlighting the evolution in African governments’ thinking about data as an asset rather than just a regulatory concern.


Impact

This set the tone for the entire discussion by establishing that African governments now view data governance as central to economic development, moving the conversation beyond just privacy protection to economic empowerment and sovereignty.


What I’m probably seeing less of is harmonisation between those national policies… some level of harmonisation is needed… the AU data policy framework offers a huge opportunity not only for governments to continue creating legislation that speaks to their national priorities… but also creates opportunities to harmonise those frameworks

Speaker

Kojo Boakye


Reason

This identifies a critical gap between having individual national laws and creating a coherent continental framework, highlighting the tension between national sovereignty and regional integration in data governance.


Impact

This comment shifted the discussion from national-level implementation challenges to continental-level coordination, introducing the complexity of balancing local needs with regional harmonization and setting up later discussions about the AU Data Policy Framework.


Are we telling them that there’s a huge opportunity that they can feed into… Or are we suggesting that big companies like Meta, like Google, like Microsoft, or governments with nefarious purpose… are going to use AI to… whatever the nefarious purposes may be? And I do think the communication about the promise of AI, the benefits, as well as the risks, need to be really, really thoughtfully approached

Speaker

Kojo Boakye


Reason

This comment challenges the panel to consider how data governance narratives are communicated to citizens, questioning whether current approaches emphasize fear over opportunity and highlighting the importance of balanced messaging.


Impact

This introduced a meta-level discussion about communication strategies and public perception, adding a dimension about how data governance policies are presented to and understood by citizens, which influenced later discussions about data literacy and inclusion.


In all the work that we are doing, are we really putting emphasis on the user? And whether we’re speaking about DPI, whether we’re speaking about AI, whether we’re speaking about data governance that informs those, are we putting the emphasis on the user?… And that, I think, is the most fascinating part of our work at the moment, is how do we build sustainable mechanisms for that data inclusion?

Speaker

Participant 1 (Priya)


Reason

This comment fundamentally challenges the panel to recenter the discussion on end users rather than institutional frameworks, questioning whether current approaches adequately consider citizen needs and capabilities.


Impact

This was a pivotal moment that shifted the conversation from institutional and regulatory perspectives to user-centered design, prompting the moderator to directly ask if this emphasis is lacking and influencing subsequent discussions about enforcement and implementation.


What I want to see is to abandon the African Union data protection law and each country… Once we begin to show what data can do, then we can put data regulations in place. Now we have put data regulation before we even know what data will do to the people.

Speaker

Bitange Ndemo


Reason

This is a provocative challenge to the entire premise of current data governance approaches, arguing for a complete reversal of the typical regulatory sequence – suggesting demonstration of value should precede regulation rather than follow it.


Impact

This comment created a significant tension in the discussion, directly challenging the work that other panelists had been describing and advocating for a fundamentally different approach. It forced other participants to defend their positions and highlighted the ongoing debate about the optimal timing and sequencing of data governance frameworks.


I quickly did a search on Mario Draghi’s report… Europe has fallen behind its other regions… over-regulation… I would encourage our continent and others to learn from that… lessons could be learned

Speaker

Kojo Boakye


Reason

This comment introduces a controversial comparative analysis, using Europe’s experience as a cautionary tale about over-regulation stifling innovation, which directly challenges approaches that prioritize strong regulatory frameworks.


Impact

This created immediate pushback from the moderator and added tension to the discussion by suggesting that strong data protection frameworks might hinder rather than help African development, forcing the panel to grapple with the balance between protection and innovation.


Overall assessment

These key comments fundamentally shaped the discussion by introducing several productive tensions: between national sovereignty and continental harmonization, between regulatory protection and economic opportunity, between institutional frameworks and user-centered approaches, and between precautionary regulation and innovation-first policies. The most impactful comments pushed participants beyond their initial positions to consider alternative perspectives, particularly on the sequencing of regulation versus demonstration of value, the importance of user-centered design, and how the benefits and risks of data governance are communicated. The discussion evolved from a relatively consensus-oriented conversation about implementation challenges into a more nuanced debate about fundamental approaches to data governance, with the provocative comments from Ndemo and Boakye forcing other participants to articulate and defend their positions more clearly. The result was a richer, more complex dialogue that better reflected the real tensions and choices facing African policymakers in data governance.


Follow-up questions

How can enforcement mechanisms for data protection laws be strengthened, particularly given the notoriously weak enforcement capacity in many African nations despite having GDPR-modeled laws?

Speaker

Osei Keja from Ghana


Explanation

This addresses a critical gap between policy development and implementation, where laws exist but lack effective enforcement mechanisms


How can Africa learn from the successes and mistakes of other nations that are currently improving their data governance, in order to make successful changes within Africa?

Speaker

Amy from Nigeria


Explanation

This seeks to identify best practices and lessons learned from global experiences that could be adapted for African contexts


How can big tech organizations be better engaged in country-level policy development processes rather than just participating in high-level platforms?

Speaker

Audience member from African Union Commission


Explanation

This highlights the need for more direct involvement of technology companies in national policy development to support harmonization efforts


What does meaningful rights-based data governance look like in practice, particularly as data governance evolves beyond just privacy protection to encompass different types of data?

Speaker

Moderator Anriette Esterhuysen


Explanation

This explores the practical implementation of rights-based approaches in an increasingly complex data governance landscape


How can data literacy and awareness of data rights be improved among citizens who may not understand their data rights or how to exercise them?

Speaker

Wakabi Wairagala


Explanation

This addresses the fundamental challenge of ensuring citizens can meaningfully participate in data governance frameworks


How can sustainable mechanisms for data inclusion be built to ensure users can participate in and draw value from the data economy?

Speaker

Pria Chetty


Explanation

This focuses on creating lasting systems that enable broader participation in data benefits rather than just protection


Should the focus shift from implementing data protection laws to first demonstrating what data can do for people before putting regulations in place?

Speaker

Ambassador Bitange Ndemo


Explanation

This challenges the current approach of regulation-first and suggests a benefits-first approach to build understanding and support


How can enforcement be designed to focus on improving data governance for users rather than revenue generation for governments?

Speaker

Kojo Boakye


Explanation

This addresses concerns about enforcement mechanisms being driven by financial rather than protective motives


How can smaller economies effectively engage with big tech companies without resorting to overly restrictive measures?

Speaker

Minister Salima Monorma Bah


Explanation

This explores the power imbalance between smaller African nations and large technology companies in policy negotiations


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.