Networking Session #74: Mapping and Addressing Digital Rights Capacities and Threats

Session at a glance

Summary

This networking session at the Internet Governance Forum focused on mapping digital rights capacities and threats in global majority communities, presented by Oxfam’s Recipe Project in partnership with civil society organizations from multiple countries. The discussion aimed to identify current challenges and foster stronger partnerships to ensure inclusive digital governance that leaves no one behind.


Representatives from Vietnam, Bolivia, Cambodia, Somalia, and Palestine shared findings from comprehensive research involving over 1,000 respondents across nine countries. Common themes emerged across all regions, including significant digital literacy gaps, inadequate legal frameworks for digital rights protection, and widespread experiences of digital violence. In Vietnam, despite rapid digitalization, none of the 63 provincial service portals meet accessibility standards, and 44% of people cannot distinguish between accurate and fake information online. Bolivia reported that 77% of respondents experienced digital security incidents, with 62% facing digital violence, particularly affecting human rights defenders and women activists.


Cambodia highlighted that fewer than 30% of citizens possess adequate digital navigation skills, while lacking comprehensive cybersecurity and data protection laws. Somalia found that while 98% have internet access, only 28% understand risks of sharing personal information, and 42% experienced digital violence. Palestine presented the most severe situation, with systematic surveillance, censorship, and discriminatory infrastructure limiting Palestinian access to advanced networks while Israeli settlers enjoy full connectivity.


The panelists emphasized the importance of bottom-up approaches, multi-stakeholder dialogue, and capacity building for marginalized communities. An audience member from Zambia reinforced the need for balanced approaches to cybersecurity that protect both safety and freedom of expression. The session concluded with calls for continued collaboration between technical communities and civil society to address misinformation while preserving digital rights.


Key points

## Major Discussion Points:


– **Digital divide and access inequalities**: Multiple speakers highlighted significant gaps in digital access, with rural areas particularly affected (70% in Bolivia, limited infrastructure in Cambodia and Somalia), and the need for meaningful connectivity rather than just basic access.


– **Digital literacy and capacity building challenges**: All countries reported low digital literacy rates (under 30% in Cambodia, widespread gaps in Palestine), emphasizing the need for comprehensive training programs and bottom-up approaches to build digital skills in marginalized communities.


– **Digital violence and security threats**: Speakers documented high rates of digital violence, particularly gender-based (77% in Bolivia experienced digital security incidents, 42% in Somalia faced digital violence), including harassment, hate speech, and threats targeting human rights defenders and activists.


– **Inadequate legal frameworks and governance gaps**: Most countries lack comprehensive digital rights legislation, with laws still in draft form or inadequately protecting citizens from surveillance, data misuse, and online violations, while policy development often excludes marginalized voices.


– **Multi-stakeholder collaboration and advocacy strategies**: Organizations emphasized the importance of building coalitions, creating networks (like Bolivia’s feminist collectives), and engaging in policy dialogue to bridge grassroots communities with policymakers for more inclusive digital governance.


## Overall Purpose:


The discussion aimed to map current digital rights capacities and threats in Global Majority communities through the Recipe Project, share research findings from nine countries, and foster partnerships between civil society organizations to address digital inequalities and promote human rights in the digital age.


## Overall Tone:


The discussion maintained a professional, collaborative, and solution-oriented tone throughout. While speakers acknowledged serious challenges and violations, they balanced concern with optimism by highlighting ongoing efforts and practical solutions. The tone remained constructive and forward-looking, emphasizing collective action and shared learning rather than dwelling on problems alone.


Speakers

– **Mia Marzotto** – Digital Rights Advocacy Lead at Oxfam


– **Online moderator** – Luan Mendez, Project Coordinator at Fundacion Internet Bolivia


– **Audience** – Enes Mafuta from Zambia, Standards Engineer


– **Khadeja Ibrahim** – International Advocacy Officer at the Palestinian NGO, Miftah


– **Cristian Leon** – Executive Director of Fundacion Internet Bolivia.org


– **Tran Thi Tuyet** – Program Manager for the Institute for Policy Studies and Media Development in Vietnam (also referred to as “Snow”)


– **Mohamed Aded Ali** – Executive Director of Somalia Non-State Actors (SONSA)


– **Theary Luy** – Head of Program at the Cooperation Committee for Cambodia


**Additional speakers:**


– **Luan Mendez** – Project Coordinator of the Centro SOS Digital at Fundacion Internet Bolivia; also served as the session’s online moderator


Full session report

# Comprehensive Report: Mapping Digital Rights Capacities and Threats in Global Majority Communities


## Executive Summary


This networking session at the Internet Governance Forum presented findings from Oxfam’s Recipe Project, a comprehensive initiative examining digital rights capacities and threats across global majority communities. The discussion brought together civil society representatives from Vietnam, Bolivia, Cambodia, Somalia, and Palestine to share research findings from nine countries involving over 1,000 respondents. The session revealed concerning patterns of digital rights violations, infrastructure inequalities, and governance gaps while highlighting innovative community-driven solutions and advocacy strategies.


## Research Methodology and Scope


Mia Marzotto, Digital Rights Advocacy Lead at Oxfam, introduced the Recipe Project as a multi-country research initiative co-funded by the European Union. The project examined four critical dimensions: digital literacy levels, internet access quality, experiences of digital violence, and existing prevention measures. The research methodology emphasized bottom-up approaches, engaging directly with marginalized communities to understand their lived experiences of digital rights challenges.


The comprehensive scope of the research spanned nine countries with over 1,000 respondents, though five countries were presented in detail during this session. This approach enabled the identification of common patterns while respecting regional specificities across diverse geographical and political contexts.


## Country-Specific Findings and Challenges


### Vietnam: Policy Development Gaps


Snow from the Institute for Policy Studies and Media Development in Vietnam presented findings highlighting significant gaps in digital policy development. Despite government-led digitalization efforts, fundamental accessibility problems persist, with none of Vietnam’s 63 provincial public service portals meeting user-friendliness standards.


The research revealed that 44% of Vietnamese respondents cannot distinguish between accurate and fake information online, indicating significant digital literacy gaps. More concerning is the systematic exclusion of marginalized communities from policy development processes. As Snow explained, digital policies remain largely developed through top-down approaches led by state agencies, with consultation processes that are “often formalistic and lack meaningful engagement.”


Snow presented four fundamental lessons from Vietnam’s experience: the need for meaningful participation of marginalized communities in policy development, the importance of addressing digital literacy gaps, the requirement for user-friendly digital infrastructure, and the necessity of moving beyond formalistic consultation to genuine engagement.


### Bolivia: High Rates of Digital Violence


The Bolivian findings, presented by online moderator Luan Mendez, revealed alarming statistics. An overwhelming 77% of respondents experienced digital security incidents, while 62% faced digital violence. Most significantly, 77% of those experiencing digital violence identified a specific relationship between these attacks and their status as human rights defenders.


Bolivia faces severe connectivity challenges, with a 30% digital divide in urban areas expanding to 70% in rural regions. These infrastructure gaps compound the vulnerability of already marginalized communities to digital rights violations.


However, Bolivia also demonstrated innovative civil society responses, including the creation of feminist collective networks providing digital security support through peer-to-peer methodologies. These networks represent a community-driven approach to building digital rights capacity.


### Cambodia: Capacity Building Through Networks


Theary Luy from the Cooperation Committee for Cambodia (CCC) presented findings showing that fewer than 30% of Cambodian citizens possess adequate skills to navigate the digital world safely. This digital literacy gap particularly affects rural youth and grassroots organizations.


CCC represents 200 NGOs and collaborates with over 400 provincial NGOs, providing a broad network for capacity building. Cambodia’s legal framework remains incomplete, lacking comprehensive cybersecurity, cybercrime, and personal data protection laws despite having digital government policies.


The Cambodian response emphasizes peer-to-peer capacity building, with grassroots organizations trained to become trainers themselves. As Theary explained, their multi-stakeholder policy dialogue serves as “not a consultation, but it’s a way to build trust” and ensure genuine community voice consideration in policy development.


### Somalia: Connectivity Without Protection


Mohamed Aded Ali from Somalia Non-State Actors (SONSA) presented a paradoxical situation where 98% of the population has internet connectivity, yet significant protection gaps persist. Only 28% of users understand the risks of sharing personal information online, while 42% experience digital violence, with 37% reporting gender-based incidents.


Somalia’s regulatory landscape includes various bodies such as the Digital Rights Authority (DBA) and National Identification and Registration Authority (NERA), but has limited legislative framework, with policies remaining in draft form or under cabinet review.


The Somali response includes establishing a digital task force committee comprising decision makers, civil society organizations, and technology companies, representing a multi-stakeholder approach to addressing digital rights challenges.


### Palestine: Systematic Digital Discrimination


Khadeja Ibrahim from the Palestinian NGO Miftah presented findings documenting systematic digital rights violations including surveillance, censorship, and discriminatory infrastructure access. Palestinians use 3G networks while Israeli settlers in the West Bank have access to 4G and 5G networks, creating stark technological inequality.


Khadeja mentioned specific AI tools being used including Blue Wolf, Lavender, and Where’s Daddy, as well as the impact of Facebook law and anti-terrorism legislation. Palestinian civil society organizations face additional challenges, with 72% unaware of local digital rights legislation and 75% lacking technological resources.


Despite these constraints, Palestinian organizations participate in the Palestinian Digital Rights Initiative Coalition, maintaining coordination and knowledge sharing to build collective capacity under extremely difficult circumstances.


## Common Challenges and Themes


### Digital Literacy Gaps


All speakers identified digital literacy gaps as a fundamental barrier to digital rights protection. The consistency of this challenge across diverse contexts—from Vietnam’s 44% unable to identify misinformation to Cambodia’s under-30% digital navigation skills—demonstrates this as a universal challenge requiring coordinated responses.


### Widespread Digital Violence


The research documented high rates of digital violence across all regions, with particular targeting of human rights defenders and gender-based violence. Bolivia’s finding that 77% of digital violence victims linked attacks to their human rights defender status demonstrates how digital spaces are weaponized to silence advocacy and activism.


### Inadequate Legal Frameworks


All countries face significant gaps in legal frameworks for digital rights protection. Laws are either missing entirely, remain in draft form, or lack meaningful stakeholder participation in development. This regulatory vacuum leaves citizens without recourse when digital rights violations occur.


### Infrastructure Inequalities


The research revealed how digital infrastructure serves as a mechanism of exclusion. Palestine’s discriminatory network access and Bolivia’s 70% rural digital divide demonstrate how unequal access reinforces existing inequalities.


## Civil Society Responses and Innovation


Despite significant challenges, civil society organizations across all regions developed innovative capacity-building strategies sharing common characteristics: peer-to-peer learning, community ownership of solutions, and network-based approaches.


Bolivia’s feminist collective networks, Cambodia’s train-the-trainer models, and Vietnam’s grassroots consultation advocacy all demonstrate community-driven approaches. These methods recognize that communities possess knowledge about their own needs and can develop contextually appropriate solutions.


Multi-stakeholder collaboration emerged as important but requiring genuine participation rather than tokenistic consultation. Cambodia’s policy dialogue platform and Somalia’s digital task force committee represent attempts to create meaningful engagement across sectors.


## Areas of Consensus and Tension


Strong consensus emerged regarding fundamental challenges: digital literacy gaps, inadequate legal frameworks, high rates of digital violence, and the need for bottom-up approaches. This consensus across diverse contexts suggests these are universal challenges in digital rights implementation.


However, tensions remain around balancing security and rights. Audience member Enes Mafuta from Zambia highlighted how cybersecurity laws enacted without transparency can create public backlash and restrict legitimate activities, reflecting broader challenges in digital governance where security concerns can justify restrictions on rights.


## Key Insights and Future Directions


The research demonstrates that digital rights violations follow systematic patterns reflecting broader inequalities and governance failures. This systemic nature requires comprehensive responses addressing root causes rather than individual incidents.


The success of peer-to-peer capacity building approaches suggests need for sustained investment in these methodologies. Supporting organizations to become trainers and knowledge sharers multiplies impact while building local ownership and sustainable capacity.


The Recipe Project demonstrates the value of sustained collaboration between civil society organizations across different contexts, enabling ongoing learning and mutual support while respecting local autonomy.


## Conclusion


This networking session successfully demonstrated both the universal nature of digital rights challenges and the innovative capacity of civil society organizations to develop contextually appropriate responses. The research findings reveal concerning patterns requiring urgent attention, while also highlighting remarkable innovation in community-driven capacity building and advocacy strategies.


The strong consensus among speakers on both challenges and solutions suggests significant potential for coordinated action and mutual learning. Moving forward, the digital rights community must build on these foundations while addressing ongoing challenges around resources, legal frameworks, and coordination mechanisms.


As Mia concluded, the session emphasized the need for continued collaboration between technical communities and civil society to address challenges like misinformation while preserving digital rights. This collaborative approach, grounded in community knowledge and meaningful participation, offers a promising path toward more inclusive digital governance.


Session transcript

Mia Marzotto: Good morning, everyone. Thank you for joining us for this networking session titled Mapping and Addressing Digital Rights Capacities and Threats. My name is Mia Marzotto, and I’m the Digital Rights Advocacy Lead at Oxfam. This week at the IGF, we heard many times how the deployment of digital technologies in more areas of our daily lives and the rapid rise of artificial intelligence increases the urgency with which human rights in a digital age must be prioritized. So with this session, we want to focus on what the current landscape looks like in terms of digital rights capacities and threats in global majority communities and encourage new or stronger partnerships and connections between progressive actors in the digital governance space so that we can collectively ensure that we turn the critical commitment of leaving no one behind into reality. With Oxfam, we believe that effective digital governance should advocate for and protect the rights and interests of all people and that their needs and experiences should really determine the future course of the digital age. Our mission is to end inequality, and this includes tackling the issue of digital inequality as we see a growing, concerning gap between those who benefit from digital technologies and those who don’t. This is why in mid-2024, we launched Recipe, a multi-year project co-funded by the European Union that aims to promote fundamental rights in the digital age in partnership with civil society organizations from 10 countries, working with some of the most vulnerable people in society for whom the risks as well as the opportunities related to digital technologies are the greatest. So this session will proceed as follows.
First, we’ll hear from five representatives from the civil society organizations involved in the Recipe Project, who will present key information extracted from primary research and analysis recently conducted, providing sort of like a snapshot of the current situation of digital rights capacities and threats in a diverse set of global majority countries. And then we will move into a discussion and of course encourage reflections and questions from the audience, both in the room and online. So without further ado, let me quickly introduce my panel here of distinguished speakers, starting with Mohamed Aded Ali. He’s the Executive Director of Somalia Non-State Actors. To my right here, Tran Thi Tuyet, Program Manager for the Institute for Policy Studies and Media Development in Vietnam. Theary Luy, Head of Program at the Cooperation Committee for Cambodia. And at the end of the table, Cristian Leon, Executive Director of Fundacion Internet Bolivia.org. Joining us online, we also have Khadeja Ibrahim, who’s the International Advocacy Officer at the Palestinian NGO, Miftah. And Luan Mendez, Project Coordinator of the Centro SOS Digital at Fundacion Internet Bolivia, who will also be our online moderator. So before focusing on the countries, I just wanna make a quick few remarks on the overall methodology and demographics. So, this is a summary of the demographics of the multi-country assessment we conducted under the Recipe Project to map the digital landscape threats and opportunities. So this was an assessment that included a comprehensive set of questions related to four main topics: digital literacy and internet access, digital violence and safety, current measures to prevent digital violence, as well as perceptions on the further action needed to prevent digital rights violations.
We had over a thousand respondents across nine countries, representing a range of groups, including community members, activists, journalists, and members of civil society organizations. And of course, we also looked at existing literature and other relevant studies, and importantly validated the findings through workshops with the groups involved. So I will now pass it on to our panel to present key findings from each country, as well as provide an overview of ongoing efforts and lessons learned in their work to promote digital rights and accountable internet governance in their respective contexts. Snow, would you like to start?


Tran Thi Tuyet: Hello, everyone, and it’s nice to meet you all here. I’m Snow from the Institute for Policy Studies and Media Development, a think tank based in Vietnam focusing on digital technology policies. And it’s my honor to be here, and thank you so much, Mia and Osmar Malan, for giving us this opportunity to share here. And without further delay, I will start my presentation now. In Vietnam, digital transformation, or chuyển đổi số, has been a national priority in recent years, with the government recognizing that digitalization is a comprehensive socioeconomic shift. The government’s leave-no-one-behind commitment aimed to guarantee that all citizens can benefit from and participate in the digital era. The cornerstone of this vision is to ensure digital rights for everyone. But despite notable progress, the reality for many is still marked by barriers to digital access and participation. So our challenge and opportunity is to ensure that digital rights are not just a slogan, but a reality for all. In Vietnam, more and more people are using digital services. Each day, between three and six million people log into these platforms. This is three to four times more than just a year ago. Nearly half of all government services are now fully online. This number reflects the country’s strong push for digitalization. But the real question is, do these positive numbers actually mean everyone can access these services? While official reports show promising figures, significant challenges remain in digital infrastructure, digital skills, and digital access. According to our research in mid-2024, none of Vietnam’s 63 provincial public service portals fully meet standards for user-friendliness or accessibility, even though these portals are the main gateway for government services.
So for marginalized communities, such as migrant workers, getting access to these services is even more critical, since it is a necessity for claiming entitlements under essential social protection policies. As digital services become a part of daily life in Vietnam, concerns about data protection and online safety are growing, and Vietnam has also started building legal frameworks such as the personal data protection decree and a new law, but challenges still remain. Personal data leaks and the buying and selling of personal information without identification still happen. And misinformation in the online digital environment is common, as 44% of people in our recent research also say that they cannot tell whether the information they saw was accurate or fake. So this confusion makes it easier for scammers, especially among people with limited digital skills, like the marginalized communities. A survey by the National Cyber Security Association found that one in every 220 smartphone users in Vietnam has fallen victim to online scams. These challenges stem from several core issues that our analysis has identified. Digital policies in Vietnam are still largely developed through a top-down approach led by state agencies, with consultation processes that are often formalistic and lack meaningful engagement. There remains an absence of participation from marginalized communities and their representative organizations during both the design and implementation of digital policies. Furthermore, there is a lack of robust coordination mechanisms between the public sector, the private sector, civil society, and relevant stakeholders that advocate for digital rights. So from the perspective of a policy research organization, we observe both remarkable progress and persistent challenges. Hence, our policy recommendations aim to provide a strategic roadmap that integrates the policy, technical, and cooperative dimensions.
So we would like to highlight four fundamental lessons. The first one: digital transformation is a multi-component ecosystem requiring synchronization between policy, technology, and people. The second one: inclusive policy design with a tentative participation mechanism and grassroots support institutions is necessary. The third one: ensuring digital rights is a prerequisite for greater public participation in the transformation process, especially for marginalised groups. And the fourth one: establish multi-level, multi-layer and multilateral cooperation mechanisms to strengthen the capacity, voice and policy influence of social organisations and disadvantaged communities. So before I conclude my presentation, I would like to invite you to explore our research on this topic. Our detailed recommendations are available via the QR code on the screen. And we hope that Vietnam’s experience and lessons learned can contribute to the global conversation on digital rights for all. Thanks for your attention.


Mia Marzotto: Thank you, Snow, very much. From Vietnam to Bolivia. Christian, please.


Cristian Leon: Sure. Thank you, Mia. Good morning to all the wonderful panellists and the friends participating on site and online. So I represent an organisation called Internet Bolivia Foundation, a digital rights organisation established in 2018. We work to generate digital inclusion, protection of digital rights and the fostering of capacities for the most vulnerable populations, especially in the fight against digital gender violence. We do this in a context in which three things are combined. The first one is a digital divide that still affects 30% of the population in urban areas and almost 70% in the rural areas. So we are really far from achieving meaningful connectivity. The second thing is a strong push from the government and legislators to move forward a digital transformation, but a digital transformation motivated more by digital technosolutionism than by real necessities. And this is exposing people to potential abuses of their rights, such as privacy, freedom of speech, and participation, among others. And the third is a very hostile environment for human rights defenders, for women activists, for LGBTQIA populations, all the vulnerable populations. So they are connected but without any safeguards to their participation. That is why most of our work has been focused on fostering the capacities of these populations and advocating for the respect of our rights. And that is why we are working with the RECIPE group to join forces together with other organizations from the Global South and the Global North for achieving these objectives. So we made a mapping of the current threats. And my colleague Luan, she’s connected, and she will explain to you a little more about our results. So Luan, please.


Online moderator: Thank you very much. Hi, everyone in the session. Good morning. Well, about the key findings, we saw that 77% suffered a digital security incident and 62% experienced some type of digital violence in the last year. Another important result was that 77% said that the acts of digital violence affecting them may have a specific relationship with their status as human rights defenders. That is a very important result. And the four main threats identified were harassment, hate speech, physical or sexual threats, and public defamation. So one way of responding to these results was the creation of a network of feminist collectives. Some of the objectives of this network are to generate support for digital security, among others. First, methodologies with pedagogical and practical dynamics were identified, such as awareness-raising sessions, spaces for horizontal reflection, and articulation for advocacy in public policies in the political arena. This network seeks to provide support to victims and carry out collective prevention actions. It’s important to mention that this network is composed of very large feminist collectives in Bolivia that are now working on strategies and creating horizontal pedagogical methodologies in order to improve their skills in digital security.


Mia Marzotto: Thank you very much to you both. Theary, do you want to talk to us about Cambodia?


Theary Luy: Yes, thank you, Mia, and good morning all. I’m honored to speak with you on behalf of the Cooperation Committee for Cambodia, CCC. At CCC, we are a membership-based organization in Cambodia that works in inclusive partnership to promote good governance, an enabling environment, and sustainability for civil society organizations in Cambodia. Currently, we have 200 NGOs as members, including local and international NGOs. Besides that, we also collaborate with the provincial-based NGOs that work at the ground level with their members, more than 400 NGOs. And back to the digital context in Cambodia: Cambodia stands at a digital crossroads, and over the past decade we have witnessed the rapid growth of the Internet and the transformative power of social media in how our people communicate, engage, and access information. However, this digital growth has not come without challenges. So I would like to highlight the challenges based on the key findings of the research. First, digital literacy remains alarmingly low in Cambodia. Based on the research, fewer than 30% of Cambodians, especially rural youth and grassroots civil society organizations, possess the skills needed to navigate the digital world safely and effectively. This digital divide is not just about access to technology, but about access to opportunity, to participation, and to protection. And second, our legal framework is still catching up. Cambodia has a digital government policy 2022 to 2035, which focuses on three pillars: digital government, digital economy, and digital citizens. However, Cambodia lacks comprehensive laws. Several laws are in draft form; they are in process. These laws include the cybersecurity law, cybercrime law, and personal data protection law.
And the last one, the third: the absence of clear and rights-based legislation makes citizens vulnerable to surveillance, misinformation, and data misuse. This leads to growing stress on digital platforms due to online scams and online gaming. There are a lot of things that we must urgently resolve. However, there is hope and there is action. At CCC, we are working with the provincial NGO networks to promote digital awareness and security. These local organizations are not only receiving training, but they are becoming trainers themselves to transfer the knowledge to their members and also their communities. And we are also engaging youth and social influencers to lead public campaigns that make digital rights relevant. Another thing is the importance of our work in policy dialogue. In Cambodia, we believe in a multi-stakeholder approach and inclusive dialogue. We are bringing together the government, civil society, development partners, and also the private sector to raise awareness of digital laws and policies and to ensure the voices of civil society are heard in shaping them. So for us, this multi-stakeholder dialogue is not a consultation, but it’s a way to build trust among key stakeholders. So from this effort, we learned that local ownership, inclusive dialogue, and youth engagement are the keys to building a digital future that is safe, equitable, and empowering for Cambodian citizens and also the world. Thank you.


Mia Marzotto: Thank you, Theary. Mohamed?


Mohamed Aded Ali: Thank you very much. My name is Mohamed Aded Ali. I’m the Executive Director of the Somali Civil Society Network, SONSA. SONSA is a multisectoral platform of civil society organizations. I would like to thank the panelists as well as the IGF community, and in particular the IGF Norway secretariat that organized this significant event. In the general Somali context, media oversight and regulatory barriers limit digital advocacy on human rights, gender, economic and climate justice, and democracy. But digital access can help drive equitable development, accountable governance, gender equality, climate action, and human rights, empowering citizens to voice their needs and hold leaders accountable. Increasing digital capacity, rights awareness, and security practices so that Somali civil society organizations can navigate constraints and maximize impact is critical. In Somalia, we have different stakeholders engaged in the digital and Internet sector: government institutions, the private sector, and other civil society organizations, mainly human rights defenders. We have the Digital Rights Authority; the National Identification and Registration Authority; the National Communication Authority, NCA; and the Somali National Telecommunication and Technology Institute, as well as tech companies as private sector actors, and human rights organizations and defenders. In early 2024, SONSA conducted a mapping and assessment addressing digital rights capacity, focusing on Internet access, digital literacy, social media use, digital rights awareness, digital violence, and digital security. The findings from the assessment are, number one, on internet access: 98% of the population connect to the internet, primarily from home, the workplace, internet cafes, public services, and friends’ networks. 
In terms of digital literacy, 98% reported that digital literacy is very important, and 90% can send messages through their mobiles and receive emails. In terms of social media use, 44% of the population, mainly young people, use Facebook and TikTok, while 32% use YouTube. There is also Twitter, which politicians and decision makers use professionally for their work and to display the achievements of government institutions. On digital rights awareness, the assessment found that only 28% understand the risks of sharing personal information online. On digital violence, 42% have experienced digital violence, and 37% report gender-based incidents such as harassment and other sexually offensive behavior. In terms of digital security, 44% have experienced basic incidents such as account theft and other scams. 69% have adopted basic measures such as blocking, reporting, and updates, but advanced practices such as secure communication and multi-factor authentication remain limited. As for ongoing efforts, strategies, and lessons learned: first, strengthen civil society organizations’ digital skills by delivering workshops on encryption, privacy, phishing, and cybersecurity attack awareness, as well as secure communication and safe device practices. It is also very important to raise public awareness by launching nationwide campaigns on tech-facilitated gender-based violence and online safety through radio, social media, and community forums. 
The other one: engage policymakers. It is very important to convene policy dialogues and roundtables connecting civil society organizations with regulators and decision-makers to align the ICT framework with digital rights protection. And sustain capacity building: distribute practical toolkits, e.g. digital safety checklists, incident report guides, and template privacy policies, and provide follow-up support so CSOs can apply and update practices over time. So thank you very much, I need to conclude my presentation. Over to you.


Mia Marzotto: Thank you, Mohamed. We’ll now go to Khadeja, who’s online, thank you.


Khadeja Ibrahim: Hello, everyone, thank you for having me, and thank you, Mia. Just a second. Yeah, so I’m calling in from the West Bank. I work with an organization called Muftah. We work domestically on promoting good governance and democracy, and on highlighting how Israeli violations affect women and girls in gender-specific ways. So to give a quick background: when we talk about digital rights here, they are unfortunately non-existent, and we face a variety of violations. To give a quick overview: first, Israel’s use of advanced surveillance, including biometric data. We’ve seen the development of AI tools such as Blue Wolf, Lavender, and Where’s Daddy, which are deployed to monitor and control Palestinian movement, often without legal oversight. There is also extensive social media censorship. Palestinian content is subject to arbitrary censorship, account suspensions, and surveillance on platforms like Facebook and Instagram, and the Facebook law and vague anti-terrorism legislation are weaponized to criminalize online dissent and restrict freedom of expression. There is also repressive technology infrastructure in the occupied West Bank and the Gaza Strip, where discriminatory practices limit Palestinian access to ICT infrastructure. For example, Palestinians still use 3G Internet networks, while Israeli settlers living illegally in the West Bank have full access to 4G and 5G networks. We’re also seeing a huge shrinking of civic space: civil society actors, journalists, and activists face intimidation, arrest, and spyware targeting through Israeli technologies. So, to speak of the mapping assessment that we’ve done at Muftah, we collected questionnaires from 55 Palestinian organizations based in the West Bank, including Jerusalem, and the Gaza Strip. Some of the key findings from that mapping assessment: first, we see a widespread gap in digital awareness and security capacity. 23% of CSOs lack basic knowledge of digital rights. 
Only 14.5% of them rate their awareness of digital threats as high. 52% have no digital rights protection policies. We also see that after October 2023, there have been severe challenges: 40% of institutions faced direct digital violations. We also see institutional vulnerability due to limited resources: nearly 75% of CSOs lack technological resources, 55% lack digital knowledge, and 36% lack legal support, and small and medium-sized institutions are more vulnerable. Nearly 86% of CSOs received no technological or legal support post-October 2023, when the risks were highest. There is also a lack of legal and policy frameworks: 72% of CSOs are unaware of local digital rights legislation, and over half of them believe that existing cybercrime laws may be ineffective, or are unsure of their effectiveness. Nearly 62% of CSOs believe that the Palestinian government is not doing enough to protect digital rights. And we see that the overwhelming majority of them have an urgent need for capacity building and international support. This means many of them seek security training, and they want training on digital rights to raise awareness. So, to speak of the main activities that we’ve been doing at Muftah: we’ve held two rounds of trainings on digital rights for CSOs and CBOs from marginalized communities across the West Bank. We’ve had a policy meeting with CSOs on the cybercrimes draft law. We’ve hosted a diplomatic briefing with diplomats based in Palestine on our digital rights report. We are also continuing to produce social media content based on our evidence-based research, and we plan to start a podcast series based on it as well. And we participated in the Palestinian Digital Activism Forum back in March, which is hosted by the organization 7amleh. We will continue doing the work that we have been doing for the past year. So that’s it from my end. Thank you.


Mia Marzotto: Thank you very much, Khadeja, and thank you all so much for the presentations. I just would like to underline how clear it is, every time I hear you speak, that as much as there are clear differences between the various countries you come from and the communities you work with, there are some common digital rights threats and issues, and also some important work from civil society organizations like the ones you represent, which shows that a better, more rights-respecting digital ecosystem is possible. So we want to open it up to any questions or reflections from the audience, to any of the speakers or in general. We have a few questions that we came up with that we would like to hear from you on, but of course any and all questions are welcome, also from those listening in online. There are mics at the sides of the room in case anybody wants to ask a question; otherwise, I have questions for our panel here. Any questions? Okay, maybe I can break the ice with a question to you all, and then I’ll also monitor online for any questions. One question for any of you who want to answer: can you share a little bit more about what you have done to share these important research findings and analyses with stakeholders and duty bearers at the national and international levels, and whether there have been any reactions or actions taken on their part? And then, how have you communicated those actions back to the groups involved in the research, to establish meaningful two-way accountability in digital governance? Anybody would like to start?


Theary Luy: Yeah, from Cambodia actually, based on the findings of the research we conducted in Cambodia, we organized a workshop that disseminated the findings to civil society and other stakeholders, and based on that, we validated the findings. As I mentioned, we take action at the ground level through training for grassroots organizations and the communities there. And that is not only about awareness raising on digital issues, but also about policy implementation at the national level, which needs the voices of the community and of the civil society working there to be considered in the policy framework. So the policy dialogue that I mentioned is very important. Actually, we have one planned for the 3rd of July and another at the end of this year: policy dialogues that allow the audience, the community, the civil society at the ground level, and their representatives to meet with policymakers to raise all the challenges. So that’s why the multi-stakeholder dialogue is very important: it is not just a consultation, but a platform that channels and considers the demands of the community. Yes, thank you.


Cristian Leon: Yeah, as Luan mentioned, in Bolivia we are creating this network of grassroots women’s organizations that want to work together on these issues, but from a bottom-up perspective, with the idea of not only creating tools, digital security tools and others, but also carrying out advocacy actions together. And this is really important because Bolivia is in a moment of elections, and you know these moments normally open windows to create new policies and to discuss things with politicians. So I think this is a very key moment to organize, to create coalitions, and to do advocacy together. This is one of the actions that we are doing with the Internet Bolivia Foundation.


Mia Marzotto: Thank you. Mohamed?


Mohamed Aded Ali: Yeah, thank you very much. In our context, the digital arena is a new ecosystem, because most civil society organizations and individuals face challenges in terms of capacity and knowledge of the digital world. In Somalia, it is mostly the private sector and civil society organizations that are concentrating on digitalization and the use of other technologies, but the wider population still does not have much knowledge of digital issues. After we started this RECIPE project, we engaged different stakeholders and decision makers, for example the government institutions. In terms of legislation, some laws are still being drafted, and some of the policies are on the desk of the cabinet and have not been passed yet. So that is our context. But in terms of our engagement, we coordinate with different stakeholders. We established a digital task force committee, which comprises decision makers, civil society organizations, and tech companies, to work together, considering digitalization part of the basic rights of individuals. Thank you.


Mia Marzotto: Thanks, Mohamed. Perhaps, Khadeja, would you like to come in on this? I know you mentioned the Palestinian Digital Activism Forum, but maybe there’s more.


Khadeja Ibrahim: Yes, so we’re actually part of the Palestinian Digital Rights Initiative Coalition, which is a group of CSOs working in the realm of digital rights within Palestine. We contribute with them on a monthly basis, share findings and insights, learn from each other, and also learn about upcoming opportunities from one another. It was through this coalition that we were encouraged to apply to PDAF, the Palestinian Digital Activism Forum. This was a two-day forum of speakers, panels, and workshops teaching about digital rights in Palestine. We participated in that, and we were able to reach an audience within Palestine: students, other organizations, other activists. So that was a really great opportunity for us as well. In addition to that, we continue to share our content online through social media. We find that social media is a very effective tool to reach people, not only internationally. We have had students at universities in Palestine reach out to us because of our content, asking questions, maybe because they were doing research on the topic, so we were able to help them in that capacity. Thank you.


Mia Marzotto: Snow, yeah?


Tran Thi Tuyet: Yeah, actually, IPS in Vietnam acts as a bridge and facilitator for this process. Usually we follow a bottom-up approach, which means we start from the grassroots level, through surveys, consultations, and other participatory activities, to collect inputs from all relevant stakeholders. Once we have gathered sufficient perspectives, we organize in-depth discussions with policymakers to share the findings, and these findings are always accompanied by clear and actionable recommendations, so that we can work alongside policymakers to drive positive change. For example, over the last three years we have been heavily involved in issues related to digital governance and online public service delivery. One of our concrete recommendations, about consolidating the 63 provincial portals into a single national portal, as I mentioned in my presentation, was adopted by the policymakers. Additionally, our user experience assessments are now being adopted as part of the national standards, and we continue to work closely with policymakers and other government authorities to refine these standards. And when recommendations are taken on board, it becomes much easier to bring policymakers, government agency representatives, and other civil society organizations together for direct dialogue. I think that’s all.


Mia Marzotto: Thank you. Any brave souls in the room with questions or reflections? No? Okay. Then I do have a final… oh yeah, please, would you like to come up to the mic? Thank you. Hello.


Audience: Thank you, everyone. I don’t know if you can hear me. Yes? Yes. So my name is Enes Mafuta. I’m from Zambia, and I’m coming from a technical background; I’m a standards engineer. So I’ll just talk briefly on, I think, question three, on digital rights and how the internet governance community should prioritize a more rights-respecting approach. I’ve seen all your amazing presentations and the issues you’ve highlighted. One thing that I’ve noticed is that the issues we’re facing in different regions are interlinked. You talk of Asia, South America, Africa: there’s one common denominator, and that is digital literacy, which has been a challenge in terms of accessing these rights. So with regard to that question, one of the things that I would like to see the international community work on is this: access is a right. We’ve said it, access is a right. But there’s something that will always block access. Governments will say they are safeguarding platforms from misinformation, deepfakes, and everything. So there’s a need to balance cybersecurity resilience while upholding freedom of expression and access. One example that I can give you in a few minutes is how my country amended its cybersecurity and cybercrime law of 2025. This bill was assented to in a private manner, and when the society came to learn about it, it created a perception out there that this is a bad law, that they are trying to curtail our freedom of speech and access and everything. So it became a situation where people started throwing memes, insults, and everything, and there were arrests and so on. In a nutshell, I wouldn’t blame the government, because there were some cybercrime offenders; but on the other hand, there were some people who were simply ignorant of this law. There was not much awareness. 
So I would encourage the civil society space to at least advocate for algorithmic detection that can tackle misinformation and deepfakes, because once wrong information is sent out there, it is weaponized, and once it is weaponized, it creates a perception in the minds of users. When it creates that perception, the users themselves are limited by that information: they are going to accept it, and when you accept that definition, it ends up limiting you, and you live with it even though it’s a lie. So this is just my observation. Let’s work together, technical community and civil society; let’s find ways to tackle this misinformation and these deepfakes. Thank you very much.


Mia Marzotto: Thank you very much. I’m aware we have only one minute left, and I’m sure there are lots of thoughts around me on this. I think this is the reason why spaces like the IGF are really important and the multi-stakeholder approach is very important, right? In that balancing exercise which you mentioned, having a diversity of voices and experiences participate is really important to find the right balance, because it is a balancing act indeed. Like I mentioned, this is the start of the conversation, hopefully, although I know it may be the last day of this IGF, but we do want to continue the conversation. There are some resources at the back, and of course you can come up to us for our contact details. I want to thank you all, panelists, and also those online, those in the room, and the technical team making this hybrid session possible. Again, thank you, have safe travels back home, and let’s continue the discussion. Thanks again. Thank you.



Mia Marzotto

Speech speed

131 words per minute

Speech length

1236 words

Speech time

564 seconds

Multi-country research conducted across 9 countries with over 1000 respondents examining digital literacy, internet access, digital violence, and prevention measures

Explanation

Oxfam’s Recipe Project conducted comprehensive research across nine countries to map digital rights capacities and threats. The assessment included questions on four main topics: digital literacy and internet access, digital violence and safety, current prevention measures, and perceptions on needed actions to prevent digital rights violations.


Evidence

Over 1000 respondents across nine countries, representing community members, activists, journalists, and civil society organizations. Findings were validated through workshops with involved groups.


Major discussion point

Digital Rights Landscape and Threats Assessment


Topics

Development | Human rights


Importance of multi-stakeholder approach in IGF spaces for finding right balance in digital governance

Explanation

The multi-stakeholder approach is essential for balancing different interests in digital governance, particularly when addressing the tension between cybersecurity measures and freedom of expression. Having diverse voices and experiences participate is crucial for finding the right balance in this complex area.


Evidence

Reference to IGF as important space for multi-stakeholder dialogue and the balancing exercise between security and rights


Major discussion point

Multi-stakeholder Engagement and Policy Dialogue


Topics

Legal and regulatory | Human rights


Disagreed with

– Audience

Disagreed on

Balance between cybersecurity measures and freedom of expression



Online moderator

Speech speed

98 words per minute

Speech length

214 words

Speech time

130 seconds

77% of respondents in Bolivia suffered digital security incidents and 62% experienced digital violence, with 77% linking attacks to their human rights defender status

Explanation

Research findings from Bolivia reveal extremely high rates of digital security incidents and violence among respondents. Most significantly, the vast majority of those experiencing digital violence believe it is directly related to their work as human rights defenders, indicating targeted attacks.


Evidence

Four main threats identified: harassment, hate speech, physical or sexual threats, and public defamation. Creation of network of feminist collectives as response strategy.


Major discussion point

Digital Rights Landscape and Threats Assessment


Topics

Human rights | Cybersecurity


Agreed with

– Khadeja Ibrahim
– Mohamed Aded Ali

Agreed on

High prevalence of digital violence and security incidents



Audience

Speech speed

163 words per minute

Speech length

489 words

Speech time

179 seconds

Need for balanced approach between cybersecurity resilience and freedom of expression, with civil society advocacy for algorithm detection to tackle misinformation

Explanation

There is a critical need to balance cybersecurity measures with protecting freedom of expression and access rights. Governments often justify restricting access by citing protection from misinformation and deepfakes, but this can lead to overreach and curtailment of legitimate expression.


Evidence

Example from Zambia where cybersecurity crime law was amended and assented privately, leading to arrests and public backlash due to lack of awareness and consultation


Major discussion point

Multi-stakeholder Engagement and Policy Dialogue


Topics

Human rights | Cybersecurity | Legal and regulatory


Agreed with

– Tran Thi Tuyet
– Theary Luy
– Mohamed Aded Ali

Agreed on

Digital literacy gaps as fundamental barrier to digital rights


Disagreed with

– Mia Marzotto

Disagreed on

Balance between cybersecurity measures and freedom of expression



Khadeja Ibrahim

Speech speed

132 words per minute

Speech length

814 words

Speech time

367 seconds

Palestinian organizations face widespread digital rights violations including surveillance, censorship, and discriminatory infrastructure access

Explanation

Palestinian civil society faces comprehensive digital rights violations including advanced surveillance through AI tools, social media censorship, and repressive technology policies. These violations are systematic and affect all aspects of digital participation and expression.


Evidence

Israel’s use of AI tools like Blue Wolf, Lavender, and Where’s Daddy for surveillance; arbitrary censorship on Facebook and Instagram; weaponization of anti-terrorism legislation


Major discussion point

Digital Rights Landscape and Threats Assessment


Topics

Human rights | Cybersecurity


Agreed with

– Online moderator
– Mohamed Aded Ali

Agreed on

High prevalence of digital violence and security incidents


Palestinians use 3G networks while Israeli settlers access 4G/5G, demonstrating discriminatory technology infrastructure

Explanation

There is clear discriminatory practice in technology infrastructure provision, where Palestinians in occupied territories are limited to older 3G networks while Israeli settlers illegally living in the same areas have full access to modern 4G and 5G networks. This creates a two-tiered system based on ethnicity and political status.


Evidence

Specific comparison between Palestinian access to 3G versus Israeli settler access to 4G and 5G networks in the West Bank


Major discussion point

Digital Infrastructure and Access Challenges


Topics

Development | Human rights | Infrastructure


72% of Palestinian CSOs are unaware of local digital rights legislation and believe government efforts are insufficient

Explanation

There is a significant knowledge gap among Palestinian civil society organizations regarding digital rights legislation, with the vast majority unaware of existing laws. Additionally, most organizations believe their government is not doing enough to protect digital rights, indicating both awareness and policy implementation failures.


Evidence

Over half believe existing cybercrime laws may be ineffective or are unsure of their effectiveness; nearly 62% believe Palestinian government efforts are insufficient


Major discussion point

Policy and Legal Framework Gaps


Topics

Legal and regulatory | Human rights


Agreed with

– Tran Thi Tuyet
– Theary Luy
– Mohamed Aded Ali

Agreed on

Inadequate legal and policy frameworks for digital rights protection


Palestinian organizations participating in Digital Rights Initiative Coalition for monthly coordination and knowledge sharing

Explanation

Palestinian civil society has organized into a coalition that meets regularly to coordinate efforts, share findings and insights, and learn from each other’s experiences. This coalition serves as a platform for mutual support and collective action on digital rights issues.


Evidence

Monthly meetings of the coalition, participation in Palestinian Digital Activism Forum, sharing opportunities and learning from each other


Major discussion point

Civil Society Capacity Building and Response Strategies


Topics

Human rights | Development



Cristian Leon

Speech speed

119 words per minute

Speech length

398 words

Speech time

200 seconds

Bolivia faces 30% digital divide in urban areas and 70% in rural areas, far from meaningful connectivity

Explanation

Bolivia has significant connectivity challenges with substantial portions of both urban and rural populations lacking internet access. The rural-urban divide is particularly stark, with rural areas facing much higher rates of digital exclusion, indicating the country is far from achieving universal meaningful connectivity.


Evidence

Specific statistics showing 30% digital divide in urban areas versus 70% in rural areas; context of government push for digital transformation without addressing basic access


Major discussion point

Digital Infrastructure and Access Challenges


Topics

Development | Infrastructure


Creation of feminist collective networks in Bolivia to provide digital security support and advocacy through horizontal pedagogical methodologies

Explanation

In response to high rates of digital violence, Bolivian organizations have created networks of feminist collectives that focus on mutual support and capacity building. These networks use horizontal learning approaches and combine direct support for victims with collective advocacy for policy change.


Evidence

Network composed of large feminist collectives working on digital security strategies; methodologies include awareness-raising sessions, horizontal reflection spaces, and advocacy for public policies


Major discussion point

Civil Society Capacity Building and Response Strategies


Topics

Human rights | Development



Tran Thi Tuyet

Speech speed

132 words per minute

Speech length

938 words

Speech time

425 seconds

Vietnam’s 63 provincial public service portals fail to meet user-friendliness standards despite government digitalization push

Explanation

Despite Vietnam’s strong commitment to digital transformation and the fact that nearly half of government services are now online, none of the provincial portals meet basic accessibility and user-friendliness standards. This creates particular barriers for marginalized communities like migrant workers who need these services for social protection.


Evidence

Nearly half of all government services are fully online; 3-6 million daily users on digital platforms; specific mention of challenges for migrant workers accessing social protection services


Major discussion point

Digital Infrastructure and Access Challenges


Topics

Development | Legal and regulatory


Vietnam lacks meaningful participation from marginalized communities in top-down digital policy development

Explanation

Vietnam’s digital policy development follows a top-down approach led by state agencies, with consultation processes that are often formalistic rather than substantive. There is a notable absence of participation from marginalized communities and their representative organizations in both policy design and implementation phases.


Evidence

Consultation processes described as ‘formalistic and lack meaningful engagement’; absence of marginalized communities in policy design and implementation; lack of coordination between public, private, and civil society sectors


Major discussion point

Policy and Legal Framework Gaps


Topics

Legal and regulatory | Human rights | Development


Agreed with

– Theary Luy
– Mohamed Aded Ali
– Khadeja Ibrahim

Agreed on

Inadequate legal and policy frameworks for digital rights protection


Disagreed with

– Theary Luy

Disagreed on

Approach to policy development – top-down versus bottom-up methodologies


Vietnam’s bottom-up approach involves grassroots consultation followed by policymaker engagement with actionable recommendations

Explanation

The Institute for Policy Studies uses a bottom-up methodology starting with grassroots surveys and consultations to gather stakeholder input, then organizes discussions with policymakers to share findings. Their approach emphasizes providing clear, actionable recommendations that can drive concrete policy changes.


Evidence

Example of recommendation to consolidate 63 provincial portals into single national portal being adopted; user experience assessments becoming part of national standards; continued collaboration with policymakers to refine standards


Major discussion point

Multi-stakeholder Engagement and Policy Dialogue


Topics

Legal and regulatory | Development



Mohamed Aded Ali

Speech speed

106 words per minute

Speech length

764 words

Speech time

430 seconds

In Somalia, 42% experience digital violence with 37% reporting gender-based incidents, while 44% face basic security incidents like account theft and scams

Explanation

Somalia shows significant rates of digital violence and security incidents, with a notable gender dimension to the violence experienced. The prevalence of basic security incidents like account theft indicates widespread vulnerabilities in digital security practices among the population.


Evidence

98% internet connectivity from various sources; 69% adopted basic security measures like blocking and reporting but advanced practices remain limited; 28% understand risks of sharing personal information online


Major discussion point

Digital Rights Landscape and Threats Assessment


Topics

Human rights | Cybersecurity


Agreed with

– Online moderator
– Khadeja Ibrahim

Agreed on

High prevalence of digital violence and security incidents


Somalia shows 98% internet connectivity but limited advanced security practices and digital literacy gaps

Explanation

While Somalia has achieved high levels of basic internet connectivity through various means including home, workplace, and public access points, there remain significant gaps in digital literacy and advanced security practices. Most people can perform basic functions but lack sophisticated digital skills.


Evidence

98% connected through various networks; 90% can send messages and receive emails; 44% use Facebook and TikTok; only 28% understand risks of sharing personal information; advanced security practices remain limited


Major discussion point

Digital Infrastructure and Access Challenges


Topics

Development | Cybersecurity


Agreed with

– Tran Thi Tuyet
– Theary Luy
– Audience

Agreed on

Digital literacy gaps as fundamental barrier to digital rights


Somalia has various regulatory bodies but limited legislative framework with policies still in draft or cabinet review

Explanation

Somalia has established multiple institutional bodies to oversee digital and telecommunications sectors, including regulatory authorities and technical institutes. However, the legislative framework remains incomplete with many policies still in development stages rather than being implemented.


Evidence

Multiple institutions mentioned: Digital Rights Authority, National Identification and Registration Authority, National Communication Authority, Somali National Telecommunication and Technology Institute; policies described as ‘still drafting’ or ‘on desk of cabinet’


Major discussion point

Policy and Legal Framework Gaps


Topics

Legal and regulatory | Infrastructure


Agreed with

– Tran Thi Tuyet
– Theary Luy
– Khadeja Ibrahim

Agreed on

Inadequate legal and policy frameworks for digital rights protection


Establishment of digital task force committee in Somalia comprising decision makers, CSOs, and tech companies

Explanation

Somalia has created a collaborative platform bringing together government decision makers, civil society organizations, and private sector tech companies to work collectively on digitalization issues. This multi-stakeholder approach recognizes digitalization as a basic right that requires coordinated effort across sectors.


Evidence

Digital task force committee includes decision makers, civil society organizations, and tech companies; focus on digitalization as a basic right of individuals


Major discussion point

Civil Society Capacity Building and Response Strategies


Topics

Legal and regulatory | Development


T

Theary Luy

Speech speed

116 words per minute

Speech length

736 words

Speech time

379 seconds

In Cambodia, fewer than 30% of citizens possess skills to navigate digital world safely, with particular gaps among rural youth and grassroots organizations

Explanation

Cambodia faces a severe digital literacy crisis with the vast majority of citizens lacking the skills needed for safe and effective digital participation. The problem is particularly acute among rural youth and grassroots civil society organizations, creating a significant barrier to digital inclusion and protection.


Evidence

Digital divide described as ‘not just about access to technology, but access to opportunity, participation and protection’; specific mention of rural youth and grassroots CSOs as most affected groups


Major discussion point

Digital Rights Landscape and Threats Assessment


Topics

Development | Human rights


Agreed with

– Tran Thi Tuyet
– Mohamed Aded Ali
– Audience

Agreed on

Digital literacy gaps as fundamental barrier to digital rights


Cambodia lacks comprehensive cybersecurity, cybercrime, and personal data protection laws despite having digital government policy

Explanation

While Cambodia has established a digital government policy framework for 2022-2035 focusing on digital government, economy, and citizenship, the country lacks essential legal protections. Key laws including cybersecurity, cybercrime, and personal data protection remain in draft form, leaving citizens vulnerable.


Evidence

Digital government policy 2022-2035 with three pillars mentioned; cybersecurity law, cybercrime law, and personal data protection law all described as ‘in draft form’ and ‘in process’


Major discussion point

Policy and Legal Framework Gaps


Topics

Legal and regulatory | Cybersecurity


Agreed with

– Tran Thi Tuyet
– Mohamed Aded Ali
– Khadeja Ibrahim

Agreed on

Inadequate legal and policy frameworks for digital rights protection


Training grassroots organizations in Cambodia to become trainers themselves, transferring knowledge to communities

Explanation

Cambodia’s approach involves building capacity at the grassroots level by training local organizations who then become trainers for their own members and communities. This creates a multiplier effect and ensures knowledge transfer is culturally appropriate and sustainable.


Evidence

Work with provincial NGO networks; local organizations ‘not only receiving training but becoming trainers themselves’; engagement with youth and social influencers for public campaigns


Major discussion point

Civil Society Capacity Building and Response Strategies


Topics

Development | Human rights


Policy dialogue in Cambodia serves as platform for community voice consideration rather than mere consultation

Explanation

Cambodia emphasizes that their multi-stakeholder policy dialogue goes beyond traditional consultation to become a genuine platform for incorporating community voices into policy decisions. This approach aims to build trust among stakeholders and ensure meaningful participation in shaping digital laws and policies.


Evidence

Multi-stakeholder approach bringing together government, civil society, development partners, and private sector; policy dialogue described as ‘not consultation, but way to build trust’ and ‘platform that change and consider demand of community’


Major discussion point

Multi-stakeholder Engagement and Policy Dialogue


Topics

Legal and regulatory | Human rights


Disagreed with

– Tran Thi Tuyet

Disagreed on

Approach to policy development – top-down versus bottom-up methodologies


Agreements

Agreement points

Digital literacy gaps as fundamental barrier to digital rights

Speakers

– Tran Thi Tuyet
– Theary Luy
– Mohamed Aded Ali
– Audience

Arguments

Vietnam lacks meaningful participation from marginalized communities in top-down digital policy development


In Cambodia, fewer than 30% of citizens possess skills to navigate digital world safely, with particular gaps among rural youth and grassroots organizations


Somalia shows 98% internet connectivity but limited advanced security practices and digital literacy gaps


Need for balanced approach between cybersecurity resilience and freedom of expression, with civil society advocacy for algorithm detection to tackle misinformation


Summary

All speakers identified digital literacy as a critical challenge affecting their populations’ ability to safely and effectively participate in digital spaces, with particular impact on marginalized communities


Topics

Development | Human rights


High prevalence of digital violence and security incidents

Speakers

– Online moderator
– Khadeja Ibrahim
– Mohamed Aded Ali

Arguments

77% of respondents in Bolivia suffered digital security incidents and 62% experienced digital violence, with 77% linking attacks to their human rights defender status


Palestinian organizations face widespread digital rights violations including surveillance, censorship, and discriminatory infrastructure access


In Somalia, 42% experience digital violence with 37% reporting gender-based incidents, while 44% face basic security incidents like account theft and scams


Summary

Multiple countries report extremely high rates of digital violence and security incidents, with particular targeting of human rights defenders and gender-based violence


Topics

Human rights | Cybersecurity


Inadequate legal and policy frameworks for digital rights protection

Speakers

– Tran Thi Tuyet
– Theary Luy
– Mohamed Aded Ali
– Khadeja Ibrahim

Arguments

Vietnam lacks meaningful participation from marginalized communities in top-down digital policy development


Cambodia lacks comprehensive cybersecurity, cybercrime, and personal data protection laws despite having digital government policy


Somalia has various regulatory bodies but limited legislative framework with policies still in draft or cabinet review


72% of Palestinian CSOs are unaware of local digital rights legislation and believe government efforts are insufficient


Summary

All countries face significant gaps in legal frameworks for digital rights protection, with laws either missing, in draft form, or lacking meaningful stakeholder participation in development


Topics

Legal and regulatory | Human rights


Similar viewpoints

All speakers emphasized bottom-up, community-driven approaches to building digital rights capacity, with focus on training local organizations to become trainers and creating multi-stakeholder platforms for collaboration

Speakers

– Cristian Leon
– Tran Thi Tuyet
– Theary Luy
– Mohamed Aded Ali

Arguments

Creation of feminist collective networks in Bolivia to provide digital security support and advocacy through horizontal pedagogical methodologies


Vietnam’s bottom-up approach involves grassroots consultation followed by policymaker engagement with actionable recommendations


Training grassroots organizations in Cambodia to become trainers themselves, transferring knowledge to communities


Establishment of digital task force committee in Somalia comprising decision makers, CSOs, and tech companies


Topics

Development | Human rights


Strong consensus on the importance of meaningful multi-stakeholder engagement that goes beyond consultation to genuine participation in policy development and implementation

Speakers

– Tran Thi Tuyet
– Theary Luy
– Mia Marzotto
– Audience

Arguments

Vietnam’s bottom-up approach involves grassroots consultation followed by policymaker engagement with actionable recommendations


Policy dialogue in Cambodia serves as platform for community voice consideration rather than mere consultation


Importance of multi-stakeholder approach in IGF spaces for finding right balance in digital governance


Need for balanced approach between cybersecurity resilience and freedom of expression, with civil society advocacy for algorithm detection to tackle misinformation


Topics

Legal and regulatory | Human rights


Unexpected consensus

Infrastructure disparities as tool of discrimination and control

Speakers

– Khadeja Ibrahim
– Cristian Leon

Arguments

Palestinians use 3G networks while Israeli settlers access 4G/5G, demonstrating discriminatory technology infrastructure


Bolivia faces 30% digital divide in urban areas and 70% in rural areas, far from meaningful connectivity


Explanation

Both speakers highlighted how infrastructure access disparities serve as mechanisms of exclusion and control, whether through deliberate discrimination (Palestine) or systemic neglect (Bolivia rural areas)


Topics

Development | Human rights | Infrastructure


Gender-specific targeting in digital violence across different contexts

Speakers

– Online moderator
– Mohamed Aded Ali
– Cristian Leon

Arguments

77% of respondents in Bolivia suffered digital security incidents and 62% experienced digital violence, with 77% linking attacks to their human rights defender status


In Somalia, 42% experience digital violence with 37% reporting gender-based incidents, while 44% face basic security incidents like account theft and scams


Creation of feminist collective networks in Bolivia to provide digital security support and advocacy through horizontal pedagogical methodologies


Explanation

Unexpected consensus emerged on the gendered nature of digital violence across very different political and social contexts, leading to similar feminist organizing responses


Topics

Human rights | Cybersecurity


Overall assessment

Summary

Strong consensus exists among speakers on fundamental challenges including digital literacy gaps, inadequate legal frameworks, high rates of digital violence, and the need for bottom-up, multi-stakeholder approaches to digital rights protection


Consensus level

High level of consensus despite diverse geographical and political contexts, suggesting these are universal challenges in digital rights implementation. The agreement on solutions – particularly community-driven capacity building and meaningful stakeholder engagement – indicates potential for coordinated global action and shared learning across regions


Differences

Different viewpoints

Approach to policy development – top-down versus bottom-up methodologies

Speakers

– Tran Thi Tuyet
– Theary Luy

Arguments

Vietnam lacks meaningful participation from marginalized communities in top-down digital policy development


Policy dialogue in Cambodia serves as platform for community voice consideration rather than mere consultation


Summary

Vietnam’s speaker critiques top-down policy approaches as formalistic, while Cambodia’s speaker presents their multi-stakeholder dialogue as genuinely inclusive, suggesting different views on what constitutes meaningful participation


Topics

Legal and regulatory | Human rights | Development


Balance between cybersecurity measures and freedom of expression

Speakers

– Audience
– Mia Marzotto

Arguments

Need for balanced approach between cybersecurity resilience and freedom of expression, with civil society advocacy for algorithm detection to tackle misinformation


Importance of multi-stakeholder approach in IGF spaces for finding right balance in digital governance


Summary

The audience member emphasizes technical solutions like algorithm detection for misinformation, while the moderator focuses on multi-stakeholder processes for balance, representing different approaches to the same challenge


Topics

Human rights | Cybersecurity | Legal and regulatory


Unexpected differences

No significant unexpected disagreements identified

Explanation

The session was structured as a collaborative sharing of experiences rather than a debate, with speakers presenting complementary rather than conflicting perspectives on digital rights challenges



Overall assessment

Summary

The discussion showed minimal direct disagreement, with most differences arising from varying national contexts and implementation approaches rather than fundamental philosophical disagreements about digital rights


Disagreement level

Low level of disagreement with high consensus on core issues. The main tensions were around implementation methodologies rather than goals, which suggests strong potential for collaborative solutions and knowledge sharing across different contexts




Takeaways

Key takeaways

Digital rights violations are widespread across Global South countries, with common patterns including low digital literacy (under 30% in Cambodia), high rates of digital violence (77% in Bolivia experienced security incidents), and inadequate legal frameworks


Marginalized communities face the greatest digital risks, particularly women, human rights defenders, rural populations, and migrant workers who lack access to protective measures and digital skills


Top-down policy approaches without meaningful community participation are ineffective – successful digital governance requires bottom-up engagement and multi-stakeholder dialogue


Civil society organizations are developing innovative capacity-building strategies including peer-to-peer training models, feminist collective networks, and horizontal pedagogical methodologies


Infrastructure discrimination exists even within countries (Palestinians have 3G while Israeli settlers have 4G/5G access), highlighting how digital divides can be tools of oppression


Digital transformation initiatives often prioritize techno-solutionism over addressing real community needs and protecting fundamental rights


Resolutions and action items

Continue multi-country collaboration through the Recipe Project to strengthen civil society digital rights advocacy


Establish and maintain regular coordination mechanisms like Cambodia’s multi-stakeholder policy dialogues and Palestine’s Digital Rights Initiative Coalition


Develop practical toolkits and resources for digital security training that can be adapted across different contexts


Create networks of grassroots organizations (like Bolivia’s feminist collectives) to provide peer support and collective advocacy


Engage with upcoming political opportunities like Bolivia’s elections to advocate for digital rights policy reforms


Maintain ongoing capacity building programs where trained organizations become trainers for their communities


Unresolved issues

How to balance cybersecurity measures with freedom of expression and access rights without government overreach


Lack of comprehensive legal frameworks in most countries (cybersecurity, data protection, and cybercrime laws still in draft stages)


Limited resources for civil society organizations – 75% of Palestinian CSOs lack technological resources and 55% lack digital knowledge


Detecting misinformation and deepfakes while preserving legitimate discourse and avoiding censorship


Meaningful participation mechanisms for marginalized communities in digital policy development processes


Coordination between public sector, private sector, and civil society remains weak across all countries presented


Suggested compromises

Multi-stakeholder dialogue platforms that serve as genuine policy consideration forums rather than mere consultation exercises


Algorithm detection systems for misinformation that involve civil society advocacy to ensure they don’t restrict legitimate expression


Gradual policy implementation with community feedback loops, as demonstrated by Vietnam’s approach of grassroots consultation followed by policymaker engagement


Hybrid approaches combining government digitalization initiatives with civil society-led capacity building for vulnerable populations


International support frameworks that respect local ownership while providing technical and legal assistance to under-resourced organizations


Thought provoking comments

Digital policies in Vietnam are still largely developed through a top-down approach led by state agencies, with consultation processes that are often formalistic and lack meaningful engagement. There remains an absence of participation from marginalized communities and their representative organizations during both the design and implementation of digital policies.

Speaker

Tran Thi Tuyet (Snow)


Reason

This comment is insightful because it identifies a fundamental structural problem in digital governance – the disconnect between policy creation and the communities most affected by these policies. It moves beyond surface-level issues to examine the root cause of digital inequality: exclusionary policymaking processes.


Impact

This observation established a critical framework that other panelists built upon throughout the discussion. It shifted the conversation from merely cataloging digital rights violations to examining the systemic governance failures that enable these violations, setting the tone for deeper structural analysis.


77% said that the acts of digital violence affecting them may have a specific relationship with their status as human rights defenders.

Speaker

Luan Mendez


Reason

This statistic is particularly thought-provoking because it reveals how digital violence is not random but strategically targeted at those working for social change. It demonstrates how digital spaces are being weaponized to silence advocacy and activism, making it a tool of oppression rather than liberation.


Impact

This finding elevated the discussion from general digital safety concerns to understanding digital violence as a deliberate tactic to suppress civil society. It helped frame subsequent discussions around the need for protective measures specifically for vulnerable groups and activists.


Palestinians still use 3G Internet network, while Israeli settlers who are illegally living in the West Bank have full access to 4G and 5G networks.

Speaker

Khadeja Ibrahim


Reason

This comment is profoundly insightful because it illustrates how digital infrastructure itself can be a tool of discrimination and control. It shows how seemingly technical decisions about network access are actually political choices that reinforce existing power imbalances and human rights violations.


Impact

This stark example of digital apartheid provided concrete evidence of how digital rights violations intersect with broader systems of oppression. It challenged participants to think beyond individual privacy concerns to consider how digital infrastructure can institutionalize inequality.


There’s a need to balance this cybersecurity resilience while upholding freedom of expression and also access… when this bill was assented in a private manner, then the society came to learn about it. It created a perception out there to say this is a bad law… So there were those types of arrests and stuff like that.

Speaker

Enes Mafuta (audience member)


Reason

This comment is thought-provoking because it highlights the complex tension between legitimate security concerns and rights protection, while also demonstrating how lack of transparency and participation in lawmaking can undermine both security and rights objectives. It shows how process matters as much as content in digital governance.


Impact

This intervention shifted the discussion toward the practical challenges of implementing digital governance, moving from problem identification to the nuanced realities of balancing competing interests. It prompted reflection on the importance of inclusive, transparent policymaking processes.


Digital transformation as a multi-component ecosystem requiring synchronization between policy, technology, and people… ensuring digital rights as a prerequisite for greater public participation in the transformation process.

Speaker

Tran Thi Tuyet (Snow)


Reason

This insight reframes digital transformation from a purely technological process to a holistic social transformation that requires careful coordination of multiple elements. It positions digital rights not as an add-on consideration but as foundational to successful digital transformation.


Impact

This systems thinking approach influenced how other panelists framed their recommendations, moving the conversation toward comprehensive, coordinated responses rather than piecemeal solutions. It helped establish digital rights as central rather than peripheral to development goals.


Overall assessment

These key comments fundamentally shaped the discussion by elevating it from a simple catalog of digital rights violations to a sophisticated analysis of systemic governance failures and structural inequalities. The conversation evolved from problem identification to root cause analysis, with participants building on each other’s insights to develop a comprehensive understanding of how digital rights violations are embedded in broader systems of power and exclusion. The comments collectively demonstrated that digital rights issues cannot be addressed through technical solutions alone but require fundamental changes to governance processes, power structures, and approaches to development. The discussion successfully connected local experiences to global patterns, showing how similar exclusionary processes manifest across different contexts while respecting the unique circumstances of each region represented.


Follow-up questions

How can algorithm detection be developed and implemented to tackle misinformation and deepfakes while maintaining freedom of expression?

Speaker

Enes Mafuta (audience member from Zambia)


Explanation

This addresses the critical balance between cybersecurity resilience and upholding freedom of expression and access rights, particularly important given how misinformation can be weaponized and create limiting perceptions in users’ minds


How can the technical community and civil society work together more effectively to find solutions for tackling misinformation and deepfakes?

Speaker

Enes Mafuta (audience member from Zambia)


Explanation

This collaborative approach is essential for addressing the common challenges of digital literacy and misinformation that appear across different regions (Asia, South America, Africa)


How can governments better balance cybersecurity measures with protecting citizens’ rights to access and freedom of expression?

Speaker

Enes Mafuta (audience member from Zambia)


Explanation

This question arose from the example of Zambia’s cybersecurity law amendment that was enacted privately, leading to public backlash and arrests, highlighting the need for transparent processes and public awareness


What are the most effective methods for raising public awareness about new digital laws and policies to prevent ignorance-based violations?

Speaker

Enes Mafuta (audience member from Zambia)


Explanation

This addresses the gap between policy implementation and public understanding, which can lead to unintentional violations and subsequent arrests or restrictions


How can meaningful two-way accountability in digital governance be better established between research findings and the communities involved?

Speaker

Mia Marzotto


Explanation

This question seeks to understand how organizations can ensure that research findings lead to actionable changes and that communities are kept informed about the impact of their participation in research


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

WS #225 Bridging the Connectivity Gap for Excluded Communities


Session at a glance

Summary

This discussion focused on bridging the connectivity gap for excluded communities, examining innovative solutions and policies to achieve meaningful, affordable, and inclusive internet access for all. The panel, moderated by Nnenna Paul-Ugochukwu from Paradigm Initiative, brought together experts from the Internet Society, Global Digital Inclusion Partnership, and Internet Bolivia to address last-mile connectivity challenges.


Christopher Locke from the Internet Society emphasized that while emerging technologies like low-Earth orbit satellites show promise for connecting remote areas, sustainability remains a challenge due to pricing instability and regulatory issues. He stressed the importance of community readiness and building local capacity for network management, noting that successful community networks require both technical training and business model development. Onica Makwakwa from the Global Digital Inclusion Partnership advocated for treating connectivity as a human right and reforming universal service funds to address demand-side issues like digital skills and device affordability. She highlighted the need for gender-disaggregated data and moving beyond basic connectivity metrics to measure meaningful access.


Leon Cristian from Internet Bolivia presented a more sobering perspective, outlining four key complexities: emerging technologies creating new problems around data sovereignty, regulatory power imbalances between states and big tech companies, the breakdown of cooperative governance models, and the emergence of a new digital divide around advanced computing capabilities. The discussion revealed that despite 20 years since the World Summit on the Information Society, similar connectivity gaps persist, requiring context-informed, people-focused solutions that go beyond traditional market-driven approaches to embrace community-centered models and multi-stakeholder collaboration.


Keypoints

Major discussion points


– Last-mile connectivity challenges and innovative solutions: The panel explored how emerging technologies like low-Earth orbit satellites (LEO) and 5G can bridge connectivity gaps, particularly in remote and underserved communities. However, sustainability issues around pricing, regulatory frameworks, and infrastructure capacity remain significant barriers.


– Moving beyond basic connectivity to meaningful access: Panelists emphasized the need to shift from simply providing internet access to ensuring meaningful connectivity that includes digital literacy, local content in native languages, affordable devices, and daily reliable access rather than the current standard of usage “once every three months.”


– Community-centered networks and sustainability models: Extensive discussion on community networks as viable alternatives to traditional telecom models, with emphasis on local ownership, management, and diverse business models including cooperatives. The Internet Society’s community readiness toolkit and grant programs were highlighted as examples of supporting sustainable community-led initiatives.


– Policy and regulatory reform needs: Strong calls for reforming Universal Service and Access Funds, opening spectrum for community networks, treating connectivity as a public good and human right, and creating enabling regulatory environments that support diverse stakeholders rather than just large telecommunications companies.


– Growing complexity of the digital divide: Recognition that the digital divide is becoming more complex with new challenges including data sovereignty, regulatory power imbalances between states and big tech companies, and emerging technology gaps (AI, quantum computing) that create additional layers of exclusion for developing countries.


## Overall Purpose:


The discussion aimed to generate actionable insights for bridging the connectivity gap for excluded communities, focusing on innovative solutions, policies, and business models to achieve meaningful, affordable, and inclusive connectivity for all by 2030, particularly in the Global South.


## Overall Tone:


The discussion maintained a professional but increasingly urgent tone throughout. It began optimistically with solution-focused presentations but became more sobering as panelists acknowledged the persistent challenges and growing complexities. The tone shifted from technical problem-solving to more critical assessments of systemic failures, with speakers expressing frustration about the lack of progress 20 years after initial digital divide discussions. Despite the challenges highlighted, the conversation concluded on a constructive note with concrete recommendations and examples of successful community-led initiatives.


Speakers

**Speakers from the provided list:**


– **Nnenna Paul-Ugochukwu** – Chief Operating Officer at Paradigm Initiative (nonprofit dedicated to promotion of digital inclusion and digital rights in Africa and the Global South); Session moderator


– **Christopher Locke** – Works with Internet Society, involved in community-led connectivity programs and ISOC Foundation


– **Thobekile Matimbe** – Senior Manager Partnerships and Engagements for Paradigm Initiative; Expert in human rights-based advocacy


– **Onica Makwakwa** – Executive Director Global Digital Inclusion Partnership; Has worked for 25 years driving gender and equity-focused policy


– **Leon Cristian** – Executive Director of Internet Bolivia; Advisor to governments on digital rights


– **Audience** – Multiple audience members who asked questions during the Q&A session


**Additional speakers:**


– **Sani Suleiman** – Colleague of Nnenna Paul-Ugochukwu, mentioned as online moderator gathering questions and comments (though did not speak in the transcript)


– **Bara Kotieno** – Chair of the Association of Community Networks in Kenya


– **Leo** – Representative from United Republic of Tanzania


– **Lee McKnight** – Professor at Syracuse University in the United States


– **Lisa Dakanay** – From the Institute for Social Entrepreneurship in Asia


Full session report

# Bridging the Connectivity Gap for Excluded Communities: A Comprehensive Discussion Report


## Introduction and Context


This panel discussion, moderated by Nnenna Paul-Ugochukwu, Chief Operating Officer at Paradigm Initiative, brought together leading experts to examine innovative solutions and policies for achieving meaningful, affordable, and inclusive internet access for excluded communities. The conversation addressed the persistent challenge that 2.6 billion people remain unconnected globally, indicating that current approaches require fundamental reassessment.


The panel featured Christopher Locke from the Internet Society, who focuses on community-led connectivity programmes; Onica Makwakwa, Executive Director of the Global Digital Inclusion Partnership with extensive experience in gender and equity-focused policy; Leon Cristian, Executive Director of Internet Bolivia and government advisor on digital rights; and Thobekile Matimbe, Senior Manager for Partnerships and Engagements at Paradigm Initiative. The discussion also included valuable contributions from audience members, including Bara Kotieno from Kenya’s Association of Community Networks and other participants from across the global digital inclusion community.


## Emerging Technologies: Opportunities and Challenges


### Low Earth Orbit Satellites: Promise and Limitations


Christopher Locke examined the potential of Low Earth Orbit (LEO) satellites to bridge connectivity gaps in remote areas, while highlighting significant challenges that complicate their deployment. “We’re still in the very early stages of Leo Internet,” Locke explained, “and not only is the price initially expensive, but also we’re increasingly seeing that as the networks become clogged, the prices sometimes are quite dynamic based on demand.” He cited examples from African cities where Starlink services are “pretty much booked out,” illustrating how capacity constraints create new barriers to access.


Locke emphasized that while LEO satellites offer solutions for remote connectivity, sustainability remains problematic due to pricing instability and regulatory frameworks that haven’t kept pace with technological development. The infrastructure requires significant investment in ground stations and local capacity building, making it unsuitable as a standalone solution for underserved communities.


### Regulatory and Sovereignty Challenges


Leon Cristian provided a critical perspective on how emerging technologies create new regulatory challenges. His most striking example concerned data sovereignty: “Starlink is operating in my country without permission,” he revealed, explaining how the company told Bolivian authorities, “I don’t need that, I don’t need to put an office in your country. I can operate and I can provide my services even if I don’t fulfil all their requirements.”


This example illustrates broader challenges where technological solutions can undermine national sovereignty and regulatory frameworks. Cristian identified emerging complexities including problems related to data sovereignty and spectrum allocation, regulatory power imbalances between states and technology companies, and what he termed an evolving “digital divide.”


### The Expanding Digital Divide


Cristian introduced the concept of an evolving digital divide that extends beyond basic connectivity. “Now the digital divide is not only about having or not meaningful connectivity,” he explained, “it’s also about having enough capacity to run AIs, quantum computation, blockchains, cryptos… So there is another connectivity divide, there is another digital divide that is happening right now.”


This observation challenges current approaches to digital inclusion by suggesting that while efforts focus on basic connectivity, new technological requirements are creating additional layers of exclusion that countries in the Global South must navigate simultaneously.


## Community Networks: Alternative Connectivity Models


### Beyond Traditional Telecommunications


Christopher Locke presented community networks as viable alternatives to traditional telecommunications models, emphasizing the need to move beyond conventional thinking. “We need to understand there are many business models to providing connectivity,” he argued. “Mimicking a small version of being a telco isn’t the way to build a sustainable community network. There are co-op models, there are many other models that allow us to develop that.”


The Internet Society’s approach focuses on community readiness evaluation that encompasses technology solutions alongside business and governance training. Locke stressed that community networks are “not mini-telcos but community organisations providing vital services in ways that make sense for local communities.”


### Real-World Implementation and Success Stories


Bara Kotieno from Kenya provided concrete examples of progress, noting that Kenya has established 20 community networks with a target of 100. This demonstrates the viability of the model while highlighting the ongoing need to focus on sustainability components beyond initial establishment.


The discussion revealed that successful community networks require comprehensive approaches that include technical training, business model development, and governance structures. Locke emphasized that the Internet Society’s community readiness toolkit and grant programmes support sustainable community-led initiatives, but the ultimate goal is local self-sufficiency rather than continued dependence on external funding.


### Sustainability and Local Ownership


The sustainability of community networks emerged as a central concern. Locke emphasized that successful community networks must develop sustainable business models that can cover costs through local pricing systems rather than depending on continuous grants. This requires innovative approaches that balance community service with financial viability.


Leon Cristian reinforced the importance of community participation, arguing that “including local communities brings diversity and indigenous perspectives essential for building technology for the future.” This emphasis reflects a shift away from top-down technology deployment towards participatory approaches that center community needs and capabilities.


## Policy Reform and Regulatory Innovation


### Universal Service Funds: Untapped Potential


Onica Makwakwa highlighted significant problems with Universal Service and Access Funds, revealing that these potentially transformative resources are largely underutilized and lack transparency. Thobekile Matimbe reported that “less than four out of 27 African countries are transparent about Universal Service Fund resources and initiatives,” highlighting the scale of the accountability deficit.


Makwakwa argued for comprehensive reform of these funds, emphasizing the need for “public reporting and openness to addressing demand-side issues like digital skills and devices.” Current approaches focus primarily on infrastructure deployment while neglecting the broader ecosystem of digital inclusion, including digital literacy, local content development, and device affordability.


### Enabling Regulatory Frameworks


The need for enabling regulatory frameworks emerged as critical for scaling community networks. Thobekile Matimbe emphasized that “enabling regulatory frameworks are essential for community-centred connectivity initiatives to thrive with multi-stakeholder approaches.”


Christopher Locke argued that “governments should support different connectivity solutions through affordable spectrum licensing rather than viewing it as income stream.” This represents a fundamental shift in how spectrum is conceptualized – from a revenue-generating asset to a development tool that can enable community-led connectivity initiatives.


### Connectivity as a Human Right


Strong consensus emerged around treating connectivity as a fundamental human right rather than a market commodity. Onica Makwakwa argued that “universal affordable access should be prioritised as a right, embedded in development policies and rights frameworks.”


Leon Cristian reinforced this perspective by arguing that “market failures require public investment and public-private alliances with greater community participation.” This represents a significant departure from market-led approaches that have dominated connectivity policy, acknowledging that market mechanisms alone cannot deliver universal access.


## Redefining Meaningful Connectivity and Addressing Barriers


### Beyond Basic Access Metrics


Onica Makwakwa delivered a powerful critique of current connectivity measurement standards, arguing that “defining a connected person as someone who uses internet once every three months is underwhelming.” This highlights how inadequate measurement standards mask the reality of digital exclusion and prevent effective policy responses.


The discussion revealed that meaningful connectivity requires regular access with adequate speeds, not the basic connectivity measured by national averages that currently dominate policy discussions. Makwakwa emphasized that current data collection lacks gender and income level disaggregation, making it difficult to measure true impact on underserved populations.


### The Device Affordability Crisis


Device affordability emerged as a critical barrier that receives insufficient attention in connectivity discussions. Makwakwa revealed that people often spend significant portions of household income on purchasing devices, with high taxation on devices creating additional barriers. These costs create insurmountable barriers for low-income populations, even where network infrastructure is available.


The discussion challenged conventional approaches to device affordability that focus on financing schemes rather than addressing root causes of high costs. Makwakwa argued for “actually lowering initial device costs through local assembly and right to repair” rather than simply making expensive devices more accessible through credit arrangements.


“We need to stop having poor policies for poor people,” Makwakwa declared. “Poor phones for poor people… There’s a big difference between you can afford a phone over three months and you can afford a phone now.” This reframing challenges approaches to digital inclusion by demanding dignity and equity rather than accepting second-class solutions.


### Digital Literacy and Holistic Approaches


The discussion emphasized that connectivity without digital literacy and relevant local content fails to deliver meaningful benefits. Investment is needed in digital literacy programmes, local content development in local languages, and online safety education to ensure that connectivity translates into empowerment rather than mere access.


This holistic approach recognizes that technical connectivity is only the foundation for digital participation. Without complementary investments in skills, content, and safety, connectivity provision may fail to deliver transformative benefits.


## Economic Models and Alternative Approaches


### Moving Beyond Profit-Centric Models


The discussion revealed strong consensus that traditional profit-centric telecommunications models are insufficient for achieving universal connectivity. Christopher Locke advocated for “multiple business models beyond profit-centric telco models, including co-op models for sustainable community networks.”


Leon Cristian reinforced this perspective by arguing that market failures require public investment and public-private alliances with greater community participation. This acknowledgement of market limitations opens space for alternative approaches that combine public investment, community ownership, and social enterprise models.


### Spectrum as a Development Tool


Christopher Locke’s argument that governments should use “spectrum licensing as a development tool rather than primarily as government revenue generation” represents a significant policy shift. This approach recognizes spectrum as a public resource that should be managed to maximize social benefit rather than simply generate government income.


By making spectrum more accessible and affordable for community-serving organizations, governments can enable local solutions that complement commercial telecommunications infrastructure while serving communities that are not commercially viable for traditional operators.


## Challenges and Political Realities


### Political Understanding and Support


Thobekile Matimbe provided a revealing illustration of political resistance to digital inclusion through an anecdote about engaging with government officials: “I have been in one engagement with one government on the African continent where we are discussing digital inclusion for underserved communities and the feedback was from one member of parliament that look, do you really think my grandmother needs a smartphone?”


This example reveals the fundamental disconnect between policymakers and the reality of digital inclusion needs. It illustrates how basic assumptions about who deserves connectivity access still need to be challenged at the highest levels of government, highlighting that technical and policy solutions require political will and understanding to achieve scale and impact.


### Implementation Complexities


The discussion revealed various implementation challenges, from regulatory power imbalances with global technology companies to the need for new governance frameworks that can accommodate diverse stakeholders and business models. These challenges require nuanced approaches that balance the benefits of global connectivity solutions with legitimate concerns about sovereignty and local control.


## Areas of Consensus and Future Directions


### Community-Centered Approaches


Despite different perspectives on specific solutions, all speakers demonstrated strong consensus around the need for community-centered approaches to connectivity. This consensus spans technical implementation, business model development, and policy design, reflecting recognition that top-down approaches have failed to deliver universal access.


The community-centered consensus includes meaningful community participation in solution design, local ownership and management of connectivity infrastructure, and business models that serve community needs rather than simply maximizing profits.


### Rights-Based Framework


Strong consensus emerged around treating connectivity as a fundamental right that requires government intervention and public investment. This rights-based approach provides a foundation for arguing that market failures justify public intervention and that universal access is a legitimate government responsibility.


### Need for Systemic Change


Perhaps most significantly, speakers demonstrated consensus that current approaches to digital inclusion have fundamentally failed to deliver results. The persistence of massive connectivity gaps indicates that incremental improvements to existing approaches are insufficient, creating space for more transformative alternatives including community networks, public investment, and regulatory reform.


## Conclusion


This comprehensive discussion revealed that bridging the connectivity gap for excluded communities requires fundamental changes to current approaches rather than incremental improvements. The persistence of massive digital divides indicates that alternative approaches centered on community needs, human rights, and public investment are essential.


The conversation highlighted both the promise and complexity of emerging solutions, from LEO satellites to community networks, while emphasizing that technology alone cannot solve connectivity challenges without appropriate social, economic, and political frameworks. The emerging consensus around community-centered approaches, rights-based frameworks, and the need for systemic change provides a foundation for more transformative interventions.


However, significant challenges remain, including developing sustainable business models for community networks, addressing regulatory power imbalances with global technology companies, and creating measurement frameworks for meaningful connectivity. Addressing these challenges will require continued collaboration, innovation, and political commitment to digital inclusion as a fundamental development priority.


The discussion ultimately demonstrated that bridging the connectivity gap is not simply a technical challenge but a comprehensive development undertaking that requires coordinated action across multiple sectors and stakeholders. The path forward demands both urgency to address the immediate needs of unconnected populations and patience to build sustainable, community-centered solutions that can deliver lasting transformation.


Session transcript

Nnenna Paul-Ugochukwu: Hello. And welcome to this session titled Bridging the Connectivity Gap for Excluded Communities. My name is Nnenna Paul-Ugochukwu, and I'm the Chief Operating Officer at Paradigm Initiative, which is a nonprofit dedicated to the promotion of digital inclusion and digital rights in Africa and the Global South. I'm honored to guide today's conversation on a challenge that sits at the intersection of infrastructure, equity, and human rights, asking the question: how do we ensure meaningful, affordable, and inclusive connectivity for all? While Internet access has significantly expanded globally, millions remain unconnected, particularly in rural, remote, and underserved regions. Bridging the last-mile connectivity gap is crucial for ensuring digital inclusion and achieving meaningful connectivity and access for all. Today's conversation will explore innovative solutions, policies, and business models aimed at addressing last-mile challenges, including community networks, public-private partnerships, emerging technologies like low-Earth orbit satellites, 5G expansion, and alternative spectrum management approaches. So this session aims to generate actionable insights to inform global Internet governance discussions, ensuring that unserved and underserved populations are not left behind. Joining me on the panel is Thobekile Matimbe, Senior Manager Partnerships and Engagements for Paradigm Initiative and also an expert in human rights-based advocacy. I also have with me Onica Makwakwa, Executive Director of the Global Digital Inclusion Partnership, who has worked for 25 years driving gender and equity-focused policy. And I also have with me Leon Cristian, the Executive Director of Internet Bolivia. Today online, I also have my colleague Sani Suleiman, who will be moderating and gathering questions and comments during the plenary at the end of the panel discussion. So welcome once again, and I think we will dive right in and go for it, as my colleague always says.
So I'll start the conversation today with Chris. Drawing from your leadership and your background in building digital economies, how can emerging technologies like the ones I mentioned, low-Earth orbit satellites and 5G, be leveraged to bridge the connectivity gap in a sustainable way?


Christopher Locke: Thank you, and thank you for inviting me to the panel. It's lovely to be here. We are relatively agnostic at the Internet Society about what connectivity platforms people use to connect in the work we do with our community-led connectivity program. We have programs around the world that use a wide variety of platforms, whether it's fiber, whether it's LEOs, whether it's mobile. But what we have seen increasingly in the work that we do is how LEOs in particular can really help bridge remote communities, for blindingly obvious reasons, in that having satellites allows us to get connectivity to communities that otherwise would not be covered by fiber platforms or by mobile platforms. And we're seeing that increasingly, particularly in some island states. There's a very strong focus in the new strategy for the Internet Society on connecting small island states. And what we've seen, particularly in the Pacific region, is that satellite is increasingly becoming the norm for connectivity and is helping connect remote islands in exciting new ways. Indeed, in some cases we're seeing communities and islands where Starlink is becoming the largest ISP on the island, and the majority of Internet is actually coming over satellite platforms. So there are huge opportunities for the way that LEOs can work, and as more are launched and the prices come down, it becomes more affordable. Sustainability, though, we think is still an issue, and sustainability comes from two different areas. Firstly, obviously, is price. We're still in the very early stages of LEO Internet. And not only is the price initially expensive, but we're increasingly seeing that as the networks become clogged, the prices sometimes are quite dynamic based on demand. So having coverage from a LEO constellation, having coverage from a LEO provider, can absolutely provide connectivity to a remote area.
But if the pricing is unstable, and if, as we are seeing at the moment in some African cities with Starlink, services are pretty much booked out, then coverage alone doesn't guarantee access. At the moment, we're really still in the early stages, and the pricing issues and the regulatory issues are still to be solved.


Nnenna Paul-Ugochukwu: Thank you, Chris, and I love that you've touched on some of the lessons that you're already learning in implementing these solutions, and on sustainability and the regulatory frameworks. So with the ISOC Foundation, talking about sustainability, how do you evaluate the impact? And are there specific examples you can give, with some of the lessons that you have shared, that can show how they can be replicated across these local communities? Because you specifically mentioned communities in Africa. Are there any specific examples? How are you evaluating? What are the lessons, and how can those be replicated?


Christopher Locke: Yeah, the evaluation for us starts before the program begins. So we have a very good community readiness toolkit that we use when we're working on a potential project. And what we do with that toolkit is not only evaluate the technological solution but, as is implied in the title, evaluate the community. We're trying to understand who is going to be leading this for the community. What is the governance structure and the support structure within the community? Is it a school? Is it a local organization? Who is going to be owning and maintaining the network? And then, how can we provide training to support them, not only just in the crimping of the wires, but also in understanding what sustainability looks like. On a panel I was on yesterday, when we were looking at innovative financing in this space, we were talking about the need, when we develop community networks, to develop them with business training as well as with technology training. What we want to be able to do is provide our grant capital and our capacity-building support to get local communities off the ground with their community-centered connectivity solutions. The next phase after that shouldn't be another grant. The next phase should be that there is real sustainability in the community network, because they're able to build a pricing model that allows them to cover the costs of the community. Often that's where schools can sell connectivity to the local community via voucher systems, or whatever system works, and support the network that way.
But what we like to see in our community readiness toolkit is that the technology solution is there that’s fit for that particular need, whatever the geographic need of that community is, but then from a business model perspective that you’re building a community network that is sustainable because it actually meets the needs of the community, can be managed by the community and is economically sustainable as well.


Nnenna Paul-Ugochukwu: Thank you, Chris. My takeaways from that are to be creative around regulation and licensing, and to focus also on building the capacities of community networks, ensuring that they are ready to manage these networks themselves and keep them sustainable. Thank you very much. So moving from connectivity to meaningful access, I'll come to you, Onica. Drawing from your leadership of the GDIP, what evidence-based policy and regulatory frameworks can best improve affordability and inclusion and support last-mile community initiatives such as the ones that Chris has given us examples of?


Onica Makwakwa: Great. Thank you so much for that question and thanks for inviting us to this panel. It’s always wonderful when we come to IGF and talk about these things to also have a partner like Internet Society that’s actually on the implementation side of making sure that some of these ideas have an opportunity to be tested out in communities. So in terms of, you know, focusing on this last mile connectivity initiatives that are affordable and accessible to everyone, it’s really important for us to. continue to support policy frameworks that are people-centered at first and that are designed through a lens of equity, human rights, and accountability. Because when we look at who is not connected at the moment, those tend to be the population that tends to benefit the most out of being connected when we talk about transformative qualities of connectivity. So the first thing that I would say is that we need to prioritize universal affordable access as a right. You know we need governments to embed digital access in development policies and frameworks and in the rights framework as well. Treating connectivity as a public good and not just nice to have a luxurious thing. This includes setting very ambitious universal service goals and enshrining them in the right to meaningful connectivity that focuses on regular access, reliable access that is high quality, as well as affordable internet and devices for people to be able to benefit from digital technologies. The other item is we need to reform universal service and access funds. I think we’ve been talking about this for a really long time. It is quite evident and we’ve done quite an audit a few years ago looking at universal service and access fund, how they’re deployed, how effective they are, and you know it’s quite clear that regulators should open, you know, that countries should continue to utilize universal service access funds in a lot better way. 
Perhaps even being open to addressing some of the demand-side issues of connectivity, like digital skills and affordable devices. So it's not all just about infrastructure, but beginning to address the demand side as well. And then there is the importance of public reporting. We need public reporting because we can't continue to have gatherings like this where we keep talking about how we don't really know the impact of universal service and access funds, or that there are funds that are not utilized. And enabling community networks and innovative models: regulators need to open up spectrum, and that's beginning to happen, as we see with some of the work that Internet Society organizations have been doing around community networks and funding community-based connectivity initiatives. And one thing that's really key, which Chris mentioned in terms of training for business and for technical skills: we need to be open to different financial models for connecting everyone. I think that the pure commercial model alone is not going to be one size that fits all communities. So we need to be open to the fact that, in a continent like Africa, for example, where so many people live on less than $2 per day, even if a country reaches the affordability threshold, remember that's based on averages. We have to be willing to think about subsidies to certain communities, or co-op model connectivity that allows those who may never be able to afford connectivity to still have access as a public good and as a right, as well as mandating inclusive infrastructure sharing. And open access is something we've talked about, I think, at literally every IGF and gathering of this kind, and we need to see more and more efforts to prevent monopolies so that we're truly building connectivity strategies that focus on everyone being included. And lastly, integrating digital inclusion in broader economic and social policies.
Connectivity policies need to be linked with the investments in digital literacy, local content devices, and online safety issues, because those issues also drive the experience of people online. So I often say that we want everyone to go online to do what exactly, read English, be on Facebook? I don’t think so. I think there’s a lot more, if we’re talking about digital and transformation, we need to invest more in some of the other things that ensure people are able to fully benefit meaningfully from being online, including relevant local content in local languages that people are able to consume. And lastly, I’ll just summarize by saying that policy and regulatory frameworks need to work for the last mile, and they need to focus on being inclusive by design, accountable in delivery. Accountability is just one of those things that I’m pained by. I feel like we’re just not seeing enough of that. And transformative in the impact, we are not just connecting people for access. It has to be beyond access. What is it that they’re able to do to improve and change their lives by being connected? So we need to put that at the center, so that it’s all grounded in equity, actively dismantling structural barriers, and lastly, not leaving women behind. We’ve done a lot of work on connected resilience, which is a study that looks at gendered experiences of women. I invite you to read that report and just really see what the lived experiences of women is through meaningful connectivity and you will really get a picture of how we’ve barely scraped the surface in terms of being inclusive in our connectivity efforts. Thank you.


Nnenna Paul-Ugochukwu: Thank you, Onica. I had a follow-up question that I believe you already started to answer, and that was around how we can move frameworks beyond just connectivity to ensure, like you said, meaningful access through digital literacy, cultural relevance, and developing local content. Maybe touch a bit more on that, and also share what you think are the metrics we should adopt to measure this. You’ve given us some things: inclusive design, accountability in delivery, very specific mandates as well. How would we know that we have gotten there? What are the metrics? How would we measure that?


Onica Makwakwa: Yes, great question, actually. So it’s really important that we measure what we want to see impact in, right? You know, we are still struggling at just having data that’s disaggregated even by gender, believe it or not. In 2025, we are not collecting gender-disaggregated data. We are not collecting data that’s disaggregated by income level. And we learned this when we used to do the affordability index report, because a lot of these indices rely on national averages. So if you take a country like South Africa, for example, and you measure it on affordability based on one gig for no more than 2% of household income, the country actually comes out as being quite affordable, right? However, South Africa is a country where more than 50% of the population lives on less than half the average GNI, so when you take the population and slice it by income quintiles, the affordability picture looks very different. We’ve got these incredible instruments, like your National Broadband Plan, that can be mute on this. In fact, if you take a lot of National Broadband Plans and just do a web search for women, you’ll maybe find one or two mentions, but no real measurement of how we are going to know we’ve actually succeeded. Is it 10% of the women, 30% of the women? We just have not been clear. So we’ve got an opportunity to make sure that the instruments we measure our connectivity gaps with are very clear and articulate about what the target and goal is. Is it 40% of the unconnected? What percentage of that is women, and what percentage is rural? Women are also not a monolithic group. So really getting into all those intersections of how we are connecting people is really important, but also moving away from measuring basic connectivity. I’m a big advocate of us raising the standards.
At the global level, the standard of a connected person is someone who uses the internet once every three months. That is so underwhelming that those of us who are going to WSIS need to really talk a lot about how that standard needs to improve. Meaningful connectivity is about daily access, especially when we are talking about the age of artificial intelligence and the things we want to do in terms of digitization of public services: daily access, unlimited access, 4G speed at minimum, if we truly want people to do the kinds of things we are promising them for transformation. Because the truth is, we need to stop having poor policies for poor people, poor phones for poor people. So let’s work on real device affordability strategies, not device financing. There’s a big difference between being able to afford a phone over three months and being able to afford a phone now, you know? I feel like we really have not started to do the work in terms of driving affordability, in terms of making sure that there’s rich and relevant content, and that all of the services we’re providing are accessible for all of the people. And yet we’re doing this very much in English, which is not a majority language for most of our population.


Nnenna Paul-Ugochukwu: So, yeah, I mean, I think, you know, looking at all of the instruments that we have, including our digital development policies in general, to see how explicit they are about the kind of vision that we’ve set for the world and the connectivity goals to achieve high-quality and affordable access. That brings me to our partnership-building efforts. So what role do public-private partnerships and community-driven models play in ensuring access to the Internet?


Thobekile Matimbe: Thank you so much, Nnenna, for talking about public-private partnerships and obviously access to the Internet for our communities. I think we’ve already begun to unpack community-centred connectivity initiatives and how they are important. And when we’re looking at it from the perspective of those initiatives, it is clear that it’s a multistakeholder approach. It’s all hands on deck in terms of laying out what is important, what should be there for meaningful connectivity. I need to highlight the importance of a relevant and appropriate regulatory environment that ensures that meaningful connectivity is reached and attained, especially for excluded communities. And this is something at the heart of our work at Paradigm Initiative, focusing on those who are underserved in rural communities. Research from the ITU last year showed that only about 38% of the African population is online. The digital divide is not something that has been eradicated at this point. Even as we look at attainment of the Sustainable Development Goals, we still have a big gap. And we are now even moving away from just connectivity to saying there should be meaningful connectivity, and what does that mean for collaboration and putting all hands on deck in terms of ensuring that this is a reality for our communities? So I think in that vein, it’s clear that, in addressing that gap, there’s a need for a regulatory framework that ensures that community-centered connectivity initiatives can thrive, and also that the private sector is able to come on board.
I think Onica touched on the Universal Service Fund, and it’s something that we’ve done research on at Paradigm Initiative through our State of Digital Rights and Inclusion in Africa report, LONDA. Looking at the 2024 report, it shows that, of the 27 reported countries, we have fewer than four countries that are really transparent about those resources: what they’re doing with them, what they’re collecting, and how they’re gathering those resources through the support of the private sector. There’s also no transparency about the initiatives that are being rolled out, and there’s not enough support for community connectivity centers, how they’re being run and how they’re being sustained, so that they’re not just hubs put up in communities but something that is sustained. And I like the fact that there’s a lot of support from many civil society actors running other initiatives to support this. Paradigm Initiative has done a lot of work on LIFE Legacy, ensuring that we bring digital literacy to communities, putting our hands on deck, and also engaging the government as a key partner to say: here we are, we can collaborate and ensure we expand the reach of digital literacy in communities that are underserved.
And I think one of the key important things, as we also talk about some of the work we’ve done under the Local Networks Initiative together with APC, has been to highlight the importance of social impact when these community-centered connectivity initiatives are being rolled out: ensuring that communities are also at the table articulating the vision of the initiative, so that there is tangible, meaningful benefit for the communities. When we’re looking at inclusion as a whole, it means bringing voices on board: what are the key things they want to see, what do they want to benefit from these initiatives, and how can they also be a critical stakeholder in ensuring that the initiatives are sustainable? I think that is important. I will highlight, of course, as we are discussing the World Summit on the Information Society and looking at how far we have come 20 years later, that it is really concerning where we are right now. Twenty years later, we are still discussing the digital divide and articulating similar gaps to those we articulated as far back as 2002 and 2003. We still need to see enabling policies that ensure this happens. We still need to see greater cooperation across diverse stakeholders to ensure that there is meaningful connectivity. We still see a really broad digital divide. So how can we, as we engage in these conversations, speak truth to ourselves and ask: do we really want to see this change? Do we really want to see the needle move? Because where we are, we are still where we were, and we want to ensure that we prioritize this, even in national budgeting processes, with governments making sure that they prioritize this.
I have been in one engagement with one government on the African continent where we were discussing digital inclusion for underserved communities, and the feedback from one member of parliament was: look, do you really think my grandmother needs a smartphone? And you think, okay, at this day and age we are still debating the importance of access to digital technologies for our communities. So how far can we move? I think we need to be at a place where we speak truth to ourselves and say: look, we cannot leave anyone behind, and this is not an educated, elitist conversation, but a conversation that really ensures that, especially through the Universal Service Funds and community initiatives.


Nnenna Paul-Ugochukwu: Because I believe also that transparency fosters trust; it makes the building of these public-private partnerships even stronger. So thank you, Thobekile. I’ll come to you now, Cristian. Thank you for being here today. So looking ahead to 2030, now that we’ve been saying set goals, what do we want to see, what do we want to achieve? And having advised governments on digital rights, what innovation, be it technological or policy-based, has the potential to make today’s digital divide obsolete, and how can we prepare for that future now?


Leon Cristian: All right, thank you, Nnenna. Good morning to all the wonderful panelists and the friends participating online and on-site. I want to talk more about the complexities of this debate right now, perhaps not in a very positive way, because actually we are seeing a lot of complexities in the world. You know, 2030 looks so far in the future right now that let’s expect to reach 2026 first. From what I have been hearing these days at the IGF, there are, I think, four challenges that we have to address, things that actually make it more complex to close the digital divide. The first one, which I think Christopher also mentioned, is the emergence of new technologies such as low-orbit satellites, which are, of course, rapidly solving connectivity problems, especially in the most remote areas. For countries with very big digital divides and low resources, such as my country, Bolivia, these kinds of connections seem like an interesting solution, so we should take them into account. But at the same time, these technologies are generating new problems, problems that perhaps we didn’t have before, related to data sovereignty, spectrum allocation and national security, among others. The second challenge is the regulatory power imbalance that is now growing between states and big tech companies. Countries of the global majority today have minimal capacity to demand the fulfillment of guarantees and rights from these companies. This is something that we all know, but going back to the first complexity that I mentioned: for example, Starlink is operating in my country without permission. How did this happen? My government asked Starlink to have a complaints office in order to operate in Bolivia, but since Starlink is so powerful and such a big company, they said: I don’t need that, I don’t need to put an office in your country.
I can operate and I can provide my services even if I don’t fulfill all the requirements. So that is happening. And how can we hold these companies accountable if they don’t even want to invest in one office in a country like Bolivia? The third complexity is the disappearance of a governance model based on cooperation. In a world in international crisis, it is becoming increasingly difficult to think of models of internet governance, because particular interests are becoming more important than the needs of the most vulnerable populations. That is something we also have to address, and spaces like the IGF are so important because they allow us to speak about these kinds of issues. And fourth, the increasingly complex technologies that today require an infrastructure and a computing capacity that our countries don’t have. Now the digital divide is not only about having or not having meaningful connectivity; it’s also about having enough capacity to run AI, quantum computing, blockchains, crypto and all those technologies. So there is another digital divide happening right now, and we also have to address it, because our countries are lagging behind and will not have access to these technologies, since we don’t have the capacity to run them. We don’t even have the energy infrastructure to power them. That is why our countries are in a double digital divide. So these are real, big complexities that I think we also have to address. Of course, I’m being really negative about what can happen in the future, but after hearing all these things at the IGF, I think we also have to speak about them.


Nnenna Paul-Ugochukwu: Thank you. Thank you, Cristian. Thank you for highlighting the increasing complexity in how the digital divide is growing as well. And maybe just to bring some balance, because you’re right, looking at these complexities and challenges, it looks bleak: we’re just trying to solve one, and then we have these increasing complexities and challenges being thrown at us. What would you propose as a balance? I like the example you gave about Starlink and Bolivia, and also about how new problems are coming up in terms of national security, data sovereignty and other digital rights issues. So how can we balance investment in infrastructure and connectivity with the necessary investments in rights protection mechanisms for communities and for countries as well?


Leon Cristian: Okay. I agree with everything that Onica said. I think that is the way. I also think that the answer depends on the context and the needs of the specific countries or regions. In the case of Bolivia, I think, for example, that the last-mile problem is purely a market issue. Why? Because the invisible hand of the market failed here to solve the connectivity needed by remote areas and populations. Either because these are very small communities and nobody wants to invest in them, or because the state is being lobbied by these very big companies, the ISPs, and doesn’t allow these communities, for example, to have the regulation that they need for community networks. There are so many cases in Latin America and in my country of community networks that are actually functioning and resolving some of the connectivity problems. But as Christopher mentioned, there is a difficulty here in sustaining these community networks. The government is not helping at all, because governments are creating legislation and regulatory frameworks only from the perspective of these big companies, what the big companies need in order to operate, but they are not facilitating things for small operators that also have these issues and are doing what they can to resolve their own connectivity problems. I know of many cases in which communities said: I want to invest in my own infrastructure, but the government doesn’t allow me to invest, because the government says that only the government or a big company can invest in this community. Why? I don’t know how to explain it, because it’s really hard to understand.
So what should be done, in my opinion and in my context, is that we need to stop seeing connectivity as something to be solved only by the market or only by big companies, and see it more as a basic need that should be solved through public investments and public-private alliances, with greater participation of the communities that are affected, of course. And I think that the work being done by the Internet Society, the Global Digital Inclusion Partnership and civil society in general is really important, and we have to strengthen that work, also, of course, with the inclusion of the local communities.


Nnenna Paul-Ugochukwu: Thank you. Thank you, Cristian. My takeaway from your preferred solutions is they should be context-informed and people-focused. I have lost connectivity online to Zoom, but just to say to the audience online and on site, I hope you’re getting your questions ready and I hope that this has been an engaging conversation for you with some key takeaways and focus areas. So before we go to Q&A, I think I have one last question and this is for all my panelists. What practical steps can civil society, ISPs, mobile operators, all the stakeholders, we’ve talked about businesses, start-ups, what can they do together to de-risk investments in last mile infrastructure and promote inclusive access based on everything that you’ve spoken about today? I think I’ll start from Chris and then we can go down.


Christopher Locke: I’ll actually repeat something that was said earlier on. We need to understand there are many business models for providing connectivity. We need to understand that the dominant model of the profit-centric telco, satellite provider and so on, whilst fundamentally important in providing the possibility of last-mile access for communities, isn’t necessarily the sustainable way. Mimicking a small version of being a telco isn’t the way to build a sustainable community network. There are co-op models; there are many other models that allow us to develop that. So I think really being innovative and creative in the way we think about what sustainable business models look like, and then, as has been said, getting governments to support that: making spectrum licensing for local usage affordable and available to those networks, in a way that doesn’t just look at spectrum licensing as a nice income stream for the government. You know, understanding the GDP impact of giving people connectivity, the astonishing changes to their lives you get through connectivity, massively outweighs the small amount of money you can earn from selling community spectrum licenses to networks. So we really do need, as was said earlier, particularly by Onica, to drive governments to support very different types of connectivity solutions and business models, and put the regulation in place to support those.


Nnenna Paul-Ugochukwu: Thank you, Chris


Thobekile Matimbe: Thank you. I will echo the importance of an enabling regulatory framework, and the importance of inclusion in line with human rights, not as a privilege but as something that is really critical and necessary. I will also highlight the importance of multi-stakeholder approaches, not just between the government and the private sector but also civil society, because of some of the meaningful steps that CSOs are taking to bridge the connectivity gap.


Nnenna Paul-Ugochukwu: Thank you


Onica Makwakwa: Okay, so I would say for me it’s advocating for policy incentives together, and it’s really great that we are all here as government actors, private sector actors and civil society. We need to focus this advocacy strongly on access to our universal service funds for multi-stakeholder projects, tax waivers or import duty reductions on equipment for last-mile connectivity, and license fee exemptions or flexible spectrum access for small-scale operations, community networks or other rural development initiatives. And on data, we need to collaborate to guide investment, and share and combine disaggregated data on coverage gaps, on affordability and on digital use, to better identify and understand what’s viable for these small investment areas. Civil society can help gather real-time community feedback, while operators and ISPs can share infrastructure maps or usage data with appropriate safeguards; I know there’s a lot of protectionism that happens around this particular issue. And lastly, we need to develop and support local digital ecosystems by collaborating to incubate local startups and encourage local device repair and distribution networks. It really baffles me that we are talking about climate issues, e-waste and sustainability of the planet, but we still have so many countries where the right to repair is not practical and a reality for these devices. That’s one of the things that can help drive the affordability of devices down, especially in a continent like Africa, which is a huge reuse market. It really baffles me that we have not looked at the right to repair as one of the solutions towards beginning to lower the cost of devices. Thank you.


Leon Cristian: Yeah, for me, it may be repetitive because this was already mentioned, but meaningful connectivity should be a right and should be embedded in all international policy frameworks, not only the ones related to digital policy, because this is something really transversal to almost everything right now. And as I mentioned, connectivity should not be left only to the market or the companies to solve; all the stakeholders need to be involved in this: civil society, companies, government and, of course, the local communities. Because connectivity is not a privilege, it’s a necessity.


Nnenna Paul-Ugochukwu: Thank you. Thank you to all the panelists. I already see a question coming up online, so I will take that as a cue to open the floor for questions. The first question we have: talking of sustainability, as a private sector actor looking at deploying community networks, what renewal incentives are available from the Internet Society Foundation, and how many years does the grant cover? So I believe this is for you, Chris.


Christopher Locke: Absolutely. So we provide a range of different support for community networks. We provide grants through our program for connecting the unconnected. Usually these are done on an annual basis; we have annual granting windows, so there is the opportunity to continue grants going forward. But as I said, what we like to do, if we get into a relationship with a community network, is build them towards the point where they are sustainable without the need for grants. What that sometimes means is that someone can come initially for a grant to support the initiation of a community network, and then at a later stage they are looking to spread out to a larger area, so the follow-on grant allows them to increase their coverage. And then later on, maybe it’s something else; we are increasingly seeing power supply as something that people want support with. So we don’t like to just continually fund something to stay as it is, or to get to a situation where it’s not possible to sustain it without grant support. What we do want to do is see, if there is growth, how we can help those community networks through the next phase of their growth, and how we can get them on a path to sustainability.


Nnenna Paul-Ugochukwu: Thank you, Chris, and I hope that answered the question. We have one in the room.


Audience: Thank you very much. My name is Barack Otieno. I hope I can be heard. I chair the Association of Community Networks in Kenya, and I’m here with my colleague James, sitting at the back. Thank you very much, Chris, and the rest of the panelists, for the very interesting interventions, which are very relevant. Actually, I consider this a very pertinent and relevant topic. I wanted to share some thoughts to affirm the comments that have been made by the panelists. First, Kenya now has about 20 community networks. We have a target set by the Communications Authority of Kenya to build 100 community networks. It is based on the realization that after 20 years of investment in GSM, only 30% of the country has meaningful connectivity. Therefore, there is a need to accelerate connection of the 70% that remain, and community networks have been found to be viable alternatives. Thanks to the Internet Society, we have received seed funding that has established 90% of the community networks that I have mentioned. We also have support from APC, which has contributed and which we really appreciate. And we have a commitment from the Internet Society to work with us in achieving our target of 100 community networks. Of course, I cannot forget to thank the Communications Authority of Kenya, which has worked with the community to ensure that we have a community network service provider license; we also have further systemic enhancements to create a Tier 4 license. We have demonstrated that community networks are actually a viable alternative, but the challenge now is to prove that there are sustainable means of providing affordable connectivity to the community, and maybe that’s a challenge I will throw back to the panel: to help us figure out the sustainability component. Thank you very much.


Nnenna Paul-Ugochukwu: Thank you, Barack. Any reflections for Barack?


Christopher Locke: I mean, I’d just like to give the respect back and respect the work that Barack and the organization do. It’s very nice of him to mention ourselves and APC as supporters, but the work that they do in Kenya is astonishing and, I think most importantly, can be a case study for what country-level coordination of community networks, training and development can actually do to achieve really immense results.


Nnenna Paul-Ugochukwu: Yeah, thank you, thank you, Chris.


Audience: Hello, can you hear me? Yes, we can. Okay, my name is Leo, from the United Republic of Tanzania. I want to raise one issue. We all acknowledge that accessibility is still a big challenge, especially in the global south, despite the improvement in mobile coverage. We understand that 80-plus percent coverage has been achieved, against accessibility of only 38% in Africa. It shows that pricing and device affordability are still a big problem. Apart from the solutions you have mentioned, can you please share more of your experience in solving the issue of accessibility, especially device affordability? Thank you.


Onica Makwakwa: Okay, that’s a bit of a tough one. So, on device affordability: in one piece of research that we had done specifically looking at Africa, we actually learned that device affordability is still a very huge challenge, in addition to the affordability of the internet itself. When you look at how much people are actually spending on accessing these devices, we found that in some countries people are spending anywhere from 20 to 60 percent of average household income on just purchasing one device. There’s a lot of work currently happening around device affordability, including a lot of resistance against subsidies for devices, and we’ll park that for now. So looking at affordability specifically, one of the things that we’ve done successfully in a couple of countries is looking at taxation of devices, and we found that there’s anywhere from 20 to 45 percent of taxation on devices, whether it’s an import duty, VAT or sales tax. And we’ve been able to demonstrate that if governments could just roll back some of those taxes, it actually increases uptake of digital technologies within the country. I’ll give you just a quick example between Nigeria and Ghana, where we found that people in Ghana would actually buy their devices in Nigeria and activate them in Ghana, including those who live on the border; in terms of their DSTV subscription, they would live in one country and do their subscription in the other, because those taxes are so high that they actually make a difference in affordability. But an area I will criticize in our whole multi-stakeholder community is that our focus on device affordability has predominantly been on affording devices over time, instead of really tackling the issue of how we lower the initial cost of devices to a point where more people can afford to get them. The repair issue is one of them. No one wants to talk about local assembly, and I’m not sure why, because local assembly also gives an opportunity to retool the workforce in a country, especially in a continent like Africa, where the majority of our population is young people. We need retooling and reskilling, so bringing some of these devices to be partially or fully assembled within the continent would make a huge difference, including the content that’s loaded onto them. Why must this device arrive in my country with everything, including the plastic that gets peeled off and all the software already installed? Couldn’t some of that happen locally? We’ve seen this model with the motorbikes that are imported into the continent for delivery of food: they literally come in parts and are therefore taxed differently compared to arriving as fully assembled motorbikes. The mobile operators have predominantly been working on device affordability through financing schemes and different models for that, and that’s really great and wonderful, but I would love to see a commitment from civil society, from government especially, and from some of the private sector around developing this phone that we were promised, I don’t know how many years ago. The $10 phone is not here yet, and maybe it’s not going to be $10, but I would not give us a good grade on device affordability. We haven’t done a good job. We need to regulate for both the market and consumer protection, and pricing is a consumer protection issue.


Nnenna Paul-Ugochukwu: Thank you, Onica. And I believe, Thobekile, you wanted to chime in?


Thobekile Matimbe: Thank you. I wanted to chime in, I think, in response to the great comment from the second speaker, just to highlight what actions are being taken in other spaces. I know in Tanzania, for example, there’s been great work by community networks, and I think annually they prioritize the School of Community Networks. We’ve been part of that process as well, convening this School of Community Networks, where we unpack how to improve skills in designing, administering and managing community-based telecommunications networks, as well as developing the skills to create a sustainable business model for those community networks. I think that is very key for the sustainability of the ongoing community networks.


Audience: Thank you. I'm Lee McKnight, Syracuse University professor in the United States. First, I wanted to share some possible good news, or challenge the perspective that blockchain and other technologies are not reachable or accessible in the global south. We are working with Brazilian and Peruvian professors right now with open source software that we will be bringing and co-creating with indigenous and local communities in the Brazilian and Peruvian Amazon later this year. So this is not… And secondly, on AI: it is not AI as Google would have you believe, where it takes a giant, trillion-dollar data center. It doesn't; there's a smaller range of options, also at the edge, that can be reached and made accessible elsewhere. So those are my first comments. Second, I wanted to note, I have something on my back, and I'm just kind of making an announcement. Onica knows what I'm going to say: we are launching a new Internet program, or a SIP, as in a sip of Internet, meaning not first-class Internet, but Internet accessible anywhere. Thanks to the Internet Society Foundation's support a couple of years earlier in Costa Rica, we are now launching the program in cooperation with the government of Ghana and with the African Parliamentary Network, for real, to bring these SIPs to community networks. There are libraries, also a solar panel; you can set up, you can create a community network everywhere except the North and South Pole, starting now. We will be bringing this forward, again, in cooperation with Internet Society chapters in many countries, and with parliaments across Africa, Central America, and elsewhere. This is not a magic solution, there are still limitations, but there is really no reason, with the support of librarians, that people can't access the Internet when they go out there. Librarians are your natural allies to bring and change the reform, the legislation, to bring community networks everywhere. Because we've tapped out under WSIS: we've left 2.6 billion unconnected in 2022, 2023, 2024, 2025; it's still 2.6 billion unconnected. It's not working. The only approach that's going to work is community networks, and I thank you for everyone's attention.


Nnenna Paul-Ugochukwu: Thank you. Thank you for that announcement, and congratulations.


Audience: I’m Lisa Dakanay from the Institute for Social Entrepreneurship in Asia. I just wanted to ask the panel in terms of experiences and perspectives of integrating community-centered connectivity initiatives with social entrepreneurship, social enterprise development, and social and solidarity economy, because in the Asia-Pacific region and the Philippines where I come from, that integration has been a critical factor, I think, in addressing sustainability and social impact issues of community-centered connectivity initiatives. Thank you.


Nnenna Paul-Ugochukwu: Thank you. I think we can start with Chris.


Christopher Locke: It's late on Friday; I'm running very low on brain and energy at the moment. We discussed on the panel yesterday the excellent work in measuring the social impact of community networks that was presented. Being able, and again this goes back to the business model question, being able to understand what the impact of a community network is, not just in profit and sustainability, not just in a very simple calculation of contribution to the economy, but across a very broad social impact network of what the implications are on health, on education, on a very broad range of factors, gives us a much better sense of assessing the success of a community network and being able to point to what is possible through investment in community networks. These are not, and I can't say this enough, these are not mini telcos. These are community organisations providing a vital service to a local community in a way that makes sense for that community. And the better we can have the kind of granularity that was offered in the social impact assessment work you were showing yesterday, the more it
gives us the opportunity not only to measure the impact better ourselves. Obviously, measuring the impact of community networks is something we do a lot within our granting program at ISOC, and I want to see if we can adopt the things that were presented yesterday in the other session. But far more importantly, and again going back to the previous speaker's question, in the way we talk to governments and parliamentarians about why this is important and why it is essential, talking about success across as broad a spectrum as possible of social and economic indicators allows us to make an incredibly strong case for investment in community networks, the right policies for community networks, and support for the kinds of organizations that Barack and many people in the room are running.


Nnenna Paul-Ugochukwu: Thank you, Chris. I don't know if anyone else wanted to speak to the question on the social enterprise side?


Onica Makwakwa: Yes, it's really important for us. You know, I always feel uncomfortable sort of seeing myself as this advocate for connectivity, because the question is always: so what, you want people to connect to do what exactly? So social enterprises are a really great opportunity for us to actually support the use case of this connectivity and why we want people to be connected. I think the best example that I have so far, and this is truly going to be a plug: we did this study, which was actually also funded by the Internet Society, on connected resilience, looking specifically at how women are staying connected through meaningful connectivity. One of the organizations that we discovered in this process and continue to support and work with is an organization called Women in Digital that's based in Bangladesh. Nila basically started this organization with the excitement and aim of teaching women how to code. Now, we all know about all these programs about teaching women how to code; there's all kinds of goals, we're going to get a million girls coding and all of that, and the question always becomes: after all of this, then what? What are these girls doing with this coding? What Women in Digital have been able to do, I think, is create an ecosystem for these women after they are trained, so they can begin to change the content ecosystem within Bangladesh, right? So one of the things they are doing, and I actually just got mine this week, is creating smart cards for people. I'm coming all the way from South Africa, and I ordered my digital smart card from Bangladesh because I want to support this project that's led by women, where they are creating these smart cards. Just really incredible, and someone kind from Bangladesh was able to bring it to me at this session.
So for me, I think it's really important; it's not good enough for us to just get people devices and get people digital skills. How are they utilizing them? How are we creating content that's relevant and usable for them? But also, how are we giving them an opportunity to truly transform their lives and utilize these resources? How is government supporting social enterprises, creating an enabling environment for them to be successful, and easing registration? Because right now, in most countries, registration is really only for for-profit or non-profit entities. Social entrepreneurship has huge opportunities, especially in the digital space, but it's not yet recognized as a formal business in quite a lot of communities.


Nnenna Paul-Ugochukwu: Thank you, Onica. We're about out of time, so maybe I'll give a minute to Thobekile and Cristian for some final remarks before we close.


Thobekile Matimbe: Thank you so much, Nnenna, and thanks to everyone in the room for all the amazing contributions. I think what is key from today's conversation is the importance of ensuring that those who are in underserved communities are not left behind, that we ensure there is meaningful connectivity, and how we can do that through effective regulatory frameworks and relevant support, not just for establishing community networks but also for their sustainability. That's very critical and very important. Thank you.


Leon Cristian: Thank you. I would like to highlight and congratulate the initiative they are doing with local populations in Brazil. I think that is really important, and it is one of the cases we should all replicate. I believe that open source is really a game-changing technology: it has the capacity to empower local communities and reduce the dependence that countries like the one I represent have on these very big tech companies. So including local communities is not only about inclusion, it's also about diversity. We want to create an internet for the future that has diversity, that has indigenous languages, that has the perspective of these communities in how we are constructing and building technology. Thank you.


Nnenna Paul-Ugochukwu: Thank you, Cristian. I want to thank my panel today, the esteemed panelists. Thank you for sharing your experiences, for the fantastic insights and the interventions. We've gotten some great examples today from the panel and from the audience on what we can replicate in our communities going forward. A few key takeaways have reverberated today: connectivity as a right, because connectivity is foundational; but also that the digital divide is becoming increasingly complex, with new challenges such as, as you mentioned, data sovereignty and national security. We also have opportunities in some of the recommendations that have been emphasized today: inclusivity in design and accountability. Thank you all for your time, for making time to be here. Thank you for your attention, your contributions, and the interventions. I hope you have a great rest of the conference and enjoy the closing ceremony. Thank you.


C

Christopher Locke

Speech speed

169 words per minute

Speech length

1609 words

Speech time

570 seconds

Low Earth Orbit satellites can bridge remote communities but face sustainability challenges due to pricing instability and regulatory issues

Explanation

LEO satellites provide connectivity to remote areas that wouldn’t be covered by fiber or mobile platforms, but sustainability remains an issue due to expensive and dynamic pricing based on demand, plus regulatory challenges that are still being resolved.


Evidence

Examples from Pacific region small island states where Starlink is becoming the largest ISP, and African cities where Starlink networks are becoming congested and booked out


Major discussion point

Bridging the Connectivity Gap for Excluded Communities


Topics

Infrastructure | Development


Disagreed with

– Leon Cristian

Disagreed on

Optimism vs. Pessimism about technological solutions and future prospects


Community readiness evaluation must include technology solutions, governance structures, and business training alongside technical training

Explanation

The Internet Society uses a community readiness toolkit that evaluates not just technological solutions but also community leadership, governance structures, and provides business training to ensure sustainability beyond just technical wire crimping skills.


Evidence

Internet Society’s community readiness toolkit and their approach of providing grant capital and capacity building support


Major discussion point

Bridging the Connectivity Gap for Excluded Communities


Topics

Development | Infrastructure


Agreed with

– Leon Cristian
– Nnenna Paul-Ugochukwu

Agreed on

Community participation and local context are essential for successful connectivity initiatives


Community networks should be developed as sustainable business models that can cover costs through local pricing systems rather than continuous grants

Explanation

The goal is to provide initial grant support and capacity building to get community networks started, but then move them toward sustainability where they can cover costs through local revenue generation rather than requiring ongoing grants.


Evidence

Examples of schools selling connectivity to local communities via voucher systems to support sustainability


Major discussion point

Bridging the Connectivity Gap for Excluded Communities


Topics

Economic | Development


Agreed with

– Onica Makwakwa
– Thobekile Matimbe

Agreed on

Community networks require sustainability beyond continuous grants through local business models


Community networks are not mini-telcos but community organizations providing vital services in ways that make sense for local communities

Explanation

Community networks should not try to mimic traditional telecommunications companies on a smaller scale, but rather operate as community organizations that provide connectivity services tailored to local community needs and contexts.


Evidence

Discussion of co-op models and various alternative business models beyond profit-centric telco approaches


Major discussion point

Bridging the Connectivity Gap for Excluded Communities


Topics

Economic | Development


Multiple business models beyond profit-centric telco models are needed, including co-op models for sustainable community networks

Explanation

There are many different business models for providing connectivity beyond the dominant profit-centric telecommunications model, and creative approaches like cooperative models can enable sustainable community networks.


Evidence

Reference to various co-op models and alternative approaches that don’t rely on traditional telco profit structures


Major discussion point

Bridging the Connectivity Gap for Excluded Communities


Topics

Economic | Development


Governments should support different connectivity solutions through affordable spectrum licensing rather than viewing it as income stream

Explanation

Governments should make spectrum licensing for local usage affordable and available to community networks, understanding that the GDP impact of connectivity far outweighs the small revenue from selling spectrum licenses.


Evidence

Comparison of GDP impact from connectivity versus small government income from spectrum licensing fees


Major discussion point

Bridging the Connectivity Gap for Excluded Communities


Topics

Legal and regulatory | Infrastructure


L

Leon Cristian

Speech speed

130 words per minute

Speech length

1244 words

Speech time

570 seconds

New technologies like LEO satellites create problems related to data sovereignty, spectrum allocation, and national security that didn’t exist before

Explanation

While emerging technologies like low-orbit satellites solve connectivity problems in remote areas, they simultaneously generate new challenges around data sovereignty, spectrum allocation, and national security that countries previously didn’t have to address.


Evidence

Reference to Bolivia’s experience with these new technological challenges


Major discussion point

Bridging the Connectivity Gap for Excluded Communities


Topics

Infrastructure | Legal and regulatory


Disagreed with

– Christopher Locke

Disagreed on

Optimism vs. Pessimism about technological solutions and future prospects


Starlink operates in Bolivia without permission, demonstrating regulatory power imbalances between states and big tech companies

Explanation

Starlink refused to establish a complaints office in Bolivia as required by the government, yet continues to operate in the country, showing how powerful tech companies can ignore national regulatory requirements with impunity.


Evidence

Specific example of Starlink refusing to comply with Bolivia’s requirement for a local complaints office while still providing services


Major discussion point

Bridging the Connectivity Gap for Excluded Communities


Topics

Legal and regulatory | Human rights


The digital divide now includes not just connectivity but capacity to run AI, quantum computation, and blockchain technologies

Explanation

The digital divide has evolved beyond basic internet access to include the infrastructure and computing capacity needed to run advanced technologies like artificial intelligence, quantum computing, and blockchain applications.


Evidence

Reference to countries lacking energy infrastructure to power advanced computing technologies


Major discussion point

Bridging the Connectivity Gap for Excluded Communities


Topics

Infrastructure | Development


Disagreed with

– Onica Makwakwa

Disagreed on

Scope and complexity of the digital divide


Countries face double digital divide – lacking both meaningful connectivity and infrastructure to power advanced technologies

Explanation

Developing countries are experiencing a compounding digital divide where they lack both basic meaningful connectivity and the advanced infrastructure needed to run next-generation technologies, putting them further behind.


Evidence

Bolivia’s situation as an example of lacking both basic connectivity and advanced computing infrastructure


Major discussion point

Bridging the Connectivity Gap for Excluded Communities


Topics

Infrastructure | Development


Market failures require public investment and public-private alliances with greater community participation

Explanation

The invisible hand of the market has failed to solve connectivity needs in remote areas and small communities, requiring government intervention through public investment and public-private partnerships that include meaningful community participation.


Evidence

Examples of small communities that private companies won’t invest in, and cases where governments create regulations only for big companies while blocking community investment


Major discussion point

Bridging the Connectivity Gap for Excluded Communities


Topics

Economic | Legal and regulatory


Agreed with

– Onica Makwakwa
– Thobekile Matimbe

Agreed on

Connectivity should be treated as a fundamental right requiring government support and policy reform


Including local communities brings diversity and indigenous perspectives essential for building technology for the future

Explanation

Community inclusion is not just about digital inclusion but about creating diversity in internet development, ensuring indigenous languages and community perspectives are incorporated into how technology is built and deployed.


Evidence

Emphasis on open source technology as a game changer for empowering local communities and reducing dependence on big tech companies


Major discussion point

Bridging the Connectivity Gap for Excluded Communities


Topics

Sociocultural | Development


Agreed with

– Christopher Locke
– Nnenna Paul-Ugochukwu

Agreed on

Community participation and local context are essential for successful connectivity initiatives


O

Onica Makwakwa

Speech speed

154 words per minute

Speech length

2927 words

Speech time

1137 seconds

Universal affordable access should be prioritized as a right, embedded in development policies and rights frameworks

Explanation

Governments need to treat digital access as a fundamental right and public good, not a luxury, by embedding connectivity in development policies and human rights frameworks with ambitious universal service goals.


Evidence

Reference to the need for enshrining the right to meaningful connectivity in policy frameworks


Major discussion point

Bridging the Connectivity Gap for Excluded Communities


Topics

Human rights | Legal and regulatory


Agreed with

– Leon Cristian
– Thobekile Matimbe

Agreed on

Connectivity should be treated as a fundamental right requiring government support and policy reform


Universal service and access funds need reform with public reporting and openness to addressing demand-side issues like digital skills and devices

Explanation

Universal service and access funds should be reformed to be more effective, with public reporting on their use and impact, and expanded to address demand-side connectivity barriers like digital skills training and device affordability, not just infrastructure.


Evidence

Reference to an audit showing poor effectiveness of universal service funds and lack of transparency in their deployment


Major discussion point

Bridging the Connectivity Gap for Excluded Communities


Topics

Legal and regulatory | Development


Agreed with

– Christopher Locke
– Thobekile Matimbe

Agreed on

Community networks require sustainability beyond continuous grants through local business models


Current connectivity standards are inadequate – defining a connected person as someone who uses the internet once every three months is underwhelming

Explanation

Global standards for measuring connectivity are too low, with the current definition of a connected person being someone who uses the internet once every three months, which is insufficient for meaningful digital participation in the age of AI and digital services.


Evidence

Reference to the need to raise standards at WSIS discussions and move toward meaningful connectivity metrics


Major discussion point

Bridging the Connectivity Gap for Excluded Communities


Topics

Development | Legal and regulatory


Agreed with

– Thobekile Matimbe
– Audience

Agreed on

Current approaches to measuring and achieving connectivity are inadequate and need fundamental reform


Disagreed with

– Leon Cristian

Disagreed on

Scope and complexity of the digital divide


Data collection lacks gender and income level disaggregation, making it difficult to measure true impact on underserved populations

Explanation

Current data collection methods fail to disaggregate by gender and income levels, relying instead on national averages that mask the true connectivity gaps experienced by women and low-income populations.


Evidence

Example of South Africa appearing affordable based on national averages while over 50% of the population lives on less than half the average GNI


Major discussion point

Bridging the Connectivity Gap for Excluded Communities


Topics

Development | Human rights


Meaningful connectivity requires daily access with 4G speed minimum, not basic connectivity measured by national averages

Explanation

True meaningful connectivity should be measured by daily access with 4G speeds at minimum, especially given the demands of AI and digitized public services, rather than the current low standards of basic connectivity.


Evidence

Reference to the need for higher standards to support digitization of public services and AI applications


Major discussion point

Bridging the Connectivity Gap for Excluded Communities


Topics

Infrastructure | Development


Device affordability remains a huge challenge with people spending 20-60% of household income on purchasing devices

Explanation

Research shows that device affordability is a major barrier to connectivity, with people in some countries spending anywhere from 20 to 60 percent of their average household income just to purchase a single device.


Evidence

Specific research findings on device affordability across African countries showing the percentage of household income spent on devices


Major discussion point

Bridging the Connectivity Gap for Excluded Communities


Topics

Economic | Development


Taxation on devices ranges from 20-45% and reducing these taxes increases uptake of digital technologies

Explanation

High taxation on devices, including import duties, VAT, and sales taxes, can add 20-45% to device costs, and reducing these taxes has been shown to increase digital technology adoption within countries.


Evidence

Example of people in Ghana buying devices in Nigeria due to lower taxes, including cross-border DSTV subscriptions


Major discussion point

Bridging the Connectivity Gap for Excluded Communities


Topics

Economic | Legal and regulatory


Focus should shift from device financing schemes to actually lowering initial device costs through local assembly and right to repair

Explanation

Rather than focusing primarily on financing schemes that allow people to afford devices over time, efforts should concentrate on reducing the actual upfront cost of devices through local assembly and enabling device repair rights.


Evidence

Example of motorbikes imported in parts for food delivery services being taxed differently than fully assembled ones, and reference to the lack of right to repair in many countries


Major discussion point

Bridging the Connectivity Gap for Excluded Communities


Topics

Economic | Development


Investment needed in digital literacy, local content in local languages, and online safety to ensure meaningful benefit from connectivity

Explanation

Connectivity policies must be linked with investments in digital literacy, local content development in local languages, and online safety measures, because these factors determine whether people can meaningfully benefit from being online beyond just access.


Evidence

Question of what people should do online – ‘read English, be on Facebook?’ – highlighting the need for relevant local content


Major discussion point

Bridging the Connectivity Gap for Excluded Communities


Topics

Sociocultural | Development


T

Thobekile Matimbe

Speech speed

162 words per minute

Speech length

1263 words

Speech time

466 seconds

Enabling regulatory frameworks are essential for community-centered connectivity initiatives to thrive with multi-stakeholder approaches

Explanation

Successful community connectivity initiatives require appropriate regulatory environments that support meaningful connectivity for excluded communities, involving all stakeholders including government, private sector, and civil society working together.


Evidence

Reference to Paradigm Initiative’s work focusing on underserved rural communities and ITU research showing only 38% of African population is online


Major discussion point

Bridging the Connectivity Gap for Excluded Communities


Topics

Legal and regulatory | Development


Agreed with

– Onica Makwakwa
– Leon Cristian

Agreed on

Connectivity should be treated as a fundamental right requiring government support and policy reform


Fewer than four of 27 African countries are transparent about Universal Service Fund resources and initiatives

Explanation

Research shows a severe lack of transparency in how Universal Service Funds are collected, managed, and deployed across African countries, with insufficient support for community connectivity centers and their sustainability.


Evidence

Paradigm Initiative’s State of Digital Rights and Inclusion in Africa report (LONDA) 2024 findings on Universal Service Fund transparency


Major discussion point

Bridging the Connectivity Gap for Excluded Communities


Topics

Legal and regulatory | Development


Agreed with

– Onica Makwakwa
– Audience

Agreed on

Current approaches to measuring and achieving connectivity are inadequate and need fundamental reform


A

Audience

Speech speed

147 words per minute

Speech length

891 words

Speech time

363 seconds

Kenya has established 20 community networks with a target of 100, demonstrating viability but requiring focus on sustainability components

Explanation

Kenya’s experience shows that community networks are viable alternatives for connectivity, with 20 networks already established and government support for reaching 100 networks, but the main challenge now is proving sustainable means of providing affordable connectivity.


Evidence

Kenya’s realization that after 20 years of GSM investment, only 30% of the country has meaningful connectivity, leading to community networks as solutions for the remaining 70%; support from Internet Society seed funding and APC; Communications Authority of Kenya’s community network service provider license


Major discussion point

Bridging the Connectivity Gap for Excluded Communities


Topics

Infrastructure | Development


After 20 years since WSIS, similar gaps persist with 2.6 billion people still unconnected, indicating current approaches aren’t working

Explanation

Despite two decades since the World Summit on Information Society, the same digital divide issues persist with 2.6 billion people remaining unconnected, suggesting that current approaches to bridging the digital divide are insufficient and community networks may be the only viable solution.


Evidence

Consistent figure of 2.6 billion unconnected people from 2022-2025, indicating lack of progress with traditional approaches


Major discussion point

Bridging the Connectivity Gap for Excluded Communities


Topics

Development | Infrastructure


Agreed with

– Onica Makwakwa
– Thobekile Matimbe

Agreed on

Current approaches to measuring and achieving connectivity are inadequate and need fundamental reform


Social entrepreneurship integration is critical for addressing sustainability and social impact of community connectivity initiatives

Explanation

Integrating community-centered connectivity initiatives with social entrepreneurship and social enterprise development has been a critical factor in addressing both sustainability challenges and social impact issues, particularly in the Asia-Pacific region.


Evidence

Experience from Asia-Pacific region and Philippines showing successful integration of connectivity with social and solidarity economy approaches


Major discussion point

Bridging the Connectivity Gap for Excluded Communities


Topics

Economic | Development


N

Nnenna Paul-Ugochukwu

Speech speed

126 words per minute

Speech length

1451 words

Speech time

690 seconds

Bridging the last-mile connectivity gap is crucial for ensuring digital inclusion and achieving meaningful connectivity for all

Explanation

The moderator emphasizes that while Internet access has expanded globally, millions remain unconnected, particularly in rural, remote, and underserved regions. Addressing last-mile connectivity challenges is essential for digital inclusion and ensuring equitable access to connectivity.


Evidence

Reference to millions remaining unconnected in rural, remote, and underserved regions despite global Internet expansion


Major discussion point

Bridging the Connectivity Gap for Excluded Communities


Topics

Development | Infrastructure


Innovative solutions including community networks, public-private partnerships, and emerging technologies are needed to address last-mile challenges

Explanation

The session aims to explore various innovative approaches to connectivity challenges, including community networks, public-private partnerships, emerging technologies like LEO satellites, 5G expansion, and alternative spectrum management. These solutions should generate actionable insights for global Internet governance discussions.


Evidence

Session framework covering community networks, public-private partnerships, LEO satellites, 5G expansion, and alternative spectrum management approaches


Major discussion point

Bridging the Connectivity Gap for Excluded Communities


Topics

Infrastructure | Legal and regulatory


Transparency in Universal Service Funds fosters trust and strengthens public-private partnerships for connectivity initiatives

Explanation

The moderator emphasizes that transparency in how Universal Service Funds are managed and deployed is essential for building trust among stakeholders. This transparency creates a foundation for stronger collaboration between public and private sector partners in connectivity initiatives.


Evidence

Reference to the importance of transparency in Universal Service Fund management discussed by panelists


Major discussion point

Bridging the Connectivity Gap for Excluded Communities


Topics

Legal and regulatory | Development


Solutions should be context-informed and people-focused to effectively address connectivity challenges

Explanation

The moderator synthesizes that effective connectivity solutions must be tailored to specific local contexts and centered on the needs of the people they serve. This approach ensures that interventions are relevant and sustainable for the communities they aim to connect.


Evidence

Synthesis of panelist discussions emphasizing context-informed and people-focused approaches


Major discussion point

Bridging the Connectivity Gap for Excluded Communities


Topics

Development | Sociocultural


Agreed with

– Christopher Locke
– Leon Cristian

Agreed on

Community participation and local context are essential for successful connectivity initiatives


Agreements

Agreement points

Community networks require sustainability beyond continuous grants through local business models

Speakers

– Christopher Locke
– Onica Makwakwa
– Thobekile Matimbe

Arguments

Community networks should be developed as sustainable business models that can cover costs through local pricing systems rather than continuous grants


Universal service and access funds need reform with public reporting and openness to addressing demand-side issues like digital skills and devices


Enabling regulatory frameworks are essential for community-centered connectivity initiatives to thrive with multi-stakeholder approaches


Summary

All speakers agree that community networks must move beyond dependency on grants to achieve long-term sustainability through local revenue generation, proper business models, and supportive regulatory frameworks


Topics

Economic | Development | Legal and regulatory


Connectivity should be treated as a fundamental right requiring government support and policy reform

Speakers

– Onica Makwakwa
– Leon Cristian
– Thobekile Matimbe

Arguments

Universal affordable access should be prioritized as a right, embedded in development policies and rights frameworks


Market failures require public investment and public-private alliances with greater community participation


Enabling regulatory frameworks are essential for community-centered connectivity initiatives to thrive with multi-stakeholder approaches


Summary

Speakers agreed that connectivity is a fundamental right requiring active government intervention, policy reform, and public investment rather than reliance solely on market forces


Topics

Human rights | Legal and regulatory | Development


Current approaches to measuring and achieving connectivity are inadequate and need fundamental reform

Speakers

– Onica Makwakwa
– Thobekile Matimbe
– Audience

Arguments

Current connectivity standards are inadequate – defining a connected person as someone who uses internet once every three months is underwhelming


Fewer than four of the 27 African countries surveyed are transparent about Universal Service Fund resources and initiatives


After 20 years since WSIS, similar gaps persist with 2.6 billion people still unconnected, indicating current approaches aren’t working


Summary

Strong agreement that existing measurement standards, funding mechanisms, and approaches to connectivity have failed to deliver meaningful results over the past 20 years


Topics

Development | Legal and regulatory


Community participation and local context are essential for successful connectivity initiatives

Speakers

– Christopher Locke
– Leon Cristian
– Nnenna Paul-Ugochukwu

Arguments

Community readiness evaluation must include technology solutions, governance structures, and business training alongside technical training


Including local communities brings diversity and indigenous perspectives essential for building technology for the future


Solutions should be context-informed and people-focused to effectively address connectivity challenges


Summary

All speakers emphasize that successful connectivity solutions must be designed with meaningful community participation, local context consideration, and indigenous perspectives


Topics

Development | Sociocultural


Similar viewpoints

Both speakers acknowledge that while LEO satellites offer connectivity solutions for remote areas, they simultaneously create new challenges around pricing, regulation, data sovereignty, and national security that need to be addressed

Speakers

– Christopher Locke
– Leon Cristian

Arguments

Low Earth Orbit satellites can bridge remote communities but face sustainability challenges due to pricing instability and regulatory issues


New technologies like LEO satellites create problems related to data sovereignty, spectrum allocation, and national security that didn’t exist before


Topics

Infrastructure | Legal and regulatory


Both speakers recognize that the digital divide has evolved beyond basic connectivity to include the infrastructure and capacity needed for advanced technologies, creating multiple layers of digital exclusion

Speakers

– Onica Makwakwa
– Leon Cristian

Arguments

Countries face double digital divide – lacking both meaningful connectivity and infrastructure to power advanced technologies


The digital divide now includes not just connectivity but capacity to run AI, quantum computation, and blockchain technologies


Topics

Infrastructure | Development


Both speakers advocate for moving away from traditional profit-centric business models toward more innovative, community-centered approaches that address affordability through structural changes rather than financing schemes

Speakers

– Christopher Locke
– Onica Makwakwa

Arguments

Multiple business models beyond profit-centric telco models are needed, including co-op models for sustainable community networks


Focus should shift from device financing schemes to actually lowering initial device costs through local assembly and right to repair


Topics

Economic | Development


Unexpected consensus

Regulatory power imbalances between governments and big tech companies

Speakers

– Leon Cristian
– Onica Makwakwa

Arguments

Starlink operates in Bolivia without permission, demonstrating regulatory power imbalances between states and big tech companies


Universal service and access funds need reform with public reporting and openness to addressing demand-side issues like digital skills and devices


Explanation

Unexpected consensus emerged around the challenge of big tech companies operating beyond national regulatory control, with speakers from different regions (Latin America and Africa) identifying similar patterns of corporate power superseding government authority in connectivity provision


Topics

Legal and regulatory | Human rights


The failure of traditional market-based approaches to connectivity after 20 years

Speakers

– Leon Cristian
– Audience
– Thobekile Matimbe

Arguments

Market failures require public investment and public-private alliances with greater community participation


After 20 years since WSIS, similar gaps persist with 2.6 billion people still unconnected, indicating current approaches aren’t working


Fewer than four of the 27 African countries surveyed are transparent about Universal Service Fund resources and initiatives


Explanation

Surprising level of agreement across speakers from different sectors that market-based approaches have fundamentally failed to deliver connectivity goals, requiring a complete rethinking of approaches rather than incremental improvements


Topics

Economic | Development | Legal and regulatory


Overall assessment

Summary

Strong consensus emerged around four main areas: the need for sustainable community-driven connectivity models, treating connectivity as a fundamental right requiring government intervention, the inadequacy of current measurement and funding approaches, and the essential role of community participation in solution design


Consensus level

High level of consensus with significant implications for policy reform. The agreement spans technical, economic, regulatory, and social dimensions, suggesting a comprehensive framework for addressing connectivity challenges. The consensus indicates a shift away from market-only solutions toward rights-based, community-centered approaches with strong government support and reformed international frameworks.


Differences

Different viewpoints

Optimism vs. Pessimism about technological solutions and future prospects

Speakers

– Christopher Locke
– Leon Cristian

Arguments

Low Earth Orbit satellites can bridge remote communities but face sustainability challenges due to pricing instability and regulatory issues


New technologies like LEO satellites create problems related to data sovereignty, spectrum allocation, and national security that didn’t exist before


Summary

Christopher Locke presents a cautiously optimistic view of LEO satellites as solutions for remote connectivity despite challenges, while Leon Cristian emphasizes how these same technologies create new problems and complexities that didn’t exist before, taking a more pessimistic stance on technological solutions.


Topics

Infrastructure | Legal and regulatory


Scope and complexity of the digital divide

Speakers

– Onica Makwakwa
– Leon Cristian

Arguments

Current connectivity standards are inadequate – defining a connected person as someone who uses internet once every three months is underwhelming


The digital divide now includes not just connectivity but capacity to run AI, quantum computation, and blockchain technologies


Summary

Onica Makwakwa focuses on improving current connectivity standards and meaningful access for existing technologies, while Leon Cristian argues the digital divide has expanded to include advanced technologies like AI and quantum computing, representing different views on prioritization.


Topics

Development | Infrastructure


Unexpected differences

Technology accessibility for developing countries

Speakers

– Leon Cristian
– Audience (Lee McKnight)

Arguments

Countries face double digital divide – lacking both meaningful connectivity and infrastructure to power advanced technologies


After 20 years since WSIS, similar gaps persist with 2.6 billion people still unconnected, indicating current approaches aren’t working


Explanation

Leon Cristian argued that advanced technologies like AI and blockchain are inaccessible to developing countries, but an audience member (Lee McKnight) directly challenged this by announcing open source solutions for bringing these technologies to indigenous communities in the Amazon, creating an unexpected disagreement about technological accessibility.


Topics

Infrastructure | Development


Overall assessment

Summary

The discussion showed relatively low levels of fundamental disagreement, with most speakers aligned on core goals of digital inclusion and community-centered approaches. Main disagreements centered on optimism levels about technological solutions and the scope of digital divide challenges.


Disagreement level

Low to moderate disagreement level. Speakers generally agreed on problems and goals but differed on emphasis, approaches, and outlook. The disagreements were more about perspective and prioritization rather than fundamental opposition, which suggests productive potential for collaborative solutions despite different viewpoints on implementation strategies.




Takeaways

Key takeaways

Connectivity should be treated as a fundamental right, not a privilege, and embedded in development policies and human rights frameworks


The digital divide is becoming increasingly complex, extending beyond basic connectivity to include capacity for advanced technologies like AI and quantum computing


Community networks are viable alternatives to traditional telco models but require sustainable business models, not continuous grant dependency


Current measurement standards for connectivity are inadequate – meaningful connectivity requires daily access with minimum 4G speeds, not the current standard of internet use once every three months


Device affordability remains a critical barrier, with people spending 20-60% of household income on devices, exacerbated by 20-45% taxation rates


Universal Service and Access Funds are underutilized and lack transparency, with fewer than four of the 27 African countries surveyed being transparent about their use


Regulatory frameworks must enable community-centered initiatives through affordable spectrum licensing and support for diverse business models


Multi-stakeholder approaches involving government, private sector, civil society, and local communities are essential for sustainable solutions


Social impact measurement and integration with social entrepreneurship are critical for demonstrating value beyond simple connectivity metrics


Resolutions and action items

Advocate collectively for policy incentives including access to universal service funds for multi-stakeholder projects


Collaborate to share disaggregated data on coverage gaps, affordability, and digital use to guide investment decisions


Develop local digital ecosystems by incubating local startups and encouraging device repair networks


Reform universal service and access funds to address demand-side issues like digital skills and affordable devices


Create enabling regulatory environments that support community networks through flexible spectrum access and license fee exemptions


Integrate digital inclusion in broader economic and social policies linking connectivity with digital literacy and local content


Establish public reporting requirements for universal service fund utilization and impact


Support development of local assembly and right to repair initiatives to reduce device costs


Unresolved issues

How to address regulatory power imbalances between states and big tech companies, particularly when companies like Starlink operate without local permissions


Specific mechanisms for ensuring long-term sustainability of community networks beyond initial grant funding


How to balance investment in infrastructure with necessary investments in rights protection mechanisms


Concrete strategies for reducing device costs beyond taxation reform, particularly achieving the promised affordable devices


How to address the emerging ‘double digital divide’ where countries lack both connectivity and infrastructure for advanced technologies


Specific metrics and standards for measuring meaningful connectivity impact across different contexts and communities


How to scale successful community network models like Kenya’s approach to other countries with different regulatory environments


Addressing the governance challenges in international crisis contexts where cooperation models are disappearing


Suggested compromises

Adopt multiple business models for connectivity including co-op models and community-centered approaches rather than relying solely on profit-centric telco models


Balance market-based solutions with public investment and public-private partnerships that include greater community participation


Use spectrum licensing as a development tool rather than primarily as government revenue generation


Combine infrastructure investment with capacity building in business training alongside technical training for community networks


Integrate device financing schemes with efforts to reduce initial device costs through local assembly and repair networks


Develop context-informed and people-focused solutions that adapt to specific country and regional needs rather than one-size-fits-all approaches


Thought provoking comments

We’re still in the very early stages of LEO Internet. And not only is the price initially expensive, but also we’re increasingly seeing that as the networks become clogged, the prices sometimes are quite dynamic based on demand… in some African cities with Starlink, where they’re pretty much booked out.

Speaker

Christopher Locke


Reason

This comment was insightful because it challenged the common assumption that new technologies like LEO satellites are straightforward solutions to connectivity gaps. Locke introduced the complexity of dynamic pricing and capacity constraints that aren’t often discussed in connectivity debates.


Impact

This shifted the conversation from viewing LEO satellites as a panacea to understanding them as part of a complex ecosystem with their own limitations. It set the stage for discussing sustainability challenges and prepared the ground for later discussions about regulatory frameworks needed to manage these new technologies.


We need to stop having poor policies for poor people. You know, poor phones for poor people… There’s a big difference between you can afford a phone over three months and you can afford a phone now.

Speaker

Onica Makwakwa


Reason

This was a powerful reframing that challenged the entire approach to digital inclusion. Instead of accepting substandard solutions for underserved populations, Makwakwa argued for raising standards and addressing root causes of affordability rather than just financing schemes.


Impact

This comment fundamentally shifted the discussion from incremental improvements to systemic change. It influenced subsequent speakers to think more critically about business models and policy approaches, moving beyond technical solutions to address equity and dignity in connectivity provision.


Starlink is operating in my country without permission… they said, I don’t need that, I don’t need to put an office in your country. I can operate and I can provide my services even if I don’t fulfill all their requirements.

Speaker

Leon Cristian


Reason

This concrete example exposed the power imbalances between global tech companies and national governments, particularly in the Global South. It illustrated how technological solutions can undermine sovereignty and regulatory frameworks.


Impact

This comment introduced a critical dimension to the discussion – the tension between connectivity solutions and national sovereignty. It deepened the conversation by showing how the digital divide isn’t just about access, but about who controls that access and under what terms.


I have been in one engagement with one government on the African continent where we are discussing digital inclusion for underserved communities and the feedback was from one member of parliament that look, do you really think my grandmother needs a smartphone?

Speaker

Thobekile Matimbe


Reason

This anecdote powerfully illustrated the fundamental disconnect between policymakers and the reality of digital inclusion needs. It revealed how basic assumptions about who deserves connectivity access still need to be challenged at the highest levels of government.


Impact

This comment brought the discussion back to ground-level realities and highlighted that technical and policy solutions mean nothing without political will and understanding. It emphasized the need for advocacy and education at the political level, not just technical implementation.


Now the digital divide is not only about having or not meaningful connectivity, it’s also about having enough capacity to run AI’s, quantum computation, blockchains, cryptos… So there is another connectivity divide, there is another digital divide that is happening right now.

Speaker

Leon Cristian


Reason

This observation was profound because it revealed how the digital divide is not static but evolving and potentially widening. While efforts focus on basic connectivity, new technological requirements are creating additional layers of exclusion.


Impact

This comment expanded the scope of the entire discussion, forcing participants to think beyond current connectivity challenges to future technological requirements. It added urgency to the conversation and highlighted the risk of countries falling further behind even as they work to address current gaps.


We need to understand there are many business models to providing connectivity… Mimicking a small version of being a telco isn’t the way to build a sustainable community network. There are co-op models, there are many other models that allow us to develop that.

Speaker

Christopher Locke


Reason

This insight challenged conventional thinking about how connectivity services should be structured and funded. It opened up possibilities for community-centered approaches that don’t rely on traditional profit-driven models.


Impact

This comment redirected the conversation toward innovative governance and business models, influencing other speakers to discuss community ownership, social enterprises, and alternative sustainability approaches. It helped frame community networks as fundamentally different from commercial operations.


Overall assessment

These key comments collectively transformed what could have been a technical discussion about connectivity solutions into a nuanced examination of power, equity, and systemic change. The speakers didn’t just present problems and solutions, but challenged fundamental assumptions about how digital inclusion should be approached. Cristian’s observations about regulatory power imbalances and evolving digital divides added critical complexity, while Makwakwa’s call to stop accepting substandard solutions for poor communities reframed the entire equity discussion. Locke’s insights about LEO satellite limitations and alternative business models grounded the conversation in practical realities while opening up new possibilities. Matimbe’s anecdote about political resistance brought the discussion back to the human and political dimensions that often determine success or failure of technical solutions. Together, these comments elevated the discussion from a typical policy panel to a critical examination of how power, technology, and equity intersect in the digital inclusion space.


Follow-up questions

How can we evaluate the impact of community networks and what specific metrics should be used to measure meaningful connectivity beyond basic access?

Speaker

Nnenna Paul-Ugochukwu


Explanation

This is crucial for demonstrating the value of community networks to governments and funders, and for moving beyond simple connectivity statistics to measure transformative impact on communities


What are the specific business training components needed alongside technology training for sustainable community networks?

Speaker

Christopher Locke


Explanation

Understanding the business model aspects is essential for creating self-sustaining community networks that don’t require continuous grant funding


How can universal service and access funds be reformed to be more effective and transparent in supporting last-mile connectivity?

Speaker

Onica Makwakwa


Explanation

Current universal service funds are underutilized and lack transparency, representing a significant missed opportunity for bridging connectivity gaps


What specific policy incentives and regulatory frameworks are needed to support community networks and alternative connectivity models?

Speaker

Multiple speakers (Christopher Locke, Onica Makwakwa, Thobekile Matimbe)


Explanation

Regulatory barriers are preventing community networks from scaling and becoming sustainable, requiring specific policy reforms


How can device affordability be addressed through local assembly, taxation reform, and right-to-repair initiatives?

Speaker

Onica Makwakwa


Explanation

Device costs remain a major barrier to access, and current approaches focusing on financing rather than reducing actual costs are insufficient


How can countries in the Global South address the emerging ‘double digital divide’ related to AI, quantum computing, and advanced technologies?

Speaker

Leon Cristian


Explanation

A new layer of digital divide is emerging where countries lack the infrastructure and computing capacity to access advanced technologies


How can accountability mechanisms be established for large tech companies operating in countries without local presence or compliance with national requirements?

Speaker

Leon Cristian


Explanation

Power imbalances between states and big tech companies are creating governance challenges, as illustrated by Starlink operating in Bolivia without permission


What are the best practices for integrating community-centered connectivity initiatives with social entrepreneurship and social enterprise development?

Speaker

Lisa Dakanay


Explanation

Understanding how to combine connectivity initiatives with social entrepreneurship could improve sustainability and social impact


How can we develop better data collection methods that are disaggregated by gender, income levels, and other demographic factors?

Speaker

Onica Makwakwa


Explanation

Current data collection relies on national averages that mask inequalities and prevent targeted interventions for excluded populations


What are the sustainability models for community networks beyond the initial grant funding phase?

Speaker

Bara Kotieno


Explanation

While community networks can be established with seed funding, long-term sustainability remains a challenge that needs to be addressed


How can open source technologies and edge computing be leveraged to make advanced technologies accessible in remote and underserved areas?

Speaker

Lee McKnight


Explanation

Exploring alternatives to centralized, expensive technology infrastructure could democratize access to advanced digital tools


What social impact assessment methodologies should be adopted to measure the broader effects of community networks on health, education, and economic development?

Speaker

Christopher Locke


Explanation

Better measurement tools are needed to demonstrate the full value of community networks beyond simple connectivity metrics


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Day 0 Event #79 WGIG+20 Glancing Backward and Looking Forward

Session at a glance

Summary

This discussion was a 20th anniversary reunion of the Working Group on Internet Governance (WGIG), celebrating the group’s contributions to the World Summit on the Information Society (WSIS) process and examining the current state of internet governance. The session was organized into three segments: the nature of internet governance, the relationship between multilateral and multistakeholder approaches, and the future of the Internet Governance Forum (IGF).


Participants emphasized that WGIG played a crucial role in legitimizing multistakeholder cooperation within the United Nations system, demonstrating that diverse stakeholders could work together effectively on complex technical and policy issues. The group developed a broad definition of internet governance that has stood the test of time, encompassing not just technical infrastructure but also the broader use of the internet and related policy issues. Several speakers shared anecdotes about how the process fostered unprecedented collaboration between stakeholders who had previously been antagonistic, such as government representatives, civil society, business, and technical communities.


The discussion revealed that the false dichotomy between multilateral and multistakeholder approaches has evolved, with recognition that both models must coexist and complement each other. Participants noted that while the IGF has successfully served as a global forum for dialogue and capacity building, it faces challenges including limited bottom-up participation, difficulty addressing controversial topics, and questions about its long-term sustainability and funding.


Looking forward, speakers identified the need for the IGF to mature in handling contentious issues, improve its methodology for generating meaningful outcomes, and potentially establish a new working group to address the relationship between internet governance, data governance, AI governance, and broader digital governance challenges. The discussion concluded with recognition that despite the passage of 20 years, the fundamental principles and collaborative spirit established by WGIG remain relevant and necessary for addressing contemporary digital governance challenges.


Keypoints

Overall Purpose/Goal


This discussion was a 20th anniversary reunion of the Working Group on Internet Governance (WGIG), bringing together original members to reflect on their impact, assess the current state of internet governance, and discuss the future of the Internet Governance Forum (IGF). The session aimed to evaluate whether the multi-stakeholder approach they pioneered has lived up to expectations and what changes might be needed going forward.


Major Discussion Points


– WGIG’s Historical Impact and Methodology: Participants reflected on how WGIG successfully demonstrated multi-stakeholder cooperation within the UN system, created the first comprehensive definition of internet governance, and established innovative processes like public consultations and transparent documentation. The methodology of allowing diverse viewpoints while finding workable compromises was highlighted as a key achievement.


– Multi-stakeholder vs. Multilateral Governance Models: A central debate focused on whether these two approaches should be seen as competing or complementary. Speakers argued against treating this as a false dichotomy, emphasizing that both models need to coexist and work together, with the Tunis compromise representing a successful marriage of both approaches under the UN umbrella.


– IGF’s Evolution and Current Challenges: Discussion covered whether the IGF has fulfilled its original mandate as a forum for open dialogue. While praised for capacity building and providing a neutral space for discussion, concerns were raised about its limited decision-making power, funding sustainability, and whether it has become too insular or controlled from the top down rather than truly bottom-up.


– **Contemporary Governance Challenges**: Participants grappled with how internet governance has evolved to encompass AI governance, data governance, and digital rights issues. There was debate about whether the IGF is mature enough to handle controversial topics and whether the original broad definition of internet governance remains relevant for today’s challenges.


– **Future Directions and Institutional Reform**: The conversation explored whether a new WGIG-style working group might be needed to address current governance gaps, how to better engage governments and decision-makers, and whether the IGF’s mandate needs updating to remain relevant in the current digital landscape.


## Overall Tone


The discussion began with a celebratory, nostalgic tone as participants shared anecdotes and reflected positively on their collaborative achievements 20 years ago. However, the tone gradually became more critical and forward-looking, with speakers raising challenging questions about current limitations and future needs. While maintaining collegiality and mutual respect, participants weren’t afraid to voice controversies and disagreements, particularly around issues of bottom-up governance, the role of governments, and whether current multi-stakeholder models are sufficient for today’s challenges. The session ended on a pragmatic note, acknowledging both successes and the need for continued evolution.


Speakers

**Speakers from the provided list:**


– **Markus Kummer** – Session moderator/chair, formerly of the Swiss diplomatic service, celebrating the 20-year anniversary of the Working Group on Internet Governance (WGIG)


– **William J. Drake** – WGIG member, session organizer, edited books on WGIG in 2005 and 2015


– **Ayesha Hassan** – Former WGIG member representing ICC (International Chamber of Commerce)/global business community


– **Raul Echeberria** – Former WGIG member representing technical community


– **Wolfgang Kleinwachter** – Former WGIG member representing civil society and academic community


– **Avri Doria** – Former WGIG member, came in as a techie, past MAG member and MAG chair


– **Jovan Kurbalija** – Former WGIG member, Executive Director of the UN High-Level Panel, author of “Introduction to Internet Governance”


– **Alejandro Pisanty** – Former WGIG member, participating remotely from Mexico City


– **Carlos Afonso** – Former WGIG member, participating remotely from Rio


– **Baher Esmat** – Works for ICANN, participating remotely


– **Vittorio Bertola** – Youngest member of WGIG at the time, described as its least diplomatic member


– **Charles Shaban** – Remote moderation coordinator, intellectual property practitioner


– **Bertrand de la Chapelle** – Executive Director of the Internet and Jurisdiction Policy Network


– **Anriette Esterhuysen** – Past MAG member and past MAG chair


– **Israel Rosas** – Internet Society representative


– **Jimson Olufuye** – Africa ICT Alliance


– **Hadi Alminyawi** –


– **Sébastien Bachelet** – ISOC France and EURALO representative


– **Participant** – Government representative (specific identity not disclosed)


– **Audience** – Nandini from IT4Change India, part of Civil Society Coalition Global Digital Justice Forum


**Additional speakers:**


– **Shaima Akhtar** – Chairperson, Bangladesh Women IGF (participated via online question)


Full session report

# 20th Anniversary Reunion of the Working Group on Internet Governance (WGIG): A Comprehensive Assessment of Past Achievements and Future Challenges


## Executive Summary


This discussion marked the 20th anniversary reunion of the Working Group on Internet Governance (WGIG), bringing together original members to reflect on their groundbreaking contributions to the World Summit on the Information Society (WSIS) process and examine the current state of internet governance. The session, moderated by Markus Kummer and organised by William J. Drake, opened with a lighthearted moment when Kummer shared that he had asked AI “does WGIG matter?” and received an affirmative response highlighting WGIG’s lasting contributions to multistakeholder governance.


Despite technical difficulties that set an informal, reunion-like atmosphere, the discussion evolved into a substantive examination of three key themes: the nature and enduring relevance of internet governance, the relationship between multilateral and multistakeholder approaches, and the future of the Internet Governance Forum (IGF). The conversation balanced celebration of WGIG’s achievements with frank assessment of contemporary challenges, revealing both pride in past accomplishments and concern about current governance limitations.


## Historical Impact and Methodology of WGIG


### Foundational Achievements


William J. Drake outlined seven ways that WGIG made a lasting impact, which participants consistently endorsed throughout the discussion. These contributions included demonstrating that multistakeholder cooperation could work effectively within the UN system, helping facilitate WSIS negotiations by systematically mapping issues and positions, promoting broader public engagement through innovative consultation processes, demystifying internet governance by establishing that governance does not mean government control, developing a working definition of internet governance that has proven remarkably durable, proposing the creation of the IGF as a forum for continued dialogue without negotiation pressure, and creating methodological innovations for consensus-building.


Markus Kummer emphasised that the WGIG report found its way into the final WSIS outcome and significantly impacted the process. Raul Echeberria noted that WGIG strengthened the concept of multistakeholderism and consolidated the idea that participation from all stakeholders is crucial, sharing an anecdote about how discussions of root server A brought together diverse perspectives. Wolfgang Kleinwächter highlighted how WGIG created a unique culture of collaboration where every stakeholder brought different expertise to the table, transforming previously antagonistic relationships into productive cooperation.


### Methodological Innovation and the Role of Frank March


A particularly poignant aspect of the discussion focused on WGIG’s innovative methodology and the crucial role of Frank March, the secretary who has since passed away. Avri Doria provided detailed insights about March’s approach, describing how he would write while listening and incorporating real-time feedback from participants. She explained: “Frank March wrote while listening. And he would write and then he would read back what he wrote and people would say, no, that’s not what I meant. And he would change it. And he would read it back again.”


This methodological innovation proved particularly significant when contrasted with current governance processes. Jovan Kurbalija observed that contemporary processes often suffer from what he termed a “governance Bermuda Triangle” where stakeholder contributions disappear without clear traceability to final documents. This observation sparked broader reflection on how governance processes have evolved away from the transparent, participatory model that made WGIG successful.


## Evolution and Enduring Relevance of Internet Governance Definition


### Definitional Durability and Universal Applicability


There was remarkable consensus among participants that WGIG’s definition of internet governance has stood the test of time and remains applicable to contemporary challenges. Wolfgang Kleinwächter provided detailed analysis of the definition’s three key elements: multistakeholder approaches (involving all stakeholders in their respective roles), collaborative approaches (working together), and holistic approaches (covering all relevant issues). He argued that these elements make the definition universal and applicable to emerging governance challenges including AI governance.


Ayesha Hassan noted that the definition has adapted to new technologies, sharing an anecdote about how even Cuba and global business representatives found common ground in supporting the multistakeholder approach. Carlos Afonso reinforced the definition’s enduring relevance by noting that the WGIG report identified key public policy areas and fundamental issues that remain valid today.


### Contemporary Applications and Conceptual Clarity


The discussion revealed how internet governance has naturally expanded to encompass new technologies while maintaining its core framework. However, this expansion also raised questions about conceptual clarity. William J. Drake suggested the need for new work to address the relationship between internet governance, data governance, AI governance, and broader digital governance to reduce conceptual confusion and provide clearer guidance for policy-makers.


## Multistakeholder versus Multilateral Governance Models


### Moving Beyond False Dichotomies


One of the most significant areas of discussion concerned the relationship between multistakeholder and multilateral governance approaches. There was strong consensus that treating these as competing models represents a false dichotomy, with multiple speakers arguing that both approaches must coexist and complement each other.


Avri Doria emphasised that both models must work together rather than in opposition, while Markus Kummer noted that multilateralism protects smaller countries and should not exclude multistakeholderism. Jimson Olufuye added that multistakeholderism helps governments fulfil their responsibilities to citizens rather than taking over government work.


### Provocative Perspectives and Real-World Tensions


However, Alejandro Pisanty provided a more provocative view, arguing that all internet governance problems are better solved by multistakeholder mechanisms and suggesting that countries pushing for multilateral approaches are also pushing against internet freedom. He framed this as an “acid test” that challenged the diplomatic tendency to treat both approaches as equally valid.


Charles Shaban emphasised the need to find ways for multilateral and multistakeholder mechanisms to work together effectively, suggesting that the challenge lies not in choosing between models but in designing effective interfaces between them.


## Assessment of the Internet Governance Forum


### Achievements and Evolution


The discussion of the IGF revealed both appreciation for its achievements and frank acknowledgement of its limitations. Baher Esmat highlighted the IGF’s role as the primary global multistakeholder forum providing space for open discussion and capacity building, noting that it has continuously evolved in topics and outcomes while maintaining its non-decision-making nature as a strength.


Jovan Kurbalija praised the IGF’s value for capacity building and creating incremental development of new methodologies. Raul Echeberria noted the IGF’s evolution, mentioning how the 2013 IGF he organized addressed surveillance issues that became highly relevant. These positive assessments emphasised the IGF’s success in creating a neutral space for dialogue and its role in developing governance capacity globally.


### Critical Assessments and Uncomfortable Truths


However, the discussion also featured sharp critiques of the IGF’s performance and limitations. Vittorio Bertola, identified as the former youngest member of WGIG, provided perhaps the most damning assessment, arguing that the IGF failed to address economic and social questions due to lack of enforcement mechanisms against private sector actors. He observed: “the people that could make money out of breaking down the internet and turning it into walled gardens, they just went on and made money. And nobody could stop them because we had no stick.”


Most strikingly, Bertola cited a UK survey showing that half of young people believe they would be better off if the internet didn’t exist, serving as a devastating indictment of how far the internet has diverged from its original promise.


Avri Doria challenged one of the IGF’s core mythologies by stating bluntly: “Any notion we have that IGF has bottomed up is something that we should quit pretending. It is not. It hasn’t been.” This critique of the IGF’s democratic legitimacy sparked discussion about the gap between rhetoric and reality in multistakeholder governance.


### Capacity for Controversial Topics


A significant portion of the IGF discussion focused on its capacity to handle controversial topics. Jovan Kurbalija argued that enhanced cooperation should be brought as a track on the first day of IGF, questioning why such topics are avoided. Alejandro Pisanty contended that the IGF is mature enough to handle controversial issues but that some stakeholders are not ready for such discussions.


## Contemporary Challenges and Future Directions


### Institutional Reform Needs


The discussion revealed growing recognition that significant institutional reforms may be necessary to address contemporary digital governance challenges. William J. Drake suggested the need for new work to address the relationship between different forms of digital governance. Bertrand de la Chapelle proposed a new multistakeholder working group to address the IGF’s future after the WSIS+20 process.


Jovan Kurbalija presented three key questions for the IGF’s future: whether it should continue as a discussion forum or gain decision-making capacity, how to improve national and regional IGF processes, and how to ensure financial sustainability. He also suggested revisiting the Tunis compromise formula that balanced multistakeholder participation with the UN umbrella.


### Addressing Democratic Deficits and Inclusion


A recurring theme throughout the discussion was concern about democratic deficits in current governance processes. Nandini from IT4Change India raised concerns about digital governance issues being moved to closed-door trade negotiations, undermining democratic governance.


Charles Shaban emphasised the need for intersectional and rights-based approaches to digital governance centring marginalised communities, while Shaima Akhtar asked whether governance frameworks should evolve to be more inclusive of women, youth, and marginalised communities.


### Technical Innovation in Governance Processes


The discussion highlighted potential for innovation in governance processes themselves. Jovan Kurbalija noted that AI tools could now help trace contributions to final documents, though he emphasised that human approaches remain preferable. This observation highlighted how technological advances might support more transparent and participatory governance processes, potentially addressing the “governance Bermuda Triangle” problem he identified.


## Unresolved Tensions and Future Challenges


### Enforcement versus Dialogue


One of the most significant unresolved issues was how to balance multistakeholder governance with the need for enforcement mechanisms against powerful private sector actors. Bertola’s observation that governance without enforcement mechanisms allowed harmful actors to fragment the internet into commercial silos highlighted a fundamental challenge that remains unaddressed.


### Democratic Legitimacy and Genuine Participation


Despite broad agreement on many issues, the discussion revealed significant disagreements about the IGF’s democratic legitimacy and bottom-up nature. The tension between multistakeholder rhetoric and reality emerged as a critical concern requiring attention.


### Scope and Effectiveness


Participants disagreed about the IGF’s effectiveness in addressing broader internet governance issues beyond technical coordination. This disagreement reflected fundamental tensions about the appropriate scope and ambition of internet governance institutions, and whether dialogue-focused forums can be effective in addressing structural economic and social problems.


## Conclusion: Legacy and Future Prospects


The 20th anniversary reunion of WGIG demonstrated both the enduring relevance of the group’s achievements and the significant challenges facing contemporary internet governance. The informal, reunion-like atmosphere, punctuated by technical difficulties and fond memories of Frank March, provided an appropriate setting for both celebration and critical self-reflection.


The discussion revealed that WGIG’s core contributions—legitimising multistakeholder cooperation within the UN system, developing a durable and universal definition of internet governance, and creating innovative consensus-building methodologies—remain valuable and applicable to current challenges. The definition’s three elements of multistakeholder, collaborative, and holistic approaches have proven remarkably adaptable to new technologies and governance challenges.


However, the conversation also revealed significant tensions and unresolved issues. The gap between multistakeholder rhetoric and reality, the challenge of addressing economic and social problems without enforcement mechanisms, and questions about democratic legitimacy and genuine participation all emerged as critical concerns. Perhaps most sobering was the recognition that despite technical successes in internet governance, broader social outcomes may not have lived up to the original promise of the internet.


The frank and sometimes uncomfortable nature of the discussion—exemplified by Bertola’s harsh assessments and Doria’s challenge to IGF mythology—demonstrated the intellectual maturity of the internet governance community and its willingness to confront difficult truths. This honest assessment, balanced with recognition of genuine achievements, provides a solid foundation for addressing the significant challenges that lie ahead.


The session ultimately affirmed that while WGIG’s foundational work remains relevant, new approaches and potentially new institutions may be needed to address contemporary challenges effectively. The collaborative spirit and methodological innovations that made WGIG successful—particularly the transparent, participatory approach exemplified by Frank March’s real-time writing and feedback process—offer valuable lessons for future governance efforts in an increasingly complex and contested digital landscape.


Session transcript

Markus Kummer: I don’t understand, he gives me signs. Look at all those people, with that many people, they probably have enough time for a sentence each. Yeah, no, we can’t see that, we have to turn around. You guys are sitting in the right place, because those are too far away. I cannot touch the Zoom. Okay. Can you hear me? Yes. I can still hear music in my… Okay, good morning everyone. It’s all a bit complicated here, we have to put our headsets on, and we are on channel 4. 5. Oh, sorry, channel 5. Yes, we are in room 5. Anyway, welcome everyone. And to the technician, can you stop the background music, please? I don’t know what’s… Background music is on 4. Got it? Okay, I was on the wrong channel. Well, it’s very complicated, yes. And we’re not young people anymore, we’re celebrating the 20-year anniversary of the Working Group on Internet Governance. And whatever you say, you can hear it because the microphones are on permanently, so if you have any side conversations, we can hear it. Anyway, it’s a great pleasure to have you all here. So it’s a class reunion to celebrate the 20th anniversary. We’re all still walking, we’re not in a wheelchair yet. And as somebody said, maybe… We have to ask for a doctor to be in the room just in case. Be that as it may, I took the liberty of using the AI assistant I have on my phone to ask, does WGIG matter? And it said, it’s a difficult question, and the answers are yes and no. But in conclusion, it said that WGIG played a role in the development and promotion of internet governance but may have had less impact than some would have desired. This is sort of a kind of wishy-washy intelligence, I would have thought. We were part of the process. We actually thought we had a tremendous impact. It was a bit of a game changer in the WSIS process, which before was very much government only, and we really opened the doors to multi-stakeholder participation. And we thought, OK, let’s look back a bit and also look forward a bit.
And we decided to have it in three segments. And when I say we, it was mainly Bill who put it together. Have one segment on the nature of internet governance, one segment on multi-stakeholder and multilateral, which is seen by some as an antagonistic situation, and then the last segment, the IGF, looking forward to the future of the IGF. Does the IGF live up to expectation? What should change? What not? And with that, I hand over to Bill, who provides some framing of the session. Please, Bill.


William J. Drake: OK. Thank you very much, Marcus, and hello, everybody. I’m going to take my headphones off so I don’t hear myself. Well, I thank you all for coming, first of all, because there’s a lot going on at the same time. We’re here not because we think that the WGIG report that was put out was a source of Talmudic wisdom for the ages, that we solved all the problems of the universe or anything like that. It was a negotiated document under very intense conditions, et cetera. However, there was a lot of impact, I think, on the process, and it’s been lasting, and I think it’s worth taking note of that. Since this is the 20th anniversary of the WSIS, how could we not talk about the 20th anniversary of the WGIG, which played an important role in bringing the WSIS to a successful conclusion? So that’s why I thought it would be useful to get the band back together again and do this. So I’m just going to make a few points about the impact of the WGIG, and that’s based in part on the two books we did together. Aside from the reports that were put out, members of the group got together and contributed chapters to two books that I edited, one in 2005 and the other in 2015 on the 10th anniversary, and in those books I had chapters about why WGIG mattered. I’m going to just run through a couple of points to try to level set us, to get everybody on the same page, because frankly, at this point, I don’t know how many people even remember the WGIG. I talked to people at the party last night, and a lot of people were like, you know, I come to the IGF, I don’t really know where the IGF came from. A lot of this history has just gone dark as we’ve all gotten older and new people have cycled in, et cetera. So I think it’s good to try to level set us by getting us on the same page and say, what was this thing?
So I’m just going to make three points about the procedural and institutional contributions, and then a couple of points about the substantive contributions, and then we’ll go to inclusive discussion with all the people that are here as well as the ones who are online. So I would start by saying that the WGIG demonstrated the benefits of multistakeholder cooperation in the United Nations in a way that really hadn’t been seen before in the Internet governance space. For those who were around during the WSIS process, you may remember that in the early stages of the process, stakeholders were being locked out of rooms, thrown out of rooms, told not to speak. The whole thing was a kind of a mess. It took a while for the whole multistakeholder ethos to start to kick in. The WGIG contributed, I think, substantially to legitimating that and showing that, in fact, multistakeholder collaboration could be effective and problem-solving, and that indeed stakeholders could make real contributions to the kind of procedural and substantive learning that everybody was doing as we came together and groped towards some shared understanding of things. Secondly, I’d say that the WGIG facilitated the WSIS negotiations. For those of you, again, who were around, you might remember that for the first couple of years in the Geneva cycle, people were saying the conversations are all over the place, we don’t know what’s going on, it’s not cumulative, we’re going nowhere, et cetera. The WGIG actually did a kind of systematic mapping and brought order to the discussion, worked through issues on a structured, deliberate and methodical basis that laid out the main issues at stake in a way that everybody could understand. The WGIG promoted public engagement.
We did a lot of innovative things that now we take for granted, but back then were new in terms of having public comments, having open transparent processes, having everything on the web, having simultaneous translation in the sessions, etc., etc. All of that stuff that IGF now does, back then it was all new in the United Nations context. In terms of substantive contributions, the WGIG played a major role, I think, in demystifying the nature and scope of Internet governance. There was a lot of debate, you might remember back then, about is there such a thing as Internet governance, does the term even make sense, or else you had people who said, well, if there is Internet governance, it just meant what ICANN does, or it just means what intergovernmental agencies like the ITU should be doing, etc. And we were able to sort of work through this concept and sort of demonstrate that governance does not mean government, right? That we needed a holistic, broad approach to Internet governance that took into account not just the underlying infrastructure, but also the use of the Internet and the rule systems that apply to privacy, digital trade, intellectual property, and so on, on transactions going over the Internet, etc. We developed a working definition, which was drawn from the political science literature on international regimes as it happens, that set out who does Internet governance, what does Internet governance consist of, and where is it done, etc. All that was important. And we kind of took the attention off the whole debate that was going on at the time about how the ITU might take over ICANN and so on. In fact, we kind of de-centered the ITU and that controversy completely by taking this broad kind of approach, which I think was good. Fifth, we began the holistic analysis of a broad range of issues. 
In light of that broad definition, we mapped out all the different issues that are part of the Internet governance ecosystem and galaxy and clustered them and made them a little bit more tractable in terms of discussions. We had a process where colleagues put out different visions for oversight of Internet governance of critical Internet resources. This was the first chance for people to put forward alternatives to the ITU as an intergovernmental solution. Various government members came up with proposals for a global Internet council, an intergovernmental global Internet policy council, all kinds of new types of things. None of these were agreed as a group, but they were simply listed for people’s information, and that helped shape the discussion going forward about enhanced cooperation and things like that. So that was, I think, probably useful in advancing the discussion. And of course, most importantly from the standpoint of this group, the WGIG proposed the creation of an Internet governance forum to continue the dialogue and to help solve the deadlocks that were occurring around Internet governance, to say, let’s have a permanent space attached to the United Nations where we can continue to have open discussion without the pressure of negotiating outcomes and so on. So those were seven ways in which the WGIG, I think, made a meaningful, impactful contribution to the conclusion of the WSIS process, but also helped lay the foundation for everything that’s gone on in the years subsequent. So that’s just a little background to refresh us for those who haven’t thought about the WGIG in 20 years. And now what we’re going to do is go through three forward-looking questions, and I think it’s important to think about how we can do that.
I’m also asking colleagues to think about how we think about some of the contemporary problems from the perspective of what we did together 20 years ago. So, thank you.


Markus Kummer: Thank you, Bill. And at a more basic level, I would say the WGIG report found its way into the final outcome of the WSIS. It was a very, very good report, and a significant impact of the WGIG was that we actually managed to feed into the process. Also, for those who attended the summits, there was a significant difference between the Geneva phase and the Tunis phase. The Tunis phase was by far more open in terms of procedure. In Geneva, there was a discussion group and some government said, hey, this guy is not government, he needs to be taken out. Whereas in Tunis, the ICANN community was present, and the chair of ICANN, or the CEO, was sometimes asked for his comments or opinion on some of the items. We have three members of the old WGIG group listed as contributors, so to speak, but others should feel free to chime in, and we will also open up to the other participants once the first segment has run its course. Aisha, you were a representative of ICC and people listened to what you had to say. Over to you, Aisha.


Ayesha Hassan: Thank you very much for the opportunity to speak to you. I think the discussion has come a long way, but the definition has stood the test of time. I think it is still valid, and it has been nicely shaped so that it has adapted. In terms of the nature of Internet governance today, it has expanded, in the sense that many emerging countries are using the Internet as a means of communication, and in the sense that many of the issues we are now talking about in terms of Internet governance extend to AI governance and other technologies. So the discussion that we started 20 years ago is now at a place where we want to keep looking for adaptable ways to address these issues. One of the things that I think is really important is the idea of partnerships, and the idea of cooperating and discussing things across the stakeholder groups, which really didn’t exist before. And I just want to take a moment to share one little anecdote. At the time I represented global business, and at one point our esteemed chairman, Nitin Desai, laughed and said, oh my god, do I hear Cuba and global business agreeing? And here’s my comrade next to me today. Yes, we learned to talk to each other, and I think that has lasted over the years and been fostered by the IGF itself, giving everybody an opportunity to talk with people that they may not otherwise have the chance to have a coffee and truly exchange with. I purposely emphasize this because it is an asset, and I think that’s part of what the WGIG started and the IGF has continued to foster. So I’ll stop with that at this point.


Markus Kummer: Thank you, Aisha, for that, and I think anecdotes are always good to enliven the debate a bit or the discussion, and it’s great history. Raul, can you come in?


Raul Echeberria: Yes, thank you very much, Marcus. It’s very nice to be here with all of you today, 20 years later. I don’t know why the rest of the people look 20 years older, not myself. I don’t know what happened with the rest of the people. WGIG was a really innovative experience, an innovative way to address the difficulties that have presented during the first phase of the summit. As Marcus said, it was very different. The first and the second phases of the summit were very different. It was an achievement itself. The WGIG, as Aisha also pointed out, all of us learned about how to work better with other stakeholders. And it was very helpful to have an informal discussion in Tunis. I remember also… bringing an anecdote that there were a lot of discussions in the wiki about who managed the root server A and some of us were trying to explain to the rest of to other people that the root server A was not important. That’s so I said no because why you question what we are our positions. No, I’m not questioning your positions What we are saying here is that the problem is not root server A. The problem is more complex. So that’s the is the the discussion the level of discussion in 2005 was very different than 2003. I think that the one of the things that one of the important outcomes of the wiki was also that we strengthen the the concept of multi-holderism and this is when we look nowadays the the organizations and the forums that are involved in in different ways on in the governance of Internet. Not all of them are pure multi-stakeholder models, but they all of them or most of them are open to the participation of all stakeholders and this is important because we consolidated the idea that the participation of all stakeholders is crucial. It’s important and it’s a very important still today and though things that sometimes we take us for granted are not. And they all the with all the political changes that we are seeing around the world. 
Discussions that we assumed belonged to the past will probably come back to the table in the near future, so it is important that we continue reinforcing the idea that the participation of all stakeholders is crucial. In a rapidly changing world, we need to be more efficient in the development of digital policies, and to have the right policies on time we need the participation of everybody from the inception of the processes, from the origin of the discussions. So I think the lessons we learned 20 years ago remain valid. I wouldn't say just valid; they are crucial for the future of digital governance in general. Thank you very much.


Markus Kummer: Thank you, Raul. And over to Wolfgang.


Wolfgang Kleinwachter: Thank you very much. I was a member of the WGIG on behalf of civil society and the academic community, and Juan was from government, Aisha from business and Raul from the technical community, which already demonstrates that there was a unique culture of collaboration, of trying to promote understanding. I will also start with an anecdote, because originally internet governance was not an issue for the WSIS in the mandate from the UN General Assembly in 2001. In the process of the second PrepCom, some people raised the issue of the internet, and then we had an intersessional meeting in Paris. This was a night meeting in the cellar, and it was unclear whether non-governmental people could join this Working Group 5. Now we are in Workshop Room 5. It was a new Working Group 5, because we had four other working groups, and there was no control at the entrance, so a lot of non-state actors moved into the room. A debate started, and somebody talked about IP addresses, and then the ambassador from a country, I will not name the country, said, what is this, an IP address? There was only little knowledge among governments at this time. Then Paul Wilson from APNIC stepped in and explained how IP addresses and domain names function. And the ambassador said, oh, this was very helpful, thank you very much. I think this was the start of the mutual recognition that every stakeholder can bring a different expertise to the table. But from Paris in July 2003 to December 2003, the understanding of the complexity was still at a low level; people did not understand what we were talking about. That's why the WGIG got the mandate to define what internet governance is. It had not only the mandate to work on recommendations, but also to give a definition.
And I think the discussion around this definition is really crucial, and Bill mentioned that already. When we worked on the definition, we had two options at the end of the day: a narrow definition, concentrating just on the critical internet resources, names and numbers, or a broad definition. Confronted with the complexity of the WSIS mandate, we decided in favor of the broad definition. This is what Bill also said about the who, the what and the how. These are the three elements defined in the three lines of the paragraph in the Tunis Agenda, which says that all stakeholders have to be involved in their respective roles. I was fighting until the very last moment to add "on equal footing", but Nitin said that "respective roles" already includes, in a certain way, that everybody is equal in their respective role. Anyhow, this was the first element: all stakeholders have to be involved. The second was sharing, sharing of norms, principles and even decision-making procedures; that is the collaborative approach. And the third part was that we differentiated between the evolution and the use of the internet, the technical and the political layer, and this was the holistic approach.
And if I take these three basic elements of the definition, the multi-stakeholder approach, the collaborative approach and the holistic approach, then I think this definition is really universal and can be used for all the governance aspects we are discussing today, 20 years later. Twenty years ago internet governance was the term that covered everything, but since then we have seen a lot of new language appearing around governance: digital governance, ICT governance, cyber governance, governance of the internet of things, IoT governance, and now AI governance. And I ask myself, what is the difference between AI governance and the broad definition of internet governance, if you just take these three approaches? AI governance has to be multi-stakeholder, AI governance has to be collaborative, and without a holistic approach you will fail to find any sustainable solutions for AI; that means in AI governance you also have to take into consideration both the technical elements and the public policy implications. Insofar, the confusion we had 20 years ago around internet governance sometimes reappears as confusion about AI governance, but it's rather simple: go back to what we produced 20 years ago. Thank you.


Markus Kummer: Thank you for that, Wolfgang. And yes, the Tunis Agenda has a whole chapter on internet governance with some explanatory paragraphs, and they make it clear that anything related to the use and abuse of the internet is part of internet governance. Also, these applications you mentioned, AI and so on, rely on the internet; without the internet there is no digital. It's not just about the standalone computer, it's about connecting the computers. But I would like to invite other WGIG members, old WGIG members, if they would like to add anything, and also open the floor to other participants if you have comments or questions. Juan, you would like to come in, please?


Participant: Yes, well, good morning. I'm not going to talk about the substantial achievements of the WGIG, because they were just mentioned: the working definition and the rest of the mandate. I want to talk about an achievement of the methodology, because to ask an intelligent group of persons with different viewpoints to agree on a very contentious subject seems crazy. If you have people with similar opinions, that's okay, but to get 30 people with different opinions to a result is a challenge. And it was done, and I'm going to share with you how it was done, because I think it's useful for many discussions being carried out now, and it has also been applied in other forums in which I participated. The first thing is that throughout the year we were working, we collected a lot of opinions. I think Bill mentioned all those opinions, and they were all collected in a very big document. Then at the end, because we had to have a report, we were placed into a… and they threw away the key, and you don't get out until you have a report. But we had very large material that would be impossible to collate. So the first decision was that we would have two reports. We would have a big report in which all the opinions would be collected, not necessarily a consensus report but a sort of compendium, and it was mentioned that this was very useful because many interesting ideas were there and people could take them up and materialize them later. Then we could concentrate on the actual report. Of course, there were a lot of things on which we had consensus, and we put them there. We discussed the definition, as Wolfgang just said, with some tweaks. But in the end, there were the recommendations on the arrangements for governance, and there was no consensus on the final one.
And so we decided that we would try to narrow down the different proposals to the bare basics, and we finally ended up with four different ones. I think this is a contribution of this report, and it's a methodology that can be used in discussions we're going to have now, for instance in data governance and others. Whenever we cannot get to one final consensus, because maybe that's impossible, if we can narrow down and put the basic alternatives as part of the final report, that is a result. Otherwise you can say, okay, we don't have consensus, so we have no result, and that's zero. So I think this opens the way, for the very contentious issues being discussed today, to results that could be actionable and that can really contribute. So, Markus and Bill and my colleagues, I think that's one of the contributions that we made. And I think we had no choice, because otherwise we would still be in the château 20 years later trying to get to the one and only answer. Thank you.


Markus Kummer: Thank you. It was very good wine. OK, Jovan.


Jovan Kurbalija: Thank you. Just a quick note on this point, which I think Bill and Juan brought up. At that time, you had a feeling that your input was taken care of. It was not accepted verbatim, but you had a feeling that it was considered. The major problem today is that we have so many processes which call on you to have your say, to make a contribution, and your contribution disappears in some sort of governance Bermuda Triangle. You just come with some document, and that's it. It's fine if somebody says, okay, we disagree, we cannot accept that, there is no consensus. But this is a huge problem. Ideally, we should be locked in some sort of secluded place and negotiate, as Markus did, really, as a big master of diplomacy. But nowadays we also have AI that can help us: AI can trace our contribution to the final document. Is your contribution reflected in the final document or completely ignored? It's not ideal; I prefer the human approach. But this is one great lesson, I would say, from the WGIG and the overall IGF process: when somebody asks you to have your say, to contribute, there is some sort of reflection. Roger that; agree, disagree, let's discuss it, whatever. I'm afraid this is missing now in global governance in general, and I would say also in AI and digital governance.


Markus Kummer: Thank you, Jovan, and I think in the interest of time we have to move on, and I encourage the next speakers to be as concise and compact as possible. So we go to multilateral versus multi-stakeholder, and the first speaker is Alejandro Pisanty. He's joining us remotely. Alejandro, are you on?


Alejandro Pisanty: Yes, I am on. Can you hear me?


Markus Kummer: Excellent. We can hear you. Welcome.


Alejandro Pisanty: Hi. It's 3 a.m. in Mexico City. Cheers, everybody. Thanks for the invitation, and the first round was very useful. One of the things that happened, and it's a bit of a paradox, as Jovan has said, is that we had the ears of the governments and decision makers at the UN, structurally, because this was the outcome: we were part of the process to produce an outcome for WSIS, the World Summit on the Information Society, which had been agreed by the UN General Assembly. That's something we don't have as often now, and processes like the GDC, the Global Digital Compact, seem to have been designed to make it much harder for voices like what the WGIG would have been to make even a dent in decisions that have already been made by the Secretary-General and his adjuncts and a few influential governments. Multilateral versus multi-stakeholder: I'm convinced that all problems of internet governance, and many others, are much better solved by multi-stakeholder mechanisms. The weights of the stakeholders have to be different; institutional and organizational design are key. We have to be able to involve governments decisively when things involve law enforcement, for example. Even if the involvement is informal, it has to be very enabling, as happens in the Anti-Phishing Working Group or as is happening in the upcoming Global Anti-Scam Alliance. And in other places, the governments themselves have decided to sit in the second row. That happened to us when we were doing the ICANN reform process around 2001-2003. Governments were offered seats, or at least seats at the table, on the board, and they had very good reasons not to take them.
One reason was legal: they would have had to join the liabilities of the corporation as governments. The other was that they would never come together to decide who their five representatives, or even three representatives with one vote, would be. What we have also learned from these multi-stakeholder processes is how differently stakeholders perceive what is important, what is decisive, and what's actually pretty standard. As you know, I have a scheme for translating things, for example using identity, the mass scale of the internet, crossing jurisdictions, lowering barriers, lowering friction, and managing memory and forgetting, to understand which problems are actually not created by the internet but modified or disrupted by it, if you want. In some of these cases, you actually do need a more multilateral layer in the solution of a fully multi-stakeholder problem. As for the consequences of choosing multi-stakeholder or multilateral: well, is the internet more free in the countries that propose multilateral? That's probably the acid test of why we actually need to keep pushing for multi-stakeholder, because every single country that pushes for more multilateral is also pushing against internet freedom. Thank you.


Markus Kummer: Thank you for that, Alejandro, and thank you for joining us at this ungodly hour in your country. That's very much appreciated.


Alejandro Pisanty: Thank you so much, Markus.


Markus Kummer: And we have Avri as a speaker on these issues. Please Avri.


Avri Doria: Okay, thank you. I actually very much like this subject of multi-stakeholder and multilateral, as opposed to multi-stakeholder versus multilateral. I think at this point in history they both have to coexist, and in fact we've seen that, with examples like ICANN and like the São Paulo guidelines, which basically finally talk about how we can bring the two together. Now, there was a wonderful part of WGIG: I think it's the last time we actually participated as equals, and that's one of the things I've looked for since then. In WGIG I was truly able to sit there and argue for hours against some of the roles and responsibilities as believed in by governments, and until Nitin got tired of it, it was allowed to go on. That is important. That is something that does not happen at the moment, because even in our multi-stakeholder models, even in something as vaunted as the IGF that grew out of WGIG, it's top-down. There is an authority that we all answer to in some way, that we all appeal to, that we all have to go to. So the multi-stakeholder is there; I would never say the IGF isn't multi-stakeholder, but not completely, not fully. It has a ways to develop. And WGIG does give an example, does point a way to how it can work, and we've seen other examples where it can work. It was an interesting experience for me. I came in as a techie; I knew nothing about internet governance, knew a little bit about philosophy, knew what ICANN was, but nothing about it. And I learned a lot through that process and found it really quite valuable, and it obviously changed my life. One other thing I want to point out, and I'm not sure it even fits here, is that there was one feature of the Working Group on Internet Governance that we've lost.
We had Frank March, our secretary, our main writer, sitting in the room with us while he was writing, talking to us, asking about this paragraph or that paragraph. That goes even further than the notion that now we sort of contribute our comments into a bucket somewhere and somebody may look at them, or not, or may include a word or two. We actually sat there irritating this poor, lovely man, saying, no, no, you've got to change that word; no, I need this paragraph. And he actually bore with it. So while we're all talking about the wonder of WGIG, I really wanted to bring up the wonderful example of Frank. I think that for the foreseeable future we have to work on a way to combine the two, because governments are not going to give up their multilateral, insofar as they can get beyond unilateral, anytime soon, and we can't afford to give up the multi-stakeholder model. So the two of them have to work either in contention, which is not that useful, or figure out how to operate together. And I think we've got some motion in that direction. I think WGIG started it. I think it's always good to go back to reading not only the report but, as several people said, the background report. There's so much in there to play with, and I recommend it to any student. So hopefully that answered my part of the question.


Markus Kummer: Thank you for that, Avri, and also thank you for mentioning Frank. Frank March played a very important role; I think without him we wouldn't have been able to produce the report. It is right to share a thought on him, because he sadly passed away a couple of years back, which was very sad to learn. He was really an important help for us in producing the report. And thank you also, Avri, for moving away from the dichotomy of multi-stakeholder versus multilateral. I think it was the Brazilians who always made the point that this is a false dichotomy: the two have to work together. Thank you for underlining that. Charles, you also put your name down for this segment. And can you also say if there's anything to report on what's happening in the Zoom room? Please, Charles.


Charles Shaban: Thank you. Thank you very much, Markus. I will start by saying that I was still young in the internet governance sphere; not in age, I'm still young in age. Anyway, it was really a wonderful experience to sit, as my colleagues mentioned, with the different stakeholders. To give a specific example, maybe different from what my colleagues mentioned, let's go to my current practice. Intellectual property was somehow problematic, you can say, and I think without multi-stakeholderism we couldn't have the UDRP, the uniform dispute resolution policy for internet domain names. Why? Because, as everybody knows, it brought together ICANN, WIPO, governments, different technical bodies, the private sector, lawyers, and civil society to find solutions for disputes over internet domain names specifically. So I wanted to mention this as an addition to what my colleagues already covered. And as a last sentence, since I will not talk a lot and will concentrate on remote moderation today: I agree with what Avri mentioned. Multilateralism is a very important issue, and we need to find a way to work with the different multilateral mechanisms while staying bottom-up and so on. Thank you.


Markus Kummer: Thank you very much. I think there was a time when there was real antagonism between multi-stakeholder and multilateral, with the technical community in particular wary of the multilateral, but we need to work together. Working for the diplomatic service of Switzerland, a small country, we always believed in multilateralism, because it protects the smaller countries. Multilateralism is always better than unilateralism, and I think in the current global situation we all feel strengthened through multilateralism, which does not mean unilateralism. And with that, can we move on to the last segment, which essentially looks at the IGF. Has the IGF delivered what we had hoped for? It really built very much on WGIG. As mentioned before, we had our closed sessions under the Chatham House rule, but we always opened up in between, and the day before we had an open consultation, and we thought that could be the model for the IGF; the IGF very much built on that model. So has the IGF actually lived up to our expectations, and what should we do to move forward? We have two remote speakers, Baher Esmat, who works for ICANN, and Carlos, both joining remotely, and then we have Jovan and Vittorio here in the room. Can we move to Baher? Baher, are you online?


Baher Esmat: I am. I hope you can hear me.


Markus Kummer: We can hear you loud and clear. Excellent.


Baher Esmat: All right. Thank you, Markus, and hello, everyone. I'm pleased to take part in this session and to contribute to the discussion alongside my WGIG colleagues. Today we're almost 19 years into the IGF, and I think it has been the primary global multi-stakeholder forum for discussing internet governance issues. It filled a gap that was identified by the WGIG members 20 years ago. It provided a space for open discussion among all stakeholders, from governments, the private sector, civil society, the technical community, and academia, all on an equal footing. It has also contributed a very important element, especially for those coming from the developing world, which is capacity building. And today, with the numerous regional and national IGFs and other similar platforms, from schools of internet governance and so forth, this shows the impact of the IGF over the years. Another point I'd like to make quickly, which some of the previous speakers have touched on, is how the IGF has evolved over time. The IGF has been continuously evolving over the past years. We've seen this in many aspects, from trying to improve its outcomes, whether in the form of messages or reports, to the topics and issues being addressed during the meetings. As someone noted earlier, in this debate between internet governance, digital governance, and those definitions, I think it is because of the way the WGIG approached the issue of defining internet governance: the definition itself is broad enough to encompass a lot of issues, most of which were not even foreseen and did not exist 20 years ago. So the IGF has evolved in its agenda over the years.
And we've seen many topics that were not on the WGIG radar in 2004-2005. AI is the most popular, but there are many, many others. I think this evolution is one of the key characteristics of the IGF. The other characteristic that has been debated over the years is the non-decision-making nature of the IGF. While some have argued that this is one of the weaknesses of the IGF, personally I believe it's one of its strengths. It was not a bug in the system; it was intentional, by design, to make it an open and non-decision-making forum that allows everyone to contribute and to participate on an equal footing. Now, looking to the future, and this is my last point: as we continue to consider how to evolve, improve and strengthen the IGF, I believe that its financial stability and sustainability are key. For the IGF to continue to serve as the global internet governance forum, we need the minds of all the participants and contributors to come together and consider more innovative ideas that guarantee, or at least put forward, a sustainable model for funding and supporting the IGF so it can continue its role at the global level. Thank you.


Markus Kummer: And the next speaker is Carlos Afonso. Are you joining us from Rio?


Carlos Afonso: Yes, I am joining from Rio. Let me turn the camera on. Yes, I am there.


Markus Kummer: It’s great to see you, Carlos.


Carlos Afonso: So I find it a bit complicated to find new things to say about the report that we did in 2005. It was already mentioned, very importantly, that Frank March did a beautiful work of patience, listening to us and trying to synthesize everything. It was really great, and the report is much, much more than the definition. I think we produced a definition simple enough to stand the test of time, but it is still very simple in relation to all the complex issues we are facing. The report was so important because it identified four key public policy areas, which are still the main public policy areas whatever the development of the internet, and within these four policy areas we managed to detail 13 fundamental issues which are valid until today. So I think that, more than the definition, the report is a very good reference; the report itself is standing the test of time. And this is great. I think this is the main contribution we could make in that group at the time. Thank you.


Markus Kummer: Thank you very much, Carlos. Great to see you. Now back to the room, and we have Jovan and Vittorio, who will also address this issue. Jovan first.


Jovan Kurbalija: Thank you, Markus. Well, it's party time in a way, and I don't know if it is good to bring in some controversies, but since we will have four days to chat in the corridors and during the coffee breaks, I will propose three points about the future of the IGF, which may require a great contribution from all the people in the room and the former members of the working group. The first one, and thank you for parking it, is the false dichotomy of multi-stakeholder versus multilateral. If you look carefully at the Tunis compromise, you will see that the IGF is a masterpiece of compromise, putting a multi-stakeholder body under the UN umbrella. Both camps got something: the pro-governmental camp got IG issues into the UN context for the first time, and the multi-stakeholder camp got multi-stakeholder participation. That formula, unfortunately, will have to be revisited; it cannot stay fixed in stone. As Executive Director of the UN High-Level Panel, I tried to argue for IGF Plus. And one point, and here I will open a controversy, is that I argued that the famous, very controversial, big-elephant-in-the-room question of enhanced cooperation should be brought in as one of the tracks on the first day of the IGF, with governments, civil society and businesses discussing it together. We may not call it enhanced cooperation; we may call it enhanced coordination, just to make for smoother consumption. But I never understood why it wasn't possible. I understand the political positioning elements of it, the more diplomatic aspects, but at its core it was a very simple solution to pick up that last bit of the Tunis formula, the bit of salt which, I think, a British diplomat brought on the 18th of November during the negotiations as a solution to complete the package. That's controversial, and I'm sure there will be many questions. The second point is capacity building, and that was a great achievement. And here is a personal story: when I started doing IG, my friends asked me, what are you doing?
And when I told them what I was doing, they would call me to fix their printers, to install their software and things like that. I usually did it; it's great to help people. But it inspired me to write a book, Introduction to Internet Governance. Fast forward: I wrote the last edition eight years ago and said there was no need anymore, but people convinced me, and I'm preparing the eighth edition, which will be presented on Wednesday. The question was, should I still call it Internet Governance? If you come to that discussion, you will see there is a reason to keep it as Internet Governance: the issues are the same. People are not asking me anymore to install their printer; they're now asking me where their knowledge is, who is basically monopolizing their knowledge. The discussion has moved on. And that dynamic, of me writing the book every year and now revisiting it, is a great diary of the Internet Governance Forum and its achievements. The third point, which Bill mentioned, is extremely important: it is the modus operandi of the IGF. Sometimes we underestimate it. And I'll give you one point, again a personal story. Fifteen years ago I went to the IGF, I think; well, you will find the date. And you know how it goes: you come on the first day with big ambitions, great speakers, great workshops, you want to follow it all. And after the first morning you realize that you cannot do it, and you end up in the cafeteria meeting friends, chatting. And there is always that feeling of missed opportunities, of having missed something. This is how Diplo's reporting started, first with humans, our former students and interns, and now with the help of AI. And there has been that incremental development of new methodologies; now, when everything is here and now, we have AI, so let's install AI agents.
But that incrementalism in all aspects of the IGF's work, capacity building, bringing consensus, involving other people from our side, this reporting, I think is a great legacy of the IGF, and on that legacy we should build its future. So, three points. First, revisit the Tunis formula; we need to do it. Second, continue capacity building; that's one of the great achievements of the IGF. And third, talk more about the modus operandi of the IGF. We are, I would say, too shy about it, and it is an untold story of high relevance for the broader governance and other communities. Thank you.


Markus Kummer: Thank you, Jovan. And last speaker is Vittorio.


Vittorio Bertola: Thank you. So, as the former youngest member of the WGIG, and again the least diplomatic, I think I will also have to start some controversy. I think we're really at the point in time where we need to think of the future of the IGF, so we also need to look at what worked and what didn't. I'd start from the first half of the problem, which is the practical way of working of the IGF. Overall, we like this event, so we continue coming; I think everybody finds value in the IGF, and I think it should continue. But I think it could be better. Especially this year, I keep meeting people who are disappointed because they have been trying to propose a panel or workshop for several years now and never get accepted, even when they meet all the criteria, because there's, I guess, a sheer lack of space. There's a limit to how many sessions you can have, but we have to find a better way of mobilizing these energies, because otherwise people, especially outside the circle of regular participants, try for several years, get disappointed, go away and say, oh, the IGF is just smoke, just for the insiders. The other thing, which I think would also help solve this problem, is a much better working of the national IGF initiatives. Our experience with the Italian IGF is terrible. Six or seven years ago it was captured by the then government and used by a politician for self-promotion. There were no meetings anymore; it didn't even happen for several years. And now it's still in the hands of the government, a new government. In six years we have had something like seven governments of multiple colors, but still every government keeps it, and now they're organizing maybe one this year. Again, it's multi-stakeholder only in the sense that there are multiple ministries involved.
So I think that we need to address these kinds of things, because it could make the credibility, I mean, especially of the bottom-up process, much better. But then the other part of the question is about the role of the IGF, and the question in the program is, did it meet the purpose that we thought it would meet? And to be honest, I think that while the narrow part of the definition worked well, and nobody's really unhappy with the governance of the technical resources, the broader part of the definition didn't work very well. So we are now at a state, I mean, 20 years ago, we believed the Internet would bring democracy and wealth, and it was an instrument for progress. And nowadays, I already quoted this: one month ago, there was a survey in the UK, and they asked young people, would you be better off if the Internet didn't exist? And half of them said yes. So half of them think that the world would be better without the Internet. And this is really, really terrible for us who worked to create it and make it a mass instrument. And I think what failed is, I mean, we were naive. We thought that by putting everyone together, we would be able to address the economic and social questions, and this didn't happen, especially because of the private sector, I have to say, and I mean, we're part of the private sector. The people that could make money out of breaking down the internet and turning it into walled gardens, they just went on and made money. And nobody could stop them because we had no stick. We had carrots, I mean, to come in, but we had no stick to force them. And this is exactly what caused now the transition of countries and regions like the European Union, which have always been in favor of multistakeholderism and open governance, to a new, hard-jurisdiction, hard-law approach. And this is why I think we are getting now also pressure from multilateralism.
And to be honest, I don’t like the idea of more multilateralism, but I also don’t like the idea of continuing with a few very big companies that are doing whatever they want over the internet. And so maybe the national level will be more important. I don’t have an answer on how to build a new balance between all these stakeholders, but indeed there needs to be a reflection which I think includes the IGF as a continuing entity, but also takes into account that by now, I mean, governments really have the need to put some hard rules over global businesses and that this is the tension that is not going away. Thank you.


Markus Kummer: Well, thank you, Vittorio, and also thank you for asking tough questions. I see there are already reactions. Charles, do you have?


Charles Shaban: Maybe Aisha first, but there is a question online.


Markus Kummer: Well, why don’t you go ahead with the question?


Charles Shaban: Okay. In fact, there were some discussions, and Bill already answered some of them, but I think one of the questions online is still without a response, from Shaima Akhtar, Chairperson, Bangladesh Women IGF: given the rise of AI, surveillance technologies and the dominance of platform monopolies, should we now push for a more intersectional and rights-based approach to defining digital governance, one that genuinely centers the lived realities of women, youth, and marginalized communities?


Markus Kummer: Thank you. And there were hands up, Aisha and Avri, yes.


Ayesha Hassan: Just briefly, I wanted to build a bit on what has been said by other members here. I think that the IGF is unique; Markus, you've said this, a watering hole. And it has evolved over 20 years; there are many ways in which each year it's a different experience. And I think that the range of participants has expanded in all of the different stakeholder groups. So as we look forward, I think it's about encouraging the topics that are important to people to be taken up here, whether in the workshops or in the main sessions. And I was very pleased this year to see resilience being there, because I believe that resilience is a challenge of the future. And it's not just about how to survive one shock; it's about how you build capacity across the layers, and how this is worked on across stakeholder groups to raise awareness about it. Now, unfortunately for the young people who wish that the internet didn't exist, it does. And our economies, our political systems, social life, everything depends on this wonderful thing called the internet. So I just wanted to say that I think an issue for everyone to come together on now is also resilience: how do we keep this as reliable and secure as possible? And lastly, a shout-out to you, Jovan. I'm still waiting for the new edition of the puzzle. Do you remember your puzzle?


Jovan Kurbalija: It’s coming. It’s even more complicated.


Markus Kummer: Thank you, Ayesha. And now Avri.


Avri Doria: Thanks. First of all, I want to thank the comment that came in. And one of the things I want to remind us all of is there was a time in the IGF when we couldn't even mention the word or the notion of rights of people, when we had long, long battles about, my God, rights belong to some other department within the UN, you can't talk about rights. And fortunately, we've gotten past that, though we're still not doing a lot of it. And so the questioner is right: when can we start talking about them more? Any notion we have that the IGF is bottom-up is something that we should quit pretending. It is not. It hasn't been. And I'd love to see it bottom-up, but it isn't. You don't get to be bottom-up by having an occasional consultation that you ignore. We are a tribe of groups. We come here in our tribes, and we argue for our points. But we've been told what points we're going to be able to argue by those who control it. So I love the IGF, I'd love to see it continue, and I'd really like to see it become bottom-up.


Markus Kummer: Thank you. Charles?


Charles Shaban: Not from me. Alejandro raised his hand.


Markus Kummer: Alejandro, want to come in?


Alejandro Pisanty: Yes. Thank you. Complementing what Avri has just said, we've been looking at the future of the IGF. Another type of feedback I get is that we must make sure that it doesn't become like a rite come winter. We have to make sure that we engage substantively in these discussions the people that are decision makers, and not only commercial representatives of businesses and higher-level officials, so that they don't stay apart in their own corrals. We really have to get to have this conversation substantively. It has to be bottom-up, but it has to reach whoever else is in the geometry. I wouldn't say they are on the top, but they are at the center of effective political decision-making, and we have to make sure that there's an engagement with them; otherwise it's too much corridor and too little really multi-stakeholder engagement. Thank you.


Markus Kummer: Thank you and I would also like to open the discussion to the floor. There are microphones on the side so the easiest way will be if those who have a comment or a question just align themselves behind one of the microphones. Yes, Raul, please. Okay, please. Okay, thank you.


Jimson Olufuye: Good morning, everybody. Can you hear me? Yes, we can hear you. Okay, first and foremost, my name is Jimson Olufuye, Africa ICT Alliance; I've been in the ecosystem for quite a while. I want to salute all our forerunners. Great job you've done, and we are still at it. First, I want to talk about the last point made about bottom-up. Actually, we need to understand that it has always been top-down, so it's going to take a while before it becomes bottom-up, unlike ICANN; ICANN by design is bottom-up. So we need patience, we need perseverance, and we need to continuously engage and talk about it. And that brings me to the second point, about multilateralism and multi-stakeholderism. I favor, I love multi-stakeholderism. I love it. It makes life easy. Everybody claims ownership, has ownership, you know, of the common issues for the society, so it's the way to go. But I have an experience, too, that I will just share briefly. During the WGEC, that is, the Working Group on Enhanced Cooperation, a representative accused us in the private sector of wanting to take over the work of government. So I had to explain that indeed, no, we are helping the government to fulfill their objective, their responsibility to the citizens. And after that explanation, I think the accusation stopped; I never heard about it anymore. So I think we need to continuously engage and explain clearly our intention. Our intention is a better society, an information society where nobody is left behind. And when we all work together, we will achieve together, and the government will achieve their purposes. Then thirdly, this is a question now, with regard to Jovan. Jovan made a very important comment about the issue of consensus. Look, I recall the conclusion of the WGEC in January 2018. We could not have consensus because we were expecting 100% consensus.
But if we had adopted what is said now, that is, rough consensus, or near consensus, 99% consensus, maybe we would have had a firm report that says this is the outcome of this group. So the question is, if we had had that report based on what you have said, do you think the follow-up, the Summit of the Future, would still have happened? Because by July, the Secretary-General set up the High-Level Panel on Digital Cooperation, and that led us to this, because the WGEC failed. Thank you.


Markus Kummer: Thank you. Shall we go to the microphone behind, and please introduce yourself?


Israel Rosas: Yeah, thank you. Israel Rosas, with the Internet Society. Just a brief question, and Jovan hinted at it a little bit. If you were to give a single piece of advice to the WSIS plus 20 facilitators on how to reach or generate consensus for this process, for the outcome document they are drafting, what would you recommend to them? That's the only question that I have.


Markus Kummer: Okay, that's a short question, and I'll take it back to our panellists. And now, Anriette.


Anriette Esterhuysen: Thanks, Markus. Anriette Esterhuysen, past MAG member and past MAG chair. Do you think the IGF is ready to actually handle controversial issues? I support what Jovan is saying; we've spent so much time with this fear of putting enhanced cooperation on the agenda. Surely the IGF is mature enough now to be able to do that? This fear of putting controversial issues on the agenda, such as fair tax payment by big tech companies: do you feel the IGF is mature enough to be able to do that? And I say that having worked through, as someone said, the years it took before we could actually talk about human rights at the IGF. It took years before we could talk about LGBT issues at the IGF. Is the IGF finally mature enough to have a strategy which includes and facilitates debate on controversy, as opposed to a strategy based on avoiding any kind of risk or controversial topic?


Markus Kummer: All important questions and again back to the second microphone, please, yes.


Audience: Hi, I'm Nandini. I'm from IT4Change India and part of the civil society coalition Global Digital Justice Forum. The panel made a lot of very insightful observations about how the relationship between multilateralism and multi-stakeholderism should not be seen as antagonistic when democratic governance is the bottom line we want. My question is: in recent years we have seen a lot of digital governance issues, data governance issues in particular, being taken out of democratic spaces and into very closed-door multilateral spaces such as digital trade negotiations. So how does the panel think we could use the digital cooperation mechanisms available to us to counter this? Because even as the GDC processes for data governance are ongoing, we still see those issues, and many new bottom lines, being sealed in trade deals, and sometimes even in regional, closed-door plurilateral deals. Thank you.


Markus Kummer: Thank you for the question and Bertrand, please.


Bertrand de la Chapelle: Good morning, my name is Bertrand de la Chapelle. I'm the Executive Director of the Internet and Jurisdiction Policy Network. Two comments. One, I'm very happy that both Juan and Jovan and others made a reference to the methodology and the way of working. What was striking, from everything I understood of the working of the WGIG, was this interaction between people, and Avri was mentioning it as well, and indeed the role of the Secretariat, which had the capacity to make a summary and to present not just one version watered down to get consensus, but something that says there are different options. That was extremely important. The methodology was important, and it could be taken into account for the IGF itself, because as Jordan Carter was saying in another session earlier, the IGF is not supposed to make decisions. It is, in my view, to help frame the issues, to bring the different actors around one topic, so that instead of having different sessions addressing the different points of view on the same topic, the different actors can have the kind of interaction that you had within the WGIG. And second, a very quick question. The WGIG was a way out of some sort of roadblock at the end of the first phase of the WSIS. It created what all of you have said, and what I believe is still today, the most multi-stakeholder process that has taken place in 20 years. We are now stuck with the question of what the future of the IGF is going to be, and I personally do not believe that the WSIS plus 20 process until December is going to solve the question of what the next stages are.
So my question is, do you think there would be a benefit in having a new exercise of that sort, a new WGIG after December, or call it, as we were discussing yesterday, a CSTD working group, whatever form of multi-stakeholder discussion on the revision of the mandate of the IGF after 20 years, and on the institutionalization of this organization, which should now become a mature organization with funding and processes?


Markus Kummer: Thank you. Food for thought. And I see there are various colleagues who put their hand up. I think Raul was first, then Jovan. Alejandro. Oh, sorry, Alejandro was already in the waiting room. Yes. Alejandro, please.


Alejandro Pisanty: Thank you. Quick replies to Israel and to Anriette, part of which has already been given by Bertrand. Consensus is not necessarily an objective of the IGF. It's more like a conveying of different views, reminding ourselves also that it has to be non-duplicative. So if there's a forum, like satellites in the ITU or what have you, then it's better to continue the discussion there, but after framing it and bringing in new stakeholders that would otherwise be excluded. And to Anriette, and I think you'll agree, the IGF is mature enough for lots of much more controversial issues. The ones that are not mature enough are some of the stakeholders. I would say particularly some governments that would prefer to continue power games in closed venues, or let's say government-only venues, where the politics is more like, you know, I'll trade you some internet governance for some oil or some water rights. So we have to make sure that they are as mature as the forum. Thanks.


Markus Kummer: Thank you. Now we have Raul.


Raul Echeberria: Yes, thank you, Markus. I cannot address all the points that were brought up, so I will pick a couple of them. With regard to the maturity of the IGF, and aligned with what Alejandro said: yes, the IGF is mature enough. I remember that in 2013 the IGF was the first place in the world where we had an open discussion about surveillance after the Snowden revelations, and it was a high-level panel; I think it was very good. Yeah, and I organized it.


Jovan Kurbalija: It was one of the most difficult moderations. Yeah. I thought you were my friend.


Raul Echeberria: So the point is, I think the challenge is the commitment of the stakeholders to have those discussions. The tool is good and it's mature enough, but the point of failure is the commitment of stakeholders to come and engage. And to be honest, and I think all of us agree on this point, this is not the best moment for international cooperation, so I don't think that we can be very optimistic, at least in the short term. With regard to what Israel asked, what we would suggest to the facilitators, I'm not sure if you were talking about the work toward the December recommendations, or the implementation, or the recommendations themselves, but I think the answer is exactly the same, and it's what Bertrand mentioned. With regard to the work toward the WSIS plus 20 evaluation, I think that the process has not been participative enough, and it would have been good to have an exercise like the WGIG to evaluate and make the recommendations. But we still have time to include that as part of the recommendations for the future. So I think that the idea could be the same. Thank you.


Markus Kummer: Thank you, and Wolfgang and Avri as well. Okay, well, let's just take a round and start with Wolfgang.


Wolfgang Kleinwachter: Okay, yeah, the short answer to Bertrand's question is yes, certainly, because we will see in December a situation where there is no consensus. And always, you know, the best thing in such a situation is to delegate the open questions to a working group. In Germany we say, if you don't know how to go further, start a working group. So that means, if you are in a helpless situation, establish a working group and wait for the future, because this is not the time to reach an agreement. That's my pessimistic preview for December. But I want to comment on another issue. When we discussed the IGF and the mandate, there was really the intention not to give the IGF a decision-making capacity, to give it a discussion role only, because this has opened the minds, opened the ears, opened the eyes of everybody. On equal footing, everybody can talk to everybody. But you need, at least in the multi-stakeholder context, four for a tango. And that means: we didn't want to have a government-controlled internet, but we are also rejecting a business-controlled internet. That's why the academic and civil society community and the technical community were seen as an important part, to bring in all perspectives. What we see now is indeed that we are coming into a situation where we have either the tech oligarchs or the governments who want to manage this. When Mark Zuckerberg made the announcement in early January that he would stop content moderation, my first reaction was: Mark Zuckerberg should come to the IGF and explain his decision to the pro-internet community. So I think this is what we need. We have to have the decision-makers at the table for discussions. And then they can go home and make the decisions where they have a mandate, in their own corporations. But it's part of the accountability system also for the rulers of the internet of today. And this meets exactly what Vittorio has said. Thank you.


Markus Kummer: Thank you. We have one participant behind the microphone. Are you queuing behind the microphone? Two behind the microphone and then we go around here. But can you be very short? One minute?


Hadi Alminyawi: Yes, sure. This is Hadi Alminyawi. Following up on some of the controversial questions and on this discussion: the IGF, as has been discussed, is a forum to discuss issues and frame them so that they can be discussed in a more formal way, with decisions and recommendations made in the ITU. And that raises a question: why would governments actually participate, take a role, or even be interested in the discussions taking place in the Internet Governance Forum? I've heard others saying, well, governments would like to manage the IGF. Why would they even be interested in managing the IGF if all we do here is discuss the issues, frame them, and then move them to the ITU to be discussed in a multilateral form, where the decisions are made? So yeah, this is my question. Thank you.


Markus Kummer: Sébastien?


Sébastien Bachelet: Usually I come to a microphone and I speak in French, but it seems that it's not possible here. Sébastien Bachelet, ISOC France and EURALO. I just wanted to ask two questions. The first one: how many government representatives are in the room? Because they would learn a lot. The second question: shall we already book the castle for you for the next discussion? Thank you.


Markus Kummer: Thank you. French castle. Okay, now we go back to the panel. And we have five minutes left. So okay, we start with Avri.


Avri Doria: Thank you. Yeah, I have a couple of comments. I actually had, Sébastien, that same question. I was looking around, trying to figure out how many might be government and such. I won't ask for a raise of hands, though I'd love to be able to. But yeah, we have a couple at the table. Anyway, I wanted to come back to people talking about the models. I think one of the things that happened with the Working Group on Internet Governance, to get back to that, is that the whole discussion about multi-stakeholderism and multi-stakeholder models really began in earnest. There were certainly examples of it before then, but not the model discussion. And at this point, we really have to recognize that the model that we have at the IGF is just one example of a way to do it. We now have many other models. We have to do more work, and it could even be a good thing to do at the IGF, on understanding the pervasive number of models. The other thing we have to understand: it was said that the IGF does not make decisions, but there are multi-stakeholder models that do make decisions. For example, ICANN was brought up. But there you have a case of marrying multi-stakeholder, bottom-up policymaking with a top-down corporation doing its thing. And that marriage itself is really quite a fascinating thing that needs studying. So there are an immense number of models. And my last thing was on advice: it's patient perseverance.


Markus Kummer: Thank you. Juan?


Participant: Yes. I didn't want to talk, but I want to react to what you said, because maybe I'm the only government representative in the room. And I can say why there's no interest from governments in coming to the IGF. One of the things that we discussed in the WGIG was the roles and responsibilities of stakeholders; we need to define the roles and responsibilities of the processes that surround WSIS, and that is not very well understood. I found out by experience that the IGF is fantastic as an agenda setter, at framing the issues. And maybe I'm not agreeing with Avri; I think the bottom-up can still be improved, but there is some bottom-up, because the workshops are proposed from the bottom, and also now we have the NRIs, the National and Regional IGF initiatives. So the IGF is good at bringing to the table problems and issues that people may not be aware of, not even the governments of those countries themselves, because of the candor with which they are presented during the IGF, especially in the IGFs that have been held in developing countries: the two that were in Brazil, the one in Mexico, the one in Kenya. So I think that's a good thing. What we need, Bertrand, after this WSIS plus 20 process, is to really define what the role of the IGF is. The IGF has to be the agenda setter; that's the role of the IGF. And the WSIS Forum, the action. And that's it.


Markus Kummer: Thank you. There may be others, a very last word.


Jovan Kurbalija: Maybe a quick comment on Jimson's point. Jimson, why are governments not coming? They are not finding answers to their questions. That's it. 193 member states are easier than a few thousand members of the IGF. Now, we should not do government bashing. As Jimson said, we should explain why they can benefit. Bottom-up: let's use it with AI. This is now a crucial battle. Can we preserve our knowledge through bottom-up AI? Not even Internet governance. This is a critical battle which is happening now and here. Are we ready to start these discussions?


Markus Kummer: Last 50 seconds.


Jovan Kurbalija: That’s it.


Markus Kummer: Bill?


William J. Drake: All right. Well, obviously, not enough time to say anything. On controversial questions, Anriette: I would just say, remember how much time it took to talk about so-called critical Internet resources. The first couple of years of the IGF, everybody was so stressed we couldn't even begin to have the conversation. One way to strengthen the ability of the IGF to actually take on these issues is to give us a permanent mandate. If everybody would stop worrying that somehow the mandate would be snatched back from us if we do anything wrong, then maybe we could get into things more. I also want to say quickly, to the woman who mentioned the trade questions: trade has been brought up in the IGF. It's been very hard to talk about trade issues in the IGF, because trade people don't understand this space or care much about it, and often it's hard for people in this space to get their heads around how things work in trade. Last point, on the new group idea. Rather than a new group to talk about the IGF's future narrowly, I think we need a group that thinks about the relationship between Internet governance, data governance, AI governance, and so on: digital governance. There's a lot of confusion. There's just an enormous amount of conceptual gunk out there, and this translates into proposals to do things like changing the name of the IGF. I think we need to get these issues sorted out and take into account what's already been discussed and learned by people out there in the field who do these things professionally. By the way, I can't think of one thing the ITU has done on Internet governance. I don't see the IGF as just making inputs to the ITU.


Markus Kummer: Thank you. Well, thank you all for contributing, the participants and the panelists, and what you could see is that 20 years after, we still talk to each other and we’re still friends, so that is a lasting legacy. Please join me in giving all the panelists a big hand, and thank you all.



William J. Drake

Speech speed: 185 words per minute

Speech length: 1639 words

Speech time: 530 seconds

WGIG demonstrated benefits of multistakeholder cooperation in UN context and legitimated this approach

Explanation

Drake argues that WGIG showed for the first time in the Internet governance space that multistakeholder collaboration could be effective and problem-solving within the United Nations framework. This legitimated the multistakeholder approach after early WSIS stages where stakeholders were being locked out of rooms and told not to speak.


Evidence

Early stages of WSIS process where stakeholders were being locked out of rooms, thrown out of rooms, told not to speak


Major discussion point

WGIG’s Historical Impact and Contributions


Topics

Legal and regulatory


Agreed with

– Markus Kummer
– Raul Echeberria
– Wolfgang Kleinwachter
– Participant
– Avri Doria
– Jovan Kurbalija

Agreed on

WGIG’s lasting impact and successful methodology


WGIG facilitated WSIS negotiations by providing systematic mapping and structured discussion of issues

Explanation

Drake explains that WGIG brought order to discussions that were previously all over the place and not cumulative. The group did systematic mapping and worked through issues in a structured and methodical basis that laid out main issues in an understandable way.


Evidence

For first couple of years in Geneva cycle, people were saying conversations were all over the place, not cumulative, going nowhere


Major discussion point

WGIG’s Historical Impact and Contributions


Topics

Legal and regulatory


Agreed with

– Markus Kummer
– Raul Echeberria
– Wolfgang Kleinwachter
– Participant
– Avri Doria
– Jovan Kurbalija

Agreed on

WGIG’s lasting impact and successful methodology


WGIG promoted public engagement through innovative processes like public comments and transparent procedures

Explanation

Drake notes that WGIG did many innovative things that are now taken for granted but were new in the UN context at the time. These included public comments, open transparent processes, everything on the web, and simultaneous translation in sessions.


Evidence

Public comments, open transparent processes, having everything on the web, having simultaneous translation in sessions – all things IGF now does but were new in UN context then


Major discussion point

WGIG’s Historical Impact and Contributions


Topics

Legal and regulatory


WGIG demystified internet governance by establishing that governance does not mean government

Explanation

Drake argues that WGIG was able to work through the concept of internet governance and demonstrate that governance does not mean government. They showed the need for a holistic, broad approach that included not just infrastructure but also internet use and rule systems.


Evidence

Previous debates about whether internet governance existed as a concept, or if it just meant what ICANN does or what intergovernmental agencies like ITU should do


Major discussion point

Nature and Evolution of Internet Governance


Topics

Legal and regulatory


Agreed with

– Ayesha Hassan
– Wolfgang Kleinwachter
– Carlos Afonso

Agreed on

Internet governance definition’s enduring relevance and broad applicability


WGIG developed holistic analysis and working definition that took broad approach to internet governance

Explanation

Drake explains that WGIG developed a working definition drawn from political science literature on international regimes that set out who does internet governance, what it consists of, and where it’s done. This took attention off debates about ITU taking over ICANN by taking a broad approach.


Evidence

Working definition drawn from political science literature on international regimes; de-centered the ITU controversy by taking broad approach


Major discussion point

Nature and Evolution of Internet Governance


Topics

Legal and regulatory


WGIG proposed creation of Internet Governance Forum to continue dialogue without negotiation pressure

Explanation

Drake states that WGIG proposed creating an Internet governance forum as a permanent space attached to the United Nations where open discussion could continue without the pressure of negotiating outcomes. This was to help solve deadlocks occurring around internet governance.


Major discussion point

WGIG’s Historical Impact and Contributions


Topics

Legal and regulatory


New working group needed to address relationship between internet governance, data governance, and AI governance

Explanation

Drake argues that rather than a new group to talk about IGF’s future narrowly, there’s need for a group that thinks about relationships between internet governance, data governance, AI governance and digital governance. He notes there’s enormous conceptual confusion that translates into proposals like changing IGF’s name.


Evidence

Proposals to change the name of the IGF and conceptual gunk in the field


Major discussion point

Contemporary Challenges and Future Directions


Topics

Legal and regulatory


Disagreed with

– Bertrand de la Chapelle
– Jovan Kurbalija

Disagreed on

Future governance structure needs


M

Markus Kummer

Speech speed

143 words per minute

Speech length

1793 words

Speech time

747 seconds

WGIG report found its way into final WSIS outcome and significantly impacted the process

Explanation

Kummer emphasizes that at a basic level, the WGIG report was successfully incorporated into the final WSIS outcome, representing significant impact. He also notes there was a significant difference between the Geneva and Tunis phases, with Tunis being far more open procedurally.


Evidence

Significant difference between Geneva phase and Tunis phase of WSIS, with Tunis being more open; ICANN community was present and consulted in Tunis


Major discussion point

WGIG’s Historical Impact and Contributions


Topics

Legal and regulatory


Agreed with

– William J. Drake
– Raul Echeberria
– Wolfgang Kleinwachter
– Participant
– Avri Doria
– Jovan Kurbalija

Agreed on

WGIG’s lasting impact and successful methodology


Internet governance encompasses anything related to use and abuse of internet, including new applications that rely on internet

Explanation

Kummer explains that the Tunis agenda has a whole chapter on internet governance making clear that anything related to internet use and abuse is part of internet governance. He argues that new applications like AI rely on the internet, so digital governance is fundamentally about connecting computers.


Evidence

Tunis agenda chapter on internet governance with explanatory paragraphs; AI and other applications rely on internet connectivity


Major discussion point

Nature and Evolution of Internet Governance


Topics

Legal and regulatory


Multilateralism protects smaller countries and should not exclude multistakeholderism

Explanation

Kummer argues from his experience working for Switzerland’s diplomatic service that multilateralism is always better than unilateralism because it protects smaller countries. He believes that in the current global situation multilateralism needs to be strengthened, but that this does not exclude multistakeholderism.


Evidence

Experience working for diplomatic service of Switzerland as a small country


Major discussion point

Multistakeholder vs Multilateral Governance Models


Topics

Legal and regulatory


Agreed with

– Alejandro Pisanty
– Avri Doria
– Charles Shaban
– Jimson Olufuye

Agreed on

Need for multistakeholder and multilateral approaches to coexist rather than compete


A

Ayesha Hassan

Speech speed

167 words per minute

Speech length

567 words

Speech time

203 seconds

Internet governance definition has stood test of time and adapted to new technologies like AI governance

Explanation

Hassan argues that the WGIG definition has been nicely shaped and adapted over time, remaining valid today. She believes the discussion started 20 years ago has evolved to address new issues like AI governance and other technologies through adaptable approaches.


Major discussion point

Nature and Evolution of Internet Governance


Topics

Legal and regulatory


Agreed with

– William J. Drake
– Wolfgang Kleinwachter
– Carlos Afonso

Agreed on

Internet governance definition’s enduring relevance and broad applicability


Internet governance has expanded to include emerging countries and new technologies while maintaining core discussion framework

Explanation

Hassan notes that internet governance has expanded as many emerging countries use the internet for communication and many new issues are being discussed. The framework established 20 years ago provides adaptable ways to address these issues through partnerships and cross-stakeholder cooperation.


Evidence

Many emerging countries using internet as means of communication; new technologies and AI governance discussions


Major discussion point

Nature and Evolution of Internet Governance


Topics

Development


Disagreed with

– Vittorio Bertola
– Baher Esmat

Disagreed on

IGF’s effectiveness in addressing broader internet governance issues


Resilience across internet layers should be priority for future stakeholder collaboration

Explanation

Hassan emphasizes that resilience is a challenge of the future, involving building capacity across layers and working across stakeholder groups. Since economies, political systems, and social systems depend on the internet, keeping it reliable and secure should be a priority for all stakeholders to collaborate on.


Evidence

Economies, political systems, social systems all depend on the internet


Major discussion point

Contemporary Challenges and Future Directions


Topics

Infrastructure


R

Raul Echeberria

Speech speed

148 words per minute

Speech length

801 words

Speech time

323 seconds

WGIG strengthened concept of multistakeholderism and consolidated idea that all stakeholder participation is crucial

Explanation

Echeberria argues that WGIG was an innovative experience that strengthened multistakeholderism as a concept. He notes that while not all current internet governance organizations are pure multistakeholder models, most are now open to participation of all stakeholders, which is crucial for the future.


Evidence

Organizations involved in internet governance today are not all pure multistakeholder models but most are open to all stakeholder participation


Major discussion point

WGIG’s Historical Impact and Contributions


Topics

Legal and regulatory


Agreed with

– William J. Drake
– Markus Kummer
– Wolfgang Kleinwachter
– Participant
– Avri Doria
– Jovan Kurbalija

Agreed on

WGIG’s lasting impact and successful methodology


W

Wolfgang Kleinwachter

Speech speed

141 words per minute

Speech length

1109 words

Speech time

469 seconds

WGIG created unique culture of collaboration where every stakeholder brought different expertise to the table

Explanation

Kleinwachter describes how WGIG fostered mutual recognition that every stakeholder could bring different expertise to the table. He gives an example of an ambassador learning about IP addresses from Paul Wilson, which started the understanding that different stakeholders have valuable knowledge to contribute.


Evidence

Anecdote about ambassador asking ‘what is an IP address’ and Paul Wilson from APNIC explaining, leading to ambassador’s appreciation


Major discussion point

WGIG’s Historical Impact and Contributions


Topics

Infrastructure


Agreed with

– William J. Drake
– Markus Kummer
– Raul Echeberria
– Participant
– Avri Doria
– Jovan Kurbalija

Agreed on

WGIG’s lasting impact and successful methodology


WGIG definition with multistakeholder, collaborative, and holistic approaches is universal and applicable to AI governance

Explanation

Kleinwachter argues that the WGIG definition’s three basic elements – multistakeholder approach, collaborative approach, and holistic approach – are universal and can be used for all governance aspects discussed today. He specifically notes that AI governance requires the same three approaches as internet governance.


Evidence

AI governance has to be multistakeholder, collaborative, and holistic, just like internet governance; confusion about AI governance mirrors confusion about internet governance 20 years ago


Major discussion point

Nature and Evolution of Internet Governance


Topics

Legal and regulatory


Agreed with

– William J. Drake
– Ayesha Hassan
– Carlos Afonso

Agreed on

Internet governance definition’s enduring relevance and broad applicability


P

Participant

Speech speed

145 words per minute

Speech length

825 words

Speech time

340 seconds

WGIG’s methodology successfully brought together 30 people with different viewpoints to agree on contentious subjects

Explanation

The participant explains that asking 30 intelligent people with different viewpoints to agree on very contentious subjects seemed crazy, but WGIG succeeded through a specific methodology. This approach has proven useful for many subsequent discussions and has been applied in other forums.


Major discussion point

Governance Methodology and Process Innovation


Topics

Legal and regulatory


Agreed with

– William J. Drake
– Markus Kummer
– Raul Echeberria
– Wolfgang Kleinwachter
– Avri Doria
– Jovan Kurbalija

Agreed on

WGIG’s lasting impact and successful methodology


WGIG’s methodology of collecting all opinions in comprehensive document while focusing on consensus areas was innovative

Explanation

The participant describes how WGIG decided to have two reports – a big report collecting all opinions that wasn’t necessarily consensus-based, and a focused report concentrating on areas of agreement. This approach made the large amount of material tractable and provided useful ideas for later implementation.


Evidence

Decision to create two reports – comprehensive compendium and focused consensus report


Major discussion point

Governance Methodology and Process Innovation


Topics

Legal and regulatory


WGIG’s approach of narrowing disagreements to basic alternatives provided actionable results without full consensus

Explanation

The participant explains that when consensus couldn’t be reached on governance arrangements, WGIG narrowed different proposals to bare basics and ended with four alternatives. This methodology can be used in contentious discussions to provide actionable results rather than zero results from lack of consensus.


Evidence

Four different basic alternatives for governance arrangements in final report


Major discussion point

Governance Methodology and Process Innovation


Topics

Legal and regulatory


IGF serves as effective agenda setter and issue framer but needs clearer role definition

Explanation

The participant argues that IGF is fantastic as an agenda setter and for framing issues, bringing problems to the table that governments may not be aware of. After the WSIS+20 process, there is a need to clearly define IGF’s role as the agenda setter while other forums handle action.


Evidence

IGF effectiveness demonstrated in developing countries like Brazil, Mexico, Kenya where issues are presented candidly


Major discussion point

IGF’s Performance and Future Challenges


Topics

Legal and regulatory


Agreed with

– Baher Esmat
– Vittorio Bertola
– Jovan Kurbalija

Agreed on

IGF’s value as discussion forum while acknowledging limitations


Disagreed with

– Avri Doria
– Jimson Olufuye

Disagreed on

IGF’s bottom-up nature and democratic participation


A

Alejandro Pisanty

Speech speed

135 words per minute

Speech length

821 words

Speech time

362 seconds

All internet governance problems are better solved by multistakeholder mechanisms with different stakeholder weights

Explanation

Pisanty argues that all internet governance problems and many others are much better solved by multistakeholder mechanisms, though stakeholder weights must be different. Institutional and organizational design are key, with governments involved decisively in areas like law enforcement.


Evidence

Anti-phishing working group and upcoming global anti-scam alliance as examples; ICANN reform process where governments chose not to take board seats due to legal liabilities and inability to agree on representatives


Major discussion point

Multistakeholder vs Multilateral Governance Models


Topics

Legal and regulatory


Agreed with

– Markus Kummer
– Avri Doria
– Charles Shaban
– Jimson Olufuye

Agreed on

Need for multistakeholder and multilateral approaches to coexist rather than compete


Countries pushing for multilateral approaches are also pushing against internet freedom

Explanation

Pisanty presents this as an acid test for why multistakeholder approaches need to be maintained. He argues that every single country that pushes for more multilateral governance is simultaneously pushing against internet freedom.


Major discussion point

Multistakeholder vs Multilateral Governance Models


Topics

Human rights


IGF is mature enough to handle controversial issues but some stakeholders are not ready for such discussions

Explanation

Pisanty argues that the IGF itself is mature enough for much more controversial issues, but the problem lies with some stakeholders, particularly governments that prefer power games in closed venues. He suggests governments would rather trade internet governance issues for other resources like oil or water rights.


Evidence

Some governments prefer closed government-only venues where politics involves trading internet governance for oil or water rights


Major discussion point

IGF’s Performance and Future Challenges


Topics

Legal and regulatory


Disagreed with

– Jovan Kurbalija
– Ariette Esterhuisen

Disagreed on

Approach to handling controversial topics in IGF


A

Avri Doria

Speech speed

157 words per minute

Speech length

1090 words

Speech time

416 seconds

Both multistakeholder and multilateral models must coexist and work together rather than in opposition

Explanation

Doria argues against viewing multistakeholder versus multilateral as opposition, instead advocating for coexistence. She points to examples like ICANN and São Paulo guidelines that show how the two can be brought together, noting that governments won’t give up multilateral approaches.


Evidence

ICANN example and São Paulo guidelines showing how multistakeholder and multilateral can work together


Major discussion point

Multistakeholder vs Multilateral Governance Models


Topics

Legal and regulatory


Agreed with

– Markus Kummer
– Alejandro Pisanty
– Charles Shaban
– Jimson Olufuye

Agreed on

Need for multistakeholder and multilateral approaches to coexist rather than compete


WGIG provided example of true participation as equals, which is missing in current governance models

Explanation

Doria emphasizes that WGIG was the last time participants truly participated as equals, where she could argue for hours against roles and responsibilities until the chair got tired. She notes that even IGF, while multistakeholder, operates top-down with authorities that participants must appeal to.


Evidence

Personal experience of being able to argue for hours against government positions until Nitin Desai got tired; current IGF has top-down authority structure


Major discussion point

Multistakeholder vs Multilateral Governance Models


Topics

Legal and regulatory


Agreed with

– William J. Drake
– Markus Kummer
– Raul Echeberria
– Wolfgang Kleinwachter
– Participant
– Jovan Kurbalija

Agreed on

WGIG’s lasting impact and successful methodology


Frank March’s role as secretary who wrote while listening and incorporating real-time feedback was crucial innovation

Explanation

Doria highlights Frank March’s unique role as secretary who sat in the room writing while talking to participants and asking about paragraphs in real-time. This contrasts with current processes where contributions disappear into buckets and may or may not be considered.


Evidence

Frank March sitting in room while writing, asking about paragraphs, bearing with participants saying ‘no, no, change that word’


Major discussion point

Governance Methodology and Process Innovation


Topics

Legal and regulatory


C

Carlos Afonso

Speech speed

135 words per minute

Speech length

211 words

Speech time

93 seconds

WGIG report identified four key public policy areas and 13 fundamental issues that remain valid today

Explanation

Afonso argues that beyond the definition, the WGIG report identified four key public policy areas that are still the main areas today, detailed in 13 fundamental issues that remain valid. He believes this comprehensive framework has stood the test of time and serves as an excellent reference.


Evidence

Four key public policy areas and 13 fundamental issues detailed in the report


Major discussion point

Nature and Evolution of Internet Governance


Topics

Legal and regulatory


Agreed with

– William J. Drake
– Ayesha Hassan
– Wolfgang Kleinwachter

Agreed on

Internet governance definition’s enduring relevance and broad applicability


J

Jovan Kurbalija

Speech speed

146 words per minute

Speech length

1122 words

Speech time

458 seconds

Current processes lack transparency in how contributions are reflected in final documents

Explanation

Kurbalija contrasts WGIG where participants felt their input was taken care of, even if not accepted verbatim, with current processes where contributions disappear in a ‘governance Bermuda Triangle.’ He notes that while AI can now help trace contributions, the human approach is preferable.


Evidence

AI can trace contributions to final documents; many current processes call for contributions that disappear without clear reflection


Major discussion point

Governance Methodology and Process Innovation


Topics

Legal and regulatory


Need to revisit Tunis compromise formula that balanced multistakeholder participation with UN umbrella

Explanation

Kurbalija argues that the Tunis compromise was a masterpiece that gave both camps something – the pro-governmental camp got internet governance issues addressed in a UN context, while the multistakeholder camp got participation. However, this formula cannot stay fixed and must be revisited, and he suggests an IGF Plus approach.


Evidence

Tunis compromise as masterpiece giving both camps something; his work as Executive Director of UN High-Level Panel arguing for IGF Plus


Major discussion point

Contemporary Challenges and Future Directions


Topics

Legal and regulatory


Disagreed with

– William J. Drake
– Bertrand de la Chapelle

Disagreed on

Future governance structure needs


Enhanced cooperation discussions should be brought into IGF as regular track rather than avoided

Explanation

Kurbalija controversially argues that the enhanced cooperation question should be brought as a track on IGF’s first day with all stakeholders discussing. He suggests calling it ‘enhanced coordination’ for smoother consumption and sees it as a simple solution to complete the Tunis formula.


Evidence

Enhanced cooperation as the ‘big elephant in the room’ and ‘last bit from Tunis formula’ that British diplomat brought as solution


Major discussion point

Contemporary Challenges and Future Directions


Topics

Legal and regulatory


Disagreed with

– Ariette Esterhuisen
– Alejandro Pisanty

Disagreed on

Approach to handling controversial topics in IGF


IGF has been valuable for capacity building and creating incremental development of new methodologies

Explanation

Kurbalija describes capacity building as a great achievement, sharing how his book ‘Introduction to Internet Governance’ serves as a diary of IGF’s evolution. He emphasizes the incremental development of methodologies, from human reporting to AI assistance, as an important but underappreciated legacy.


Evidence

Personal story of writing Introduction to Internet Governance book for 8 editions; development from human reporting to AI-assisted reporting at IGF


Major discussion point

IGF’s Performance and Future Challenges


Topics

Development


Agreed with

– Baher Esmat
– Vittorio Bertola
– Participant

Agreed on

IGF’s value as discussion forum while acknowledging limitations


IGF methodology and modus operandi represents untold story of high relevance for broader governance

Explanation

Kurbalija argues that IGF’s way of working, including incremental development in capacity building, consensus building, and reporting methodologies, is an untold story of high relevance for broader governance communities. He believes this legacy should be the foundation for IGF’s future and that the community is too shy about promoting it.


Evidence

Incremental development in all aspects of IGF work including capacity building, consensus building, and reporting


Major discussion point

Governance Methodology and Process Innovation


Topics

Legal and regulatory


B

Baher Esmat

Speech speed

111 words per minute

Speech length

575 words

Speech time

310 seconds

IGF has been primary global multistakeholder forum providing space for open discussion and capacity building

Explanation

Esmat argues that over almost 19 years, IGF has filled the gap identified by WGIG by providing a space for open discussion among all stakeholders on equal footing. He emphasizes the capacity building aspect as particularly important for those from the developing world.


Evidence

Numerous regional, national IGFs and similar platforms like school internet governance showing IGF’s impact


Major discussion point

IGF’s Performance and Future Challenges


Topics

Development


Agreed with

– Vittorio Bertola
– Participant
– Jovan Kurbalija

Agreed on

IGF’s value as discussion forum while acknowledging limitations


IGF has continuously evolved in topics and outcomes while maintaining non-decision making nature as strength

Explanation

Esmat argues that IGF’s evolution in addressing new topics like AI that didn’t exist 20 years ago shows the broad definition’s value. He believes the non-decision making nature, while debated as weakness by some, is actually one of IGF’s strengths and was intentionally designed to allow equal participation.


Evidence

AI and many other topics that were not in WGIG radar in 2004-2005 now being addressed; broad WGIG definition encompassing unforeseen issues


Major discussion point

IGF’s Performance and Future Challenges


Topics

Legal and regulatory


Disagreed with

– Vittorio Bertola
– Ayesha Hassan

Disagreed on

IGF’s effectiveness in addressing broader internet governance issues


IGF needs financial stability and sustainability through innovative funding models

Explanation

Esmat emphasizes that for IGF to continue serving as the global internet governance forum, financial stability and sustainability is key. He calls for all participants and contributors to come together and consider innovative ideas for sustainable funding models.


Major discussion point

IGF’s Performance and Future Challenges


Topics

Economic


V

Vittorio Bertola

Speech speed

215 words per minute

Speech length

833 words

Speech time

231 seconds

IGF should continue but needs better mobilization of energies and improved national IGF processes

Explanation

Bertola argues that while people find value in IGF and it should continue, there are practical problems like lack of space for workshops leading to disappointment. He criticizes national IGF processes, citing Italy’s capture by government and lack of genuine multistakeholder participation.


Evidence

People disappointed after trying to propose panels for several years without acceptance; Italian IGF captured by government for 6-7 years with multiple government changes


Major discussion point

IGF’s Performance and Future Challenges


Topics

Legal and regulatory


Agreed with

– Baher Esmat
– Participant
– Jovan Kurbalija

Agreed on

IGF’s value as discussion forum while acknowledging limitations


IGF failed to address economic and social questions due to lack of enforcement mechanisms against private sector

Explanation

Bertola argues that while technical resource governance worked well, the broader definition failed because participants were naive in thinking that putting everyone together would address economic and social questions. Private sector actors broke the internet down into walled gardens for profit, with no enforcement mechanisms to stop them.


Evidence

Survey in UK where half of young people said world would be better without internet; companies making money by breaking internet into walled gardens


Major discussion point

IGF’s Performance and Future Challenges


Topics

Economic


Disagreed with

– Baher Esmat
– Ayesha Hassan

Disagreed on

IGF’s effectiveness in addressing broader internet governance issues


C

Charles Shaban

Speech speed

154 words per minute

Speech length

374 words

Speech time

145 seconds

Multistakeholder mechanisms like UDRP show how different stakeholders can work effectively with multilateral frameworks

Explanation

Shaban shares an example from intellectual property disputes where multistakeholder approaches like UDRP required cooperation between ICANN, WIPO, private sector, lawyers, and civil society. He argues this demonstrates how different stakeholders can work with multilateral mechanisms effectively.


Evidence

UDRP (Uniform Domain Name Dispute Resolution Policy) involving ICANN, WIPO, private sector, lawyers, and civil society


Major discussion point

Multistakeholder vs Multilateral Governance Models


Topics

Legal and regulatory


Agreed with

– Markus Kummer
– Alejandro Pisanty
– Avri Doria
– Jimson Olufuye

Agreed on

Need for multistakeholder and multilateral approaches to coexist rather than compete


Need intersectional and rights-based approach to digital governance centering marginalized communities

Explanation

Shaban relays a question from Shaima Akhtar asking whether the rise of AI, surveillance technologies, and platform monopolies requires a more intersectional and rights-based approach to digital governance that centers the lived realities of women, youth, and marginalized communities.


Evidence

Rise of AI, surveillance technologies and dominance of platform monopolies


Major discussion point

Contemporary Challenges and Future Directions


Topics

Human rights


J

Jimson Olufuye

Speech speed

157 words per minute

Speech length

424 words

Speech time

161 seconds

Multistakeholderism helps governments fulfill their responsibilities to citizens rather than taking over government work

Explanation

Olufuye shares his experience during WGIG when a government representative accused the private sector of wanting to take over government work. He explained that they were actually helping government fulfill their objectives and responsibilities to citizens, which stopped the accusation and led to better understanding.


Evidence

Personal experience during WGIG where accusation stopped after explanation of helping government achieve objectives


Major discussion point

Multistakeholder vs Multilateral Governance Models


Topics

Legal and regulatory


Agreed with

– Markus Kummer
– Alejandro Pisanty
– Avri Doria
– Charles Shaban

Agreed on

Need for multistakeholder and multilateral approaches to coexist rather than compete


A

Ariette Esterhuisen

Speech speed

143 words per minute

Speech length

164 words

Speech time

68 seconds

IGF should facilitate debate on controversial topics like fair taxation of big tech companies

Explanation

Esterhuisen questions whether IGF is mature enough to handle controversial issues, citing the long-standing fear of putting enhanced cooperation on the agenda. She argues IGF should be able to facilitate debate on issues like fair tax payment by big tech companies, noting it took years before human rights and LGBT issues could be discussed.


Evidence

Historical examples of how long it took to discuss human rights and LGBT issues at IGF


Major discussion point

IGF’s Performance and Future Challenges


Topics

Economic


Disagreed with

– Jovan Kurbalija
– Alejandro Pisanty

Disagreed on

Approach to handling controversial topics in IGF


A

Audience

Speech speed

131 words per minute

Speech length

146 words

Speech time

66 seconds

Digital governance issues being moved to closed-door trade negotiations undermines democratic governance

Explanation

An audience member from IT4Change India points out that data governance issues are being taken out of democratic spaces and into closed-door multilateral spaces like digital trade negotiations. They ask how digital cooperation mechanisms can counter this trend, especially as issues get sealed in trade deals while GDC processes are ongoing.


Evidence

Data governance issues being decided in digital trade negotiations and regional closed-door plurilateral deals


Major discussion point

Contemporary Challenges and Future Directions


Topics

Economic


S

Sébastien Bachelet

Speech speed

149 words per minute

Speech length

69 words

Speech time

27 seconds

Need to assess government participation in internet governance discussions and offer practical solutions for future collaboration

Explanation

Bachelet raises two important questions about the current state of internet governance: how many government representatives are actually present in the room to learn from these discussions, and whether there’s readiness to organize future collaborative sessions. His offer to book a castle for future discussions suggests the need for dedicated spaces for meaningful dialogue.


Evidence

Observation of limited government representation in the room; offer to book castle for future discussions


Major discussion point

IGF’s Performance and Future Challenges


Topics

Legal and regulatory


B

Bertrand de la Chapelle

Speech speed

142 words per minute

Speech length

360 words

Speech time

151 seconds

WGIG’s methodology of presenting different options rather than watered-down consensus should be adopted by IGF

Explanation

De la Chapelle emphasizes the importance of WGIG’s methodology, particularly the interaction between people and the secretary’s ability to present multiple options rather than a single watered-down consensus version. He argues this approach was extremely important and could be applied to IGF to help frame issues and bring different actors together for substantive interaction.


Evidence

WGIG’s approach of presenting different options instead of watered-down consensus; role of secretary in making summaries


Major discussion point

Governance Methodology and Process Innovation


Topics

Legal and regulatory


Need for new multistakeholder working group to address IGF’s future after WSIS+20 process

Explanation

De la Chapelle argues that WGIG served as a way out of roadblocks at the end of WSIS first phase and created the most multistakeholder process in 20 years. He believes the WSIS+20 process won’t solve questions about IGF’s future, so there should be a new exercise – either a new WGIG or CSTD working group – to discuss IGF mandate revision and institutionalization after 20 years.


Evidence

WGIG as solution to WSIS roadblock; belief that WSIS+20 process won’t solve IGF future questions


Major discussion point

Contemporary Challenges and Future Directions


Topics

Legal and regulatory


Disagreed with

– William J. Drake
– Jovan Kurbalija

Disagreed on

Future governance structure needs


I

Israel Rosas

Speech speed

161 words per minute

Speech length

68 words

Speech time

25 seconds

WGIG experience should inform current facilitators on how to generate consensus in contentious processes

Explanation

Rosas seeks practical advice from WGIG members about consensus-building, asking what single piece of advice they would give to current WGIG facilitators working on outcome documents. This reflects the ongoing relevance of WGIG’s methodology for contemporary internet governance processes that face similar challenges in reaching agreement among diverse stakeholders.


Major discussion point

Governance Methodology and Process Innovation


Topics

Legal and regulatory


H

Hadi Alminyawi

Speech speed

114 words per minute

Speech length

154 words

Speech time

80 seconds

Questioning government motivation to participate in IGF if decisions are made elsewhere

Explanation

Alminyawi raises a fundamental question about the logic of government participation in IGF, asking why governments would be interested in managing or participating in discussions at IGF if the forum only frames issues for decision-making in other venues like the ITU. This highlights the tension between IGF’s discussion-focused mandate and the need for actionable outcomes that would motivate government engagement.


Evidence

IGF’s role as discussion forum that frames issues for decision-making in multilateral venues like ITU


Major discussion point

IGF’s Performance and Future Challenges


Topics

Legal and regulatory


Agreements

Agreement points

WGIG’s lasting impact and successful methodology

Speakers

– William J. Drake
– Markus Kummer
– Raul Echeberria
– Wolfgang Kleinwachter
– Participant
– Avri Doria
– Jovan Kurbalija

Arguments

WGIG demonstrated benefits of multistakeholder cooperation in UN context and legitimated this approach


WGIG facilitated WSIS negotiations by providing systematic mapping and structured discussion of issues


WGIG report found its way into final WSIS outcome and significantly impacted the process


WGIG strengthened concept of multistakeholderism and consolidated idea that all stakeholder participation is crucial


WGIG created unique culture of collaboration where every stakeholder brought different expertise to the table


WGIG’s methodology successfully brought together 30 people with different viewpoints to agree on contentious subjects


WGIG provided example of true participation as equals, which is missing in current governance models


Summary

Multiple speakers agree that WGIG was a groundbreaking success that legitimated multistakeholder cooperation in the UN context, created innovative methodology for consensus-building, and had lasting impact on internet governance processes.


Topics

Legal and regulatory


Internet governance definition’s enduring relevance and broad applicability

Speakers

– William J. Drake
– Ayesha Hassan
– Wolfgang Kleinwachter
– Carlos Afonso

Arguments

WGIG demystified internet governance by establishing that governance does not mean government


Internet governance definition has stood the test of time and adapted to new technologies like AI governance


WGIG definition with multistakeholder, collaborative, and holistic approaches is universal and applicable to AI governance


WGIG report identified four key public policy areas and 13 fundamental issues that remain valid today


Summary

Speakers consistently agree that the WGIG definition of internet governance has proven durable and remains applicable to contemporary challenges including AI governance and other emerging technologies.


Topics

Legal and regulatory


Need for multistakeholder and multilateral approaches to coexist rather than compete

Speakers

– Markus Kummer
– Alejandro Pisanty
– Avri Doria
– Charles Shaban
– Jimson Olufuye

Arguments

Multilateralism protects smaller countries and should not exclude multistakeholderism


All internet governance problems are better solved by multistakeholder mechanisms with different stakeholder weights


Both multistakeholder and multilateral models must coexist and work together rather than in opposition


Multistakeholderism helps governments fulfill their responsibilities to citizens rather than taking over government work


Summary

There is strong consensus that the dichotomy between multistakeholder and multilateral approaches is false, and both models need to work together complementarily rather than in opposition.


Topics

Legal and regulatory


IGF’s value as discussion forum while acknowledging limitations

Speakers

– Baher Esmat
– Vittorio Bertola
– Participant
– Jovan Kurbalija

Arguments

IGF has been primary global multistakeholder forum providing space for open discussion and capacity building


IGF should continue but needs better mobilization of energies and improved national IGF processes


IGF serves as effective agenda setter and issue framer but needs clearer role definition


IGF has been valuable for capacity building and creating incremental development of new methodologies


Summary

Speakers agree that IGF provides valuable space for multistakeholder dialogue and capacity building, but acknowledge it needs improvements in process, participation, and role clarity.


Topics

Legal and regulatory | Development


Similar viewpoints

These speakers share concern about the loss of transparent, participatory methodology that characterized WGIG, where contributions were clearly reflected and participants could see their input being incorporated in real-time.

Speakers

– Jovan Kurbalija
– Avri Doria
– Bertrand de la Chapelle

Arguments

Current processes lack transparency in how contributions are reflected in final documents


Frank March’s role as secretary who wrote while listening and incorporating real-time feedback was crucial innovation


WGIG’s methodology of presenting different options rather than watered-down consensus should be adopted by IGF


Topics

Legal and regulatory


These speakers believe IGF has the capacity to address controversial issues but is held back by stakeholder reluctance and lack of enforcement mechanisms, particularly regarding private sector accountability.

Speakers

– Alejandro Pisanty
– Anriette Esterhuysen
– Vittorio Bertola

Arguments

IGF is mature enough to handle controversial issues but some stakeholders are not ready for such discussions


IGF should facilitate debate on controversial topics like fair taxation of big tech companies


IGF failed to address economic and social questions due to lack of enforcement mechanisms against private sector


Topics

Legal and regulatory | Economic


These speakers emphasize IGF’s evolutionary capacity and its strength in building resilience and capacity across the internet governance ecosystem through continuous adaptation and learning.

Speakers

– Ayesha Hassan
– Baher Esmat
– Jovan Kurbalija

Arguments

Resilience across internet layers should be priority for future stakeholder collaboration


IGF has continuously evolved in topics and outcomes while maintaining non-decision making nature as strength


IGF has been valuable for capacity building and creating incremental development of new methodologies


Topics

Infrastructure | Development | Legal and regulatory


Unexpected consensus

Enhanced cooperation should be brought into IGF discussions

Speakers

– Jovan Kurbalija
– Anriette Esterhuysen

Arguments

Enhanced cooperation discussions should be brought into IGF as regular track rather than avoided


IGF should facilitate debate on controversial topics like fair taxation of big tech companies


Explanation

This represents unexpected consensus because enhanced cooperation has traditionally been seen as too controversial for IGF. The agreement that IGF should tackle this ‘elephant in the room’ issue directly challenges the forum’s historical avoidance of contentious topics.


Topics

Legal and regulatory


Need for new working group or process to address contemporary governance challenges

Speakers

– William J. Drake
– Bertrand de la Chapelle
– Wolfgang Kleinwachter

Arguments

New working group needed to address relationship between internet governance, data governance, and AI governance


Need for new multistakeholder working group to address IGF’s future after WSIS+20 process


Need to have the decision-makers at the table for discussions


Explanation

This consensus is unexpected because these speakers, who were part of the original WGIG's success, are calling for essentially recreating that model to address current challenges, suggesting the original approach was so effective it should be replicated.


Topics

Legal and regulatory


Overall assessment

Summary

The discussion reveals strong consensus on WGIG’s historical importance and methodology, the enduring relevance of its internet governance definition, the need for multistakeholder-multilateral cooperation, and IGF’s value as a discussion forum. There is also agreement on the need for better transparency in governance processes and the importance of capacity building.


Consensus level

High level of consensus among speakers, particularly on foundational principles and historical assessment. The strong agreement suggests that WGIG’s core contributions remain relevant and that its methodology could inform current governance challenges. However, there are also shared concerns about current limitations and the need for evolution in internet governance processes.


Differences

Different viewpoints

IGF’s bottom-up nature and democratic participation

Speakers

– Avri Doria
– Participant
– Jimson Olufuye

Arguments

IGF is not and never has been bottom-up, and the community should stop pretending otherwise, even while wishing it would become bottom-up


IGF serves as effective agenda setter and issue framer but needs clearer role definition


IGF has always been top-down, unlike ICANN, so it will take time before it becomes bottom-up


Summary

Avri Doria argues that IGF is not truly bottom-up and never has been, while the government participant defends IGF as having some bottom-up elements through workshops and National/Regional IGFs. Jimson Olufuye acknowledges it’s always been top-down but argues for patience in the transition.


Topics

Legal and regulatory


IGF’s effectiveness in addressing broader internet governance issues

Speakers

– Vittorio Bertola
– Baher Esmat
– Ayesha Hassan

Arguments

IGF failed to address economic and social questions due to lack of enforcement mechanisms against private sector


IGF has continuously evolved in topics and outcomes while maintaining non-decision making nature as strength


Internet governance has expanded to include emerging countries and new technologies while maintaining core discussion framework


Summary

Bertola argues that IGF failed in its broader mandate because it lacked enforcement mechanisms against private sector actors who broke the internet into walled gardens. Esmat and Hassan view IGF’s evolution and non-decision making nature as strengths that have allowed it to adapt and remain relevant.


Topics

Economic | Legal and regulatory


Approach to handling controversial topics in IGF

Speakers

– Jovan Kurbalija
– Anriette Esterhuysen
– Alejandro Pisanty

Arguments

Enhanced cooperation discussions should be brought into IGF as regular track rather than avoided


IGF should facilitate debate on controversial topics like fair taxation of big tech companies


IGF is mature enough to handle controversial issues but some stakeholders are not ready for such discussions


Summary

While all agree the IGF should handle controversial topics, they disagree on the approach: Kurbalija wants to address enhanced cooperation directly, Esterhuysen focuses on economic issues like taxation, and Pisanty blames stakeholder immaturity rather than the IGF's capacity.


Topics

Legal and regulatory | Economic


Future governance structure needs

Speakers

– William J. Drake
– Bertrand de la Chapelle
– Jovan Kurbalija

Arguments

New working group needed to address relationship between internet governance, data governance, and AI governance


Need for new multistakeholder working group to address IGF’s future after WSIS+20 process


Need to revisit Tunis compromise formula that balanced multistakeholder participation with UN umbrella


Summary

Drake wants a conceptual working group to clarify relationships between different governance areas, de la Chapelle wants a group focused on IGF’s institutional future, while Kurbalija wants to fundamentally revisit the Tunis compromise that created IGF.


Topics

Legal and regulatory


Unexpected differences

Assessment of IGF’s success in fulfilling WGIG’s vision

Speakers

– Vittorio Bertola
– Baher Esmat
– Carlos Afonso

Arguments

IGF failed to address economic and social questions due to lack of enforcement mechanisms against private sector


IGF has been primary global multistakeholder forum providing space for open discussion and capacity building


WGIG report identified four key public policy areas and 13 fundamental issues that remain valid today


Explanation

This disagreement is unexpected because all speakers were involved in or supportive of WGIG’s work, yet they have fundamentally different assessments of whether IGF achieved WGIG’s goals. Bertola’s harsh critique contrasts sharply with Esmat’s positive assessment and Afonso’s emphasis on enduring relevance.


Topics

Legal and regulatory | Economic


Role of Frank March and methodology importance

Speakers

– Avri Doria
– Jovan Kurbalija
– Bertrand de la Chapelle

Arguments

Frank March’s role as secretary who wrote while listening and incorporating real-time feedback was crucial innovation


Current processes lack transparency in how contributions are reflected in final documents


WGIG’s methodology of presenting different options rather than watered-down consensus should be adopted by IGF


Explanation

While all speakers praise WGIG’s methodology, they unexpectedly focus on different aspects as most important – Doria emphasizes the human element and real-time interaction, Kurbalija focuses on transparency and traceability, while de la Chapelle emphasizes the option-presentation approach. This suggests different understandings of what made WGIG successful.


Topics

Legal and regulatory


Overall assessment

Summary

The main areas of disagreement center on IGF’s democratic legitimacy and bottom-up nature, its effectiveness in addressing broader governance challenges beyond technical issues, approaches to handling controversial topics, and what type of institutional reforms are needed for the future.


Disagreement level

Moderate disagreement with significant implications. While speakers share common values about multistakeholder governance and IGF’s importance, they have fundamentally different assessments of IGF’s performance and different visions for its future. These disagreements reflect deeper tensions in internet governance between idealistic multistakeholder principles and practical governance challenges, particularly regarding enforcement mechanisms and democratic participation. The disagreements suggest the internet governance community faces critical decisions about IGF’s evolution and role in addressing contemporary digital governance challenges.




Takeaways

Key takeaways

WGIG’s 20-year legacy demonstrates that multistakeholder cooperation in UN context can be effective and legitimating


The WGIG definition of internet governance has stood the test of time and remains applicable to emerging technologies like AI governance


Multistakeholder and multilateral governance models should coexist and complement each other rather than being viewed as antagonistic


The IGF has successfully served as a global forum for capacity building and agenda setting, but faces challenges in handling controversial topics and ensuring bottom-up participation


Current governance processes often lack transparency in how stakeholder contributions are reflected in final outcomes


The methodology used by WGIG – collecting all opinions while focusing on consensus areas and narrowing disagreements to basic alternatives – remains relevant for contemporary governance challenges


Financial sustainability and clearer role definition are critical for the IGF’s future effectiveness


There is growing tension between the need for multistakeholder governance and the reality of platform monopolies and government regulation


Resolutions and action items

Jovan Kurbalija will present the 8th edition of ‘Introduction to Internet Governance’ book on Wednesday


Suggestion to use AI tools to help trace stakeholder contributions to final governance documents


Proposal to include enhanced cooperation discussions as a regular track in IGF rather than avoiding the topic


Call for more innovative funding models to ensure IGF’s financial stability and sustainability


Unresolved issues

How to balance multistakeholder governance with the need for enforcement mechanisms against powerful private sector actors


Whether the IGF should transition from discussion-only forum to having some decision-making capacity


How to improve national and regional IGF processes that have been captured by governments or become ineffective


How to make governance processes truly bottom-up rather than top-down with occasional consultations


How to engage decision-makers and governments more effectively in IGF discussions


How to address digital governance issues being moved to closed-door trade negotiations


Whether a new working group is needed to address the relationship between internet governance, data governance, and AI governance


How to handle controversial topics like fair taxation of big tech companies and rights-based approaches to digital governance


Suggested compromises

Establishing a new multistakeholder working group after December 2025 to address IGF’s future mandate and institutionalization


Adopting ‘rough consensus’ or ‘near consensus’ (99%) rather than requiring 100% consensus in governance processes


Creating hybrid models that combine multistakeholder policy-making with multilateral implementation mechanisms


Using the IGF as an agenda-setter and issue-framer while allowing other forums to make formal decisions


Implementing the Sao Paulo guidelines approach to bring multistakeholder and multilateral mechanisms together


Allowing the IGF to handle controversial topics while maintaining its non-decision making nature


Developing clearer roles and responsibilities for different stakeholders in various governance processes


Thought provoking comments

We had Frank March, our secretary, our main writer, sitting in the room with us while he was writing, talking to us, asking about this paragraph or that paragraph… What is the major problem today is that we have so many processes which call you to have your say, make contribution. And your contribution disappears in some sort of a governance Bermuda Triangle.

Speaker

Avri Doria and Jovan Kurbalija


Reason

This observation brilliantly captures a fundamental shift in governance processes from genuine collaborative writing to performative consultation. The metaphor of a ‘governance Bermuda Triangle’ where contributions vanish is particularly powerful in highlighting the erosion of meaningful participation.


Impact

This comment shifted the discussion from celebrating past achievements to critically examining current governance failures. It introduced the concept that procedural innovation (having the writer in the room) was as important as substantive outcomes, influencing later speakers to focus more on methodology and process design.


I’m convinced that all problems of Internet governance and many others are much better solved by multi-stakeholder mechanisms… every single country that pushes for more multilateral is also pushing against internet freedom. That’s probably the acid test.

Speaker

Alejandro Pisanty


Reason

This comment provides a provocative litmus test for evaluating governance approaches by linking multilateralism to internet freedom restrictions. It cuts through diplomatic niceties to suggest a clear correlation between governance preference and freedom outcomes.


Impact

This stark framing challenged the prevailing diplomatic tendency to treat multilateral and multistakeholder approaches as equally valid. It pushed subsequent speakers to move beyond the ‘false dichotomy’ language and grapple with real tensions between these approaches.


We thought that by putting everyone together, we would be able to address the economic and social questions, and this didn’t happen… we were naive… The people that could make money out of breaking down the internet and turning it into walled gardens, they just went on and made money. And nobody could stop them because we had no stick.

Speaker

Vittorio Bertola


Reason

This is a brutally honest assessment that challenges the fundamental assumptions underlying the WGIG’s approach. It acknowledges that multistakeholder governance failed to prevent the internet’s fragmentation into commercial silos, introducing the crucial concept that effective governance requires enforcement mechanisms (‘sticks’), not just dialogue.


Impact

This comment created the most significant turning point in the discussion, forcing participants to confront the limitations of their achievements. It shifted the conversation from celebration to critical self-reflection and sparked responses about the need for harder regulatory approaches and the tension between openness and control.


Any notion we have that IGF has bottomed up is something that we should quit pretending. It is not. It hasn’t been… So I love the IGF, I love to see it continue, and I’d really like to see it become bottom up.

Speaker

Avri Doria


Reason

This comment directly challenges one of the core mythologies of the IGF – that it represents genuine bottom-up governance. Coming from a WGIG veteran, this critique carries particular weight and forces honest examination of the gap between rhetoric and reality.


Impact

This blunt assessment validated concerns raised by other speakers and shifted the discussion toward more realistic appraisals of current governance structures. It influenced subsequent speakers to acknowledge the IGF’s limitations more openly and discuss concrete reforms rather than defensive justifications.


The famous, very controversial, big elephant in the room question of enhanced cooperation should be brought as one of the track on the first day of IGF… I never understood why it wasn’t possible.

Speaker

Jovan Kurbalija


Reason

This comment directly addresses the political taboos that have constrained IGF discussions. By naming the ‘elephant in the room’ and questioning why controversial topics are avoided, it challenges the forum’s risk-averse culture and suggests that maturity requires engaging with difficult issues.


Impact

This observation opened space for other participants to discuss controversial topics and the IGF’s capacity to handle them. It led to a broader conversation about whether the forum is mature enough to tackle divisive issues like taxation of big tech companies and enhanced cooperation.


One month ago, there was a survey in the UK, and they asked the young people… would you live better off if the Internet didn’t exist? And half of them said yes… this is really, really terrible for us that work to create it and make it a mass instrument.

Speaker

Vittorio Bertola


Reason

This statistic serves as a devastating indictment of how far the internet has diverged from its original promise. It provides concrete evidence that the internet governance community has failed in its broader mission, moving beyond technical governance to fundamental questions about the internet’s social value.


Impact

This shocking statistic reframed the entire discussion by introducing the perspective of those who feel harmed rather than helped by the internet. It forced participants to confront the possibility that their work, while technically successful, may have contributed to broader social problems.


Overall assessment

These key comments fundamentally transformed what began as a celebratory reunion into a critical examination of both achievements and failures. The discussion evolved through three distinct phases: initial celebration of WGIG’s procedural innovations and definitional work, honest acknowledgment of governance limitations and the gap between rhetoric and reality, and finally a sobering confrontation with the internet’s current social problems. The most impactful comments challenged core assumptions about multistakeholder governance effectiveness, forced recognition of enforcement gaps, and introduced uncomfortable evidence about the internet’s social impact. Rather than defensive responses, these provocative observations generally prompted deeper reflection and more nuanced analysis from other participants, demonstrating the intellectual maturity of this community even when confronting difficult truths about their life’s work.


Follow-up questions

Should we now push for a more intersectional and rights-based approach in defining digital governance that generally centers the lived realities of women, youth, and marginalized communities?

Speaker

Shaima Akhtar (via online question)


Explanation

This question addresses the need to evolve digital governance frameworks to be more inclusive and representative of marginalized groups, particularly given technological advances like AI and surveillance technologies.


If you were to give a single piece of advice to the WGIG facilitators on how to reach or generate consensus for this process, for the outcome document they are drafting, what would it be?

Speaker

Israel Rosas


Explanation

This seeks practical guidance from experienced WGIG members on consensus-building methodologies that could be applied to current digital governance processes.


Do you think the IGF is ready to actually handle controversial issues? Is the IGF mature enough now to be able to put enhanced cooperation, fair tax payment by big tech companies, and other controversial topics on the agenda?

Speaker

Anriette Esterhuysen


Explanation

This questions whether the IGF has evolved sufficiently to tackle difficult and politically sensitive topics rather than avoiding controversial discussions.


How could we use digital cooperation mechanisms available to us to counter the trend of digital governance issues being taken out of democratic spaces and into closed-door multilateral spaces such as digital trade negotiations?

Speaker

Nandini (IT for Change, India)


Explanation

This addresses the challenge of maintaining democratic governance of digital issues when they are increasingly being decided in exclusive trade negotiation forums.


Do you think there would be a benefit in having a sort of new exercise like WGIG after December, or a CSTD working group on revision of the mandate of the IGF after 20 years?

Speaker

Bertrand de la Chapelle


Explanation

This proposes the need for a new structured multi-stakeholder process to address the future of the IGF and resolve current institutional challenges.


Why would governments actually participate or take a role in the IGF if all it does is discuss issues and frame them for decision-making in other forums like the ITU?

Speaker

Hadi Alminyawi


Explanation

This questions the value proposition of the IGF for government participation given its non-decision-making nature.


How many government representatives are in the room?

Speaker

Sébastien Bachollet


Explanation

This highlights the ongoing challenge of government engagement in IGF processes and the need to understand participation patterns.


Should we revisit the Tunis formula that established the IGF as a multi-stakeholder body under UN umbrella?

Speaker

Jovan Kurbalija


Explanation

This suggests that the foundational compromise that created the IGF may need to be reconsidered given current challenges and changing circumstances.


How can we better define the relationship between Internet governance, data governance, AI governance, and digital governance to reduce conceptual confusion?

Speaker

William J. Drake


Explanation

This addresses the need for conceptual clarity as new forms of governance emerge and overlap with traditional internet governance frameworks.


How can we improve the national and regional IGF processes to make them more effective and truly multi-stakeholder?

Speaker

Vittorio Bertola


Explanation

This addresses practical challenges in implementing the IGF model at national and regional levels, using the Italian IGF as a problematic example.


How can we ensure financial stability and sustainability of the IGF to guarantee its continued role at the global level?

Speaker

Baher Esmat


Explanation

This addresses a fundamental operational challenge that could affect the IGF’s ability to continue serving as the global internet governance forum.


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

WS #83 The Relevance of DPGs for Advancing Regional DPI Approaches


Session at a glance

Summary

This discussion focused on the relevance of Digital Public Goods (DPGs) for advancing regional Digital Public Infrastructure (DPI) approaches, featuring perspectives from Africa, India, Europe, and Latin America. The session was hosted by the Digital Public Goods Alliance and explored how different regions are implementing DPI using open-source solutions to achieve digital transformation and inclusion.


From the African perspective, Desire Kachenje highlighted that DPI development is government-driven but ecosystem-enabled, with countries like Tanzania building interoperable systems using DPGs like X-Road while engaging private sector partners. She emphasized the importance of challenge-driven approaches that ensure citizen adoption and the need for local capacity building, citing Rwanda’s DPI center as an example. However, she noted significant challenges around data governance frameworks and policy harmonization across borders.


Rahul Matthan from India explained that the “India Stack” approach focuses on creating modular, interoperable, and open systems that can be layered in any order, not necessarily following the identity-payments-data sequence. He emphasized that DPI enables countries to leapfrog development, achieving in 10 years what might otherwise take 50 years, and advocated for embedding governance directly into digital architecture rather than relying solely on traditional regulatory approaches.


Henri Verdier discussed Europe’s approach to digital sovereignty, noting strong political alignment with DPI principles due to Europe’s tradition of public services and open standards. He highlighted the challenge of coordinating 27 different national solutions while building interoperability, emphasizing that the EU stack should be a “cloud of solutions” rather than a single system. He stressed the importance of maintaining democratic control over digital infrastructure to prevent corporate capture of governance functions.


Renata Avila presented Latin America’s community-driven approach, noting that seven countries have legislation supporting open source and open content. She highlighted successful examples like Brazil’s PIX payment system, which has expanded internationally, and emphasized the region’s strength in building active communities around digital public goods. The discussion revealed common challenges including ensuring interoperability, addressing local capacity needs, and maintaining data privacy and security, while funding was surprisingly deprioritized by participants who recognized that DPI offers cost-effective alternatives to proprietary solutions.


Keypoints

## Major Discussion Points:


– **Regional Approaches to Digital Public Infrastructure (DPI) Development**: Speakers from Africa, India, Europe, and Latin America shared distinct regional strategies – Africa focusing on government-driven but ecosystem-enabled approaches, India’s modular “stack” methodology, Europe’s emphasis on digital sovereignty and interoperability, and Latin America’s community-driven open source initiatives.


– **The Role of Digital Public Goods (DPGs) in Scaling DPI**: Discussion centered on how open source software, open data, and open content can enable countries to build sustainable, interoperable digital infrastructure while maintaining local control and reducing dependencies on proprietary solutions.


– **Cross-Border Interoperability and Cooperation**: Emphasis on the importance of building DPI systems that can work across national boundaries, with examples like Brazil’s PIX payment system being used in Europe and regional cooperation initiatives in Africa and the Caribbean for financial inclusion and data sharing.


– **Challenges in Implementation**: Key barriers identified include lack of local capacity and technical expertise, data privacy and security concerns, ensuring inclusive access (especially for the 2.6 billion people without internet), and balancing innovation speed with proper safeguards and governance.


– **Digital Sovereignty vs. Global Cooperation**: Tension between maintaining national control over digital infrastructure while enabling international collaboration, with particular focus on reducing dependence on big tech platforms and building locally-controlled alternatives.


## Overall Purpose:


The discussion aimed to explore different regional approaches to scaling Digital Public Infrastructure globally, examining how Digital Public Goods can facilitate this scaling while addressing challenges around local agency, interoperability, and inclusive technology development.


## Overall Tone:


The discussion maintained a collaborative and optimistic tone throughout, with speakers sharing experiences and best practices rather than competing perspectives. There was a strong sense of shared purpose among participants from different continents, united by common concerns about digital sovereignty and inclusion. The tone became particularly energized when discussing concrete examples of successful cross-border cooperation and when addressing audience questions about governance models and access challenges.


Speakers

**Speakers from the provided list:**


– **Jon Lloyd** – Director of Advocacy and 50-in-5 at the Digital Public Goods Alliance Secretariat; Session moderator


– **Desire Kachenje** – Senior principal at the Codevelop Fund, based in Dar es Salaam, Tanzania; Looks after investments in Africa for the Codevelop Fund (a non-profit investment fund focused on supporting governments in rolling out digital public infrastructure)


– **Rahul Matthan** – From Trilegal; Expert on India’s digital public infrastructure (India Stack)


– **Henri Verdier** – From France; Former head of IT department for the French government, currently ambassador; Expert on European approaches to digital sovereignty and open source


– **Renata Avila** – From the Open Knowledge Foundation; Expert on Latin American perspectives on digital public infrastructure and digital commons


– **Audience** – Israel Rosas from the Internet Society (speaking in personal capacity)


**Additional speakers:**


– **Pei-Lin** – Online moderator, colleague at the Digital Public Goods Alliance (mentioned but did not speak in transcript)


– **Max** – Rapporteur for the session, colleague at the Digital Public Goods Alliance (mentioned but did not speak in transcript)


Full session report

# Digital Public Infrastructure and Digital Public Goods: Regional Approaches to Global Scaling


## Executive Summary


This workshop session, hosted by the Digital Public Goods Alliance at IGF, brought together experts from four continents to examine how Digital Public Goods (DPGs) can advance regional Digital Public Infrastructure (DPI) approaches. The session featured perspectives from Africa (Desire Kachenje), India (Rahul Matthan), Europe (Henri Verdier), and Latin America (Renata Avila), moderated by Jon Lloyd from the Digital Public Goods Alliance Secretariat.


The discussion revealed both convergent principles and divergent implementation strategies across regions, with strong consensus emerging around core DPI values of modularity, interoperability, and local capacity building. Jon Lloyd announced that Kazakhstan had joined as the 26th country in the 50-in-5 campaign, demonstrating growing global momentum for DPI initiatives.


## Regional Approaches to Digital Public Infrastructure Development


### Africa: Government-Led with Ecosystem Engagement


Desire Kachenje outlined Africa’s approach as “government-driven but ecosystem-enabled,” emphasizing that while governments must lead DPI initiatives to ensure public interest alignment, successful implementation requires active private sector and civil society engagement.


Tanzania exemplifies this approach, building DPI layers using both established DPGs like X-Road and locally developed platforms including the Jamii data exchange platform and Jamii wallet. Kachenje emphasized challenge-driven implementations, noting that projects addressing specific citizen needs achieve higher adoption rates than technology-first approaches.


The SADC region’s cross-border financial inclusion project demonstrates this principle, focusing on solving real problems for citizens conducting cross-border transactions. Rwanda’s establishment of a DPI center shows the continent’s commitment to building local capacity, though significant challenges remain around data governance frameworks and policy harmonization across borders.
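In a purely illustrative sketch (no real SADC, FinMark Trust, or X-Road API is implied, and all country names and records below are invented), the cross-border financial inclusion flow described above amounts to a bank routing an ID verification request through a shared exchange layer to the right national registry:

```python
# Hypothetical sketch of cross-border ID verification through a
# data exchange layer, in the spirit of the SADC use case above.
# Country codes and ID records are illustrative only.

class NationalRegistry:
    """One country's foundational ID system."""
    def __init__(self, country, ids):
        self.country = country
        self._ids = ids  # set of valid ID numbers

    def verify(self, id_number):
        return id_number in self._ids

class ExchangeLayer:
    """Interoperability layer routing requests to member registries."""
    def __init__(self):
        self._members = {}

    def join(self, registry):
        self._members[registry.country] = registry

    def verify(self, country, id_number):
        registry = self._members.get(country)
        if registry is None:
            return False  # country not connected to the exchange
        return registry.verify(id_number)

exchange = ExchangeLayer()
exchange.join(NationalRegistry("ZW", {"ZW-001"}))
exchange.join(NationalRegistry("ZA", {"ZA-001"}))

# A South African bank verifies a Zimbabwean customer's home-country ID:
print(exchange.verify("ZW", "ZW-001"))  # True
print(exchange.verify("ZW", "ZW-999"))  # False
```

In practice an exchange layer of this kind (X-Road-style) adds member authentication, message signing, and audit logging between registries; the sketch shows only the routing idea.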


### India: Flexible Modular Architecture


Rahul Matthan clarified misconceptions about India’s “stack” approach, acknowledging the terminology has created impressions of rigid implementation sequences. “I almost feel I must apologise for India stack because we started this idea of a stack, which leaves the impression that you must necessarily layer first identity, then payments, and then data sharing.”


India’s approach actually focuses on creating modular, interoperable elements that can be implemented in any order to address specific national priorities. This modularity enables countries to potentially achieve in 10 years what might otherwise require 50 years of incremental progress.


Matthan introduced the concept of building governance into system architecture itself, arguing that digital infrastructure can enable simultaneous innovation and regulation on the same platform.


### Europe: Digital Sovereignty Through Public Services


Henri Verdier positioned Europe’s approach within digital sovereignty concerns and the continent’s strong public service tradition. European alignment with DPI principles stems from both ideological commitment to public services and practical concerns about corporate control over digital infrastructure.


Verdier framed the discussion around fundamental democratic governance questions: whether societies can still empower people through infrastructure and good governments, or must accept living within big corporations’ infrastructure frameworks.


Rather than pursuing a single unified system, Europe emphasizes creating interoperable solutions that respect national sovereignty while building coherent regional capabilities. Verdier cited France’s partnership between the National Geographical Institute and OpenStreetMap as an example of successful public-private collaboration.


### Latin America: Community-Driven Innovation


Renata Avila highlighted Latin America’s strength in community-driven development, noting that several countries have legislation supporting open source and open content. The region’s approach emphasizes grassroots engagement, with active volunteer communities maintaining digital public goods like CKAN, Decidim, and other platforms.


Brazil’s PIX payment system exemplifies successful regional innovation, now expanding internationally. Similarly, India’s UPI system is gaining traction in the region through South-South cooperation agreements.


Avila distinguished between having strong DPG communities and implementing comprehensive DPI strategies, noting that “in Latin America we are very good at the digital public goods but we haven’t jumped yet to the big digital public infrastructure plans.”


## Key Challenges and Priorities


Interactive polls revealed participant priorities and concerns. When asked about top challenges, responses were evenly split between lack of interoperability, data privacy and security concerns, and local capacity limitations. A second poll showed participants prioritizing open source first principles and local talent development over funding concerns.


### Digital Inclusion and Access


The challenge of reaching 2.6 billion people without internet access emerged as critical, though participants agreed this shouldn’t halt DPI development. Solutions discussed included USSD-based platforms for basic mobile phones, physical access points at government offices, and offline-capable systems that synchronize when connectivity becomes available.


Matthan emphasized hybrid physical-digital approaches, while Kachenje outlined practical accommodations for various access methods. The consensus was that DPI systems should be designed from the outset to accommodate multiple access methods rather than assuming universal internet connectivity.


### Cross-Border Interoperability


Concrete examples demonstrated practical cross-border cooperation possibilities. Brazil’s PIX system’s international expansion and India’s UPI agreements with multiple countries illustrate how national DPI systems can achieve international reach while maintaining local control.


Regional initiatives are emerging across continents, with Africa developing payment systems and Latin America sharing geospatial infrastructure for climate-related data. The technical foundation for this interoperability lies in the modular, open-source nature of DPG-based systems.


## Governance Models and Democratic Control


An audience question from Israel Rosas about multi-stakeholder versus decentralized governance models revealed nuanced disagreements about optimal governance approaches. Avila argued for commons-based approaches ensuring community engagement beyond political changes, while Verdier emphasized that digital sovereignty requires “the ability to implement collective democratic decisions through technology infrastructure.”


The discussion revealed different pathways toward shared objectives of democratic control and community engagement, rather than incompatible visions.


## Economic Considerations


Contrary to common assumptions about resource constraints, funding emerged as a deprioritized concern among participants. This reflected growing awareness of inefficient spending on proprietary technology solutions that fail to deliver value or contribute to local economic development.


Avila noted awareness of “money wasted on tech monopolies that don’t deliver value or pay taxes locally,” while Matthan described DPI as “a relatively cheap alternative to traditional development approaches.” However, Verdier called for better economic theory to understand DPI’s role as public service infrastructure creating ecosystem-wide value.


## Climate Applications and Future Directions


Matthan suggested applying DPI principles to climate challenges, arguing that the same approaches enabling financial inclusion could revolutionize climate action by connecting previously siloed climate data systems. Avila reinforced this potential by highlighting Latin America’s successful regional cooperation on geospatial infrastructure for climate monitoring.


The Digital Public Goods Alliance’s open source policies survey was mentioned as part of ongoing efforts to understand and support DPG implementation globally.


## Areas of Consensus and Ongoing Challenges


Despite diverse regional contexts, participants demonstrated remarkable consensus on fundamental principles including modular, interoperable approaches and the priority of local capacity building over funding concerns.


Persistent challenges include harmonizing data governance frameworks across borders, integrating legacy systems with new DPI approaches, and developing sustainable funding models for DPGs beyond donor-funded projects.


## Conclusion


The discussion revealed both the potential and complexity of scaling Digital Public Infrastructure globally through Digital Public Goods. While regional approaches vary significantly in implementation details, convergence on fundamental principles of openness, interoperability, and local capacity building provides a foundation for continued cooperation and mutual learning.


The session demonstrated that DPI development concerns not merely technical challenges but fundamentally involves democratic governance, economic development, and social inclusion in the digital age. The emphasis on community engagement and local capacity building suggests that sustainable DPI approaches must emerge from local priorities and capabilities, supported by global cooperation on technical standards and knowledge sharing.


Session transcript

Jon Lloyd: Please welcome the speakers. Good morning, everyone. It's a real pleasure to be with you here this morning. Thank you to our generous hosts, the Kingdom of Norway, and a warm welcome to our session, the relevance of DPGs for advancing regional DPI approaches, here in Workshop Room 2. I'm delighted to see everyone, both online, wherever in the world you are, and in person, bright-eyed and bushy-tailed for our first session of the second day. I'm the Director of Advocacy and 50-in-5 at the Digital Public Goods Alliance Secretariat. We're delighted to have such a diverse group of participants both in person and online today. Today we'll be talking about digital public infrastructure, also known as DPI, and digital public goods, also known as DPGs, in the context of the Global Digital Compact. Before we get going too far, though, I just wanted to introduce the 50-in-5 campaign. It's one of the ways that the Global Digital Compact is being put into action, with the goal of making the world a safer and more inclusive place for all. We're just over a year and a half into the campaign now, and in the spirit of this session, I'm very excited to announce that Kazakhstan is formally participating in the campaign as the 26th 50-in-5 country. We're absolutely thrilled to share their commitment to implementing safe and inclusive DPI alongside their fellow 50-in-5 countries, and we're delighted to have them here today. This topic is especially timely and relevant as we consider last year's Global Digital Compact.
In the GDC, countries have committed to implementing digital public infrastructure with safety and inclusion at its core, as well as committing to collaborate and cooperate with one another through sharing digital public goods. So, what does this mean in practice? In practice, countries can freely adopt digital public goods. That's open source software, open data, open AI models, and open content collections that adhere to privacy and other applicable laws and best practices, do no harm, and help attain the Sustainable Development Goals. Countries can adopt these digital public goods and use them to build components of their own DPI. Just before I introduce our panel, I also want to introduce the fantastic team helping facilitate the session. Joining as our online moderator is Pei-Lin, and serving as our rapporteur today is Max, both my colleagues here at the Digital Public Goods Alliance. Thank you to the two of you; we're incredibly fortunate to have you here today. So, let's get started. On our panel, we have Desire Kachenje from Codevelop. Next to me here is Rahul Matthan from Trilegal, Henri Verdier from France, and Renata Avila from the Open Knowledge Foundation. Our objective for the next 75 minutes is clear: we want to explore different regional approaches to scaling DPI, and it's an opportunity to tackle questions of scale, interoperability, and local agency in tech development. For our remote participants, you'll be able to use our virtual platform's Q&A feature for questions and the chat for quick comments and reactions; that is how we'll be taking your questions, alongside questions from the room here onsite. Thank you again for being here. Let's make this a truly insightful and collaborative session, and without further ado, over to you, Desire.


Desire Kachenje: Thank you so much. I will start by introducing myself. I am very honoured to be here. My name is Desire Kachenje. I'm based in Dar es Salaam, Tanzania. I am a senior principal, and I look after investments, particularly in Africa, for the Codevelop Fund. Just really quickly, I'm sure a lot of our ecosystem here has heard about us, but Codevelop is a non-profit investment fund that focuses on supporting specifically governments, but also other stakeholders, in rolling out digital public infrastructure. Our focus also extends to supporting with some of the challenges and bottlenecks, as well as doing research to understand a little bit more what digital public infrastructure looks like, and the different approaches when it comes to deploying it. And we work very closely with DPGs to see how they can make this sustainable. John, I don't know if you want me to proceed?


Jon Lloyd: Sure, yeah. I would love to hear some context-setting remarks from the African perspective. Great.


Desire Kachenje: So, I think one of the key things that a lot of us are hearing, and what we're seeing on the continent, is that digital public infrastructure is not just infrastructure. There are a number of other things that need to be considered when we want to roll out sustainable and scalable DPI. It's also evolving to be not just a digital transformation initiative, but also a way to grow the ecosystem, to inspire innovation, and to bring in the private sector and other players. So, there are three key things that we're seeing in the African region that I want to talk about. The first one would be that it's very government-driven, but ecosystem-enabled. I can give a good example here with Tanzania, where I'm based. Over the past few years, even before the concept of digital public infrastructure was formed and some of the research around it had been done, Tanzania has been building different layers of DPI. They started working on the ID, like most countries, and then they started building an interoperable digital payment system, starting with connecting mobile operators separately, then connecting banks, and then thinking, how can we build a platform that connects these two? In their first exploration of how this payment platform would look, they did work with a DPG, and they then decided to roll out an in-house platform that was, and still is, managed by the government. Right now, Tanzania is exploring a very interesting implementation plan whereby, while it's government-driven, it is rolling out what they are calling the Jamii data exchange platform. This is the third layer of the DPI. As they are doing this, they are not only working with local implementers within Tanzania, but they are bringing in different DPGs.
Their data exchange platform is built on X-Road, so they are working closely with X-Road, but we are also working with them to build out what they are calling a Jamii wallet, which is a use case on top of this data exchange platform. They are doing this using digital local, which is a different type of DPG. At the same time, they are working closely with the private sector to see how these platforms can interact, not just with government institutions, but also with private sector institutions, and allow data exchange. That's why I'm saying government-driven, but very much ecosystem-enabled. The second thing is challenge-driven DPI. With a lot of the public infrastructure platforms that have been rolled out so far, we are seeing a little bit of slowness when it comes to adoption. Even if a platform is interoperable, even if it's safe, if there is no adoption, that means there's no engagement with citizens and no engagement with the government institutions that need to use it, and it cannot meet some of the potential that we see when it comes to DPI. Implementations that start with a challenge are not only more innovative and more citizen-focused, but they also have more adoption from the point of rollout. I'll give a good example in the SADC region right now. SADC is the Southern African Development Community; there are about 16 countries currently in the SADC group. One of the challenges that came up is that there are a lot of migrants moving between these countries. While many of these migrants might have a national ID and other documents, they are still not formally included in financial services. The question then became why, and the quick answer was that some of them just don't have documents.
For example, say I'm Zimbabwean and I'm now working in South Africa, but I don't have the right documents in South Africa to support me in being formally included in financial services. But then, at the same time, we know that a lot of these countries already have national digital IDs. So the SADC group reached out to us and another partner within South Africa called FinMark Trust to figure out: hey, can we build a use case that connects the national foundational IDs that already exist within these countries, to support migrants in using formal financial services within Africa? We've seen quick stakeholder engagement in this project. We already have the private sector, that is, the banks, joining in, and we have the central banks paying attention to the project and seeing how they can come in. Then, last but not least, innovation needs to meet sovereignty. That's the other bit that I'm sure we'll be exploring a lot more today. When we're building DPI, and specifically when we're building DPI with DPGs, how do we ensure that countries have the local ecosystem to not only own, but also run, maintain and develop new use cases? I think a good example is Rwanda right now, where they're rolling out a number of use cases when it comes to DPI. And what they're also doing on the side is building what they're calling a DPI center.
And this DPI center is supposed to support ecosystem players, including developers, whether they're in government or the private sector, helping them understand the use cases and build the capacity to support and maintain some of the use cases that will be rolled out over the next few years. So I think that gives a bit of an example of how DPI has been rolled out in Africa, and, you know, I'm happy to explore this a little bit more. One thing I would mention is that there are also challenges, which I think are global, and we're seeing them very much in real time in Africa. These are mainly to do with safeguards. One of them that I can mention is data governance. When we're building these data exchange platforms, especially regional ones bringing in more than one country, we have very fragmented digital data governance frameworks. So this moves beyond just the platform itself: how do we bring in policymakers? How do we enable countries to harmonize some of the policies that are in place, to ensure that the platforms that have been built are safe and inclusive? So I will hand back over to you, John, and I'm happy to take any questions.


Jon Lloyd: Thank you very much, Desire. That was a really excellent insight into how Africa is approaching DPI development, especially using DPGs. Rahul, we'll move on to you now. We've heard so much about the India Stack, and I would like to hear more about how India has approached its DPI development and its approach to DPGs as well.


Rahul Matthan: Thank you. Thank you, John. So I almost feel I must apologize for India Stack, because we started this idea of a stack, which leaves the impression that you must necessarily layer first identity, then payments, and then data sharing. And really, it's not mandatory that you do it in that way. Even though we tend to think of the stack approach as a pathway by which you must progress up the chain of DPIs, I'm here to say that the real idea of a stack is that we are creating modular elements, DPGs, that can be layered on top of each other in whichever order you want, for whichever solution you want. That really is the stack approach that India has followed. Now, India happened to start with identity, then built a very powerful payment system, which is currently doing 18 billion transactions a month, and then built data sharing. But if you think sideways, India has also built DigiLocker, which is a very powerful credential system. And credentials are very useful for a number of things, from skilling, to government-to-person payment solutions, to all sorts of things. So the real India approach, I would say, is leaning into the definition of what DPI is: open, interoperable, modular systems. You can start wherever you want on the stack. You don't necessarily have to have a digital identity system; it'll help if you have one, but you don't have to. But you have to build these solutions to be modular, interoperable, and open, because it's only if you build them that way that you can really reach population scale, because you never have to rebuild something that you have built previously. And that is a really powerful statement of how digitalization needs to happen. Now, India has, of course, as you know, been doing this for 15 years.
So if you do it for a decade and a half, you have some time for introspection, and you can go back and see: what is this thing that you've built? And once you've figured out that you've built something which is pretty cool, you can then go and see what else it can be used for. One of the things that I've been playing with is this idea, connected to some of the things that Desire was talking about, that we've got to really sort out data governance. Because at the end of it all, we are unlocking a lot of data, and we need to do it in a safe manner. So how do you do it? The traditional lawyer in me says you've got to write laws, and you've got to build policies, and you've got to do it the way we've done it for many centuries. But the power of digital, where everything is digital, is that you can actually build some of that governance into the design of the architecture that you're using. Now, this is an idea that's not new. 25 years ago, Lawrence Lessig wrote a marvelous book called Code and Other Laws of Cyberspace, where he said that on the internet, code is law. At that point in time, the internet was a thin sliver of what we all do. Today, the internet and digital is everything that we do. I'm here in Norway, and I don't need to take out my wallet, which is remarkable, because in many cities, even in Europe, you have to. And I can travel from Oslo to Lillestrøm using just an app, know which train is coming, and when the train is canceled on me, I can very easily get the other train. It's remarkable that you can do all of this using an entirely end-to-end digital system. But when we do that, we've got to remember that that same digital system gives us the tools to build governance directly into the interactions that we have. This, to me, is the hidden secret of digital public infrastructure, because digital public infrastructure really is an infrastructure layer that you have created for transactions.
Laws are the offline way of telling us how transactions need to be conducted. And in an entirely online world, the laws that tell you how to transact can actually be written directly into the ways in which these different modules interact with each other. So the phrase I coined is that we're building an infrastructure on which regulators can regulate and innovators can innovate. This is different from the other infrastructures, built by innovators, on which only innovation can happen, on the terms of the people who control the platforms, while regulators are forced to fall back on the traditional old-world ways of writing laws and so on. But if the regulators can also participate as regulators on that infrastructure, they will be able to set the rules, and the innovators will be able to innovate on the same platform. And that really, I think, is the secret hidden message of digital public infrastructure.
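Matthan's idea of governance embedded directly into the architecture can be sketched in code. The following is purely illustrative, not any real India Stack or account-aggregator API: a hypothetical data-sharing platform where a regulator registers machine-readable rules (here, a consent requirement and a purpose whitelist) that the platform enforces on every transaction an innovator initiates, so both parties operate on the same infrastructure.

```python
from dataclasses import dataclass

# Hypothetical sketch: a data-sharing platform that enforces
# regulator-defined rules in code, so "regulators regulate and
# innovators innovate" on the same infrastructure.

@dataclass
class DataRequest:
    requester: str
    subject: str
    purpose: str
    has_consent: bool

class Platform:
    def __init__(self):
        self.rules = []  # rules registered by the regulator, as code

    def register_rule(self, name, check):
        """Regulator adds a rule: a predicate every request must pass."""
        self.rules.append((name, check))

    def share(self, request: DataRequest):
        """Innovators call this; each transaction is checked against
        every registered rule before any data moves."""
        for name, check in self.rules:
            if not check(request):
                return f"denied: {name}"
        return "approved"

platform = Platform()
# The regulator writes governance directly into the infrastructure:
platform.register_rule("consent required", lambda r: r.has_consent)
platform.register_rule("purpose must be whitelisted",
                       lambda r: r.purpose in {"credit-scoring", "kyc"})

ok = platform.share(DataRequest("fintech-app", "user-1", "kyc", True))
bad = platform.share(DataRequest("ad-broker", "user-1", "marketing", True))
print(ok)   # approved
print(bad)  # denied: purpose must be whitelisted
```

The design point is that a rule change by the regulator takes effect in every subsequent transaction without any innovator rewriting their application, which is the "code is law" property the passage describes.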


Jon Lloyd: Yeah, it’s very interesting, especially given India has been a real leader in the DPI space and a real example that a lot of countries are looking to. Henri, maybe we can move to you now. We’ve heard a lot about this idea of the Eurostack, although we know it’s several different things. I would like to hear more about the European approaches to digital sovereignty, the role of open source in that, and in particular this concept of the digital commons and how that aligns with the digital public goods agenda.


Henri Verdier: Okay, so thank you for having me. It’s a pleasure to exchange with my friends. I was listening to the first two speakers, and I feel that the perception of the DPI movement in Europe was immediate and very positive, maybe because we have a long tradition of, and we love, good specifications. You know, Europe is the birthplace of, I don’t know, the metric system, the ITU, a lot of open standards like Wi-Fi and Bluetooth, and a lot of open source like Linux, etc. We love it when the world is properly organized, so this idea that you can manage and plan, we love it. The second thing is that there is probably an ideological alignment, because we have a strong culture of public services, and maybe in the discussion we need to clarify that public service is not government; that’s not exactly the same thing. In France, the culture of the public servant is really strong, and they don’t consider that they are there to obey the minister. They have a service to deliver, and they deliver it. And probably a third connection is the idea that, when we see the digital world as it is today, if we want to protect an open, free, decentralized, vibrant internet, and to protect democracy, the right of the people to decide their collective future together, we need this layer of DPI to implement political and collective decisions on the big, open internet. So I think that we are ideologically aligned, but because the need to change was not the same, we didn’t go as fast as India, for example. I say very often: I use UPI in Bangalore, but in France you have been able to buy a baguette with a contactless payment with your phone, without any fee, for decades. But that’s a very old and imperfect system. You need a bank account, a credit card, an emulation of the credit card, the connection; so it’s a very complex system, but you can do it, and it’s basically free. So the need to change was not the same, until the growing concern regarding sovereignty.
So France was probably the most concerned with sovereignty, forever, because we are a bit Gaullist: we say we decide, we want strategic autonomy. I have to say that a lot of other Europeans were a bit less concerned. And then we have this movement, which is not just the Trump administration; we also have companies that become very, very big and heavy and start to act as political actors. The most obvious, of course, is Elon Musk, but if you look carefully you can see others. They want to decide the future of the world, of geopolitics and of national politics, as with Brazil and what Musk tried to do there. So we have a growing concern, and, to be frank, we are really, really heavily dependent on, and vulnerable to, American companies and American infrastructure everywhere. Do you remember, last month the American government decided to cut off the email of the prosecutor of the International Criminal Court? That’s something impressive. Because of this, we are progressing, slowly but firmly, toward another kind of digital public infrastructure, with all this culture of good organization and public service. For example, next year we will have the EU digital wallet, which is a very important step, because we will have the same standard on all European citizens’ phones to present and interact with a digital ID. And we have learned to separate some attributes from the identity. For example, I will be able to prove that I am more than 18 years old without giving my name, which would be compromising in a lot of situations. And now, in my last 30 seconds, I come to open source. We are very good at open source, but we are still European, and we are still 27 countries. So when you say to any European country, let’s build a common EU stack, all of them will say: of course, let’s take my solution. Let’s take X-Road, let’s take FranceConnect, let’s take... And we have probably lost 10 years because of this, because of a kind of competition between national solutions.
And now we are learning, thanks to the open source movements, to build buckets of interoperable solutions. For example, there is a Franco-German project, La Suite numérique, which is the desktop for public servants. This is not one solution; it is a series of modules, and you can add your own module. The group will test whether it is interoperable with the others, and the group will develop the interoperability if needed. So I will finish by saying that, from my perspective, the only way to a real EU stack is not one stack: it is a variety of solutions where, all together, we pay attention and we build interoperability. The EU stack will be a cloud, a nébuleuse as we say in French, of solutions. You will be able to decide: I take Matrix, or I take another one, but I know that when I enter this world, I can interact with all the others. And we are progressing fast; for one year, more or less, things have accelerated. And this is deeply connected to the DPG movement, because for this we need all those communities of open source and open standards developers, small companies,
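Henri’s description of La Suite numérique, a bundle of independent modules that the group tests for interoperability before admitting them, can be sketched as a conformance check. This is purely an illustrative sketch, not the project’s actual architecture: the `Module` interface, the method names, and the round-trip test are all assumptions.

```python
from typing import Protocol, runtime_checkable


# Hypothetical shared interface every module in the bundle must satisfy.
# The real Suite numerique contracts are not reproduced here; names are illustrative.
@runtime_checkable
class Module(Protocol):
    name: str

    def export_document(self, doc_id: str) -> bytes: ...
    def import_document(self, payload: bytes) -> str: ...


class DocsModule:
    """A candidate module someone wants to add to the bundle."""
    name = "docs"

    def export_document(self, doc_id: str) -> bytes:
        return f"contents of {doc_id}".encode()

    def import_document(self, payload: bytes) -> str:
        return payload.decode()


def conformant(module: object) -> bool:
    """Bundle-level check: does the module expose the shared interface?"""
    return isinstance(module, Module)


def round_trip_ok(module: Module) -> bool:
    """Interoperability test: data exported by a module must be importable again."""
    payload = module.export_document("doc-1")
    return module.import_document(payload) == "contents of doc-1"
```

The design choice this illustrates is the one Henri describes: the bundle is not one codebase but a set of solutions plus a shared test of interoperability, so any new module that passes the checks can join.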


Jon Lloyd: associations, et cetera. Excellent, thank you, Henri. I think you’ve touched on those issues around sovereignty in particular, which keep coming up; Desire and Rahul both covered that in their introductions as well. And Renata, let’s move to you. I’d like to hear more from the Latin American perspective as well, about how countries in LAC are approaching DPI.


Renata Avila: Yes, and I think it really connects well with the last bit from Henri, because I think the emphasis in Latin America is communities. And it is very interesting, because we started early: the vision of public infrastructure was being discussed in Latin America in the early 2000s. An example of that is that seven countries in Latin America have legislation for open source and open content, so those two pillars are not just words but actions in many countries. I can list them: Argentina, Brazil, Ecuador, Peru, Venezuela, Uruguay and Cuba. And I would say that the first thing that Latin American countries understood, together with “code is law”, is that technology is politics. We learned it the hard way: after some sanctions on some countries in Latin America, of course, you are shut out, you are cut off from vital things that you need to do your work. That accelerated a transition in the early 2000s, when Latin American countries were adopting some policies that were not welcomed by the main providers of technology and had to move; open source was the way. And the interesting thing is that it was institutionalized: there were units inside ministries in charge of this, and there were resources allocated to it. But in parallel, and this is very important because it didn’t last long, the institutionality, after transitions from left to right and back again, as happens in our very vital democracies, was in many cases discarded or defunded. What stayed was the community component, and that is the highlight of the continent: the communities around open source, the communities around free software, the communities around open content are very active. It’s very interesting: in Europe you see a lot of funding for community work and digital social innovation and so on; it’s not enough, but compared to Latin America... In Latin America it is
the volunteer work, you know: it’s what you do after work, what you do on weekends. You edit a Wikipedia article, you code and contribute to a collaborative platform. So what I would say is different is that in Latin America we are very good at digital public goods, but we haven’t yet jumped to the big digital public infrastructure plans, except in some specific cases. One case that is very exciting is Pix in Brazil, and it has even made it to Europe. I am at the moment living partially in Portugal, and in Lisbon I saw Pix: I could pay by Pix in Portugal. Brazilians can pay with Pix in Portugal, imagine that. And it is being adopted in Panama, Peru, Bolivia, Paraguay, Venezuela and Ecuador, and Argentina is doing a pilot. It makes sense, because Brazil is half of our continent, so all the border countries are making it easy to exchange. We are not Europe, we don’t have the euro; these frictions with currencies are difficult, and there is a lot of very complicated legislation because of money laundering and so on. So this is making a real difference. The other thing I want to highlight is the ugly duckling nobody speaks about in an exciting way: the geospatial infrastructure. Most of Latin America shares geospatial infrastructure, which is a digital public good, and that’s amazing; it really enables the work of many, many public offices. The other highlight is CKAN, which the Open Knowledge Foundation coded initially and which is now one of the default platforms in more than 20 countries in Latin America and the Caribbean, and DKAN as well. And in civic tech, Decidim: many of the Latin American participatory platforms and many other smaller civic tech projects follow that logic. I have already listed digital public goods, but one thing I do not want to forget, because it connects to India, is an emergent trend of South-South
cooperation, which is very, very exciting, in the frame of BRICS and CARICOM. India Stack has signed MOUs with Cuba, Colombia, Suriname, Trinidad and Tobago and, if you remember, Barbuda and Barbados. That’s practically a subregion: the Caribbean, plus Colombia, which is the only one on the mainland. It is very, very exciting to see how all of this connects; the Caribbean communities are also super strong in open source. So what I would say is the highlight of the region is that it’s a region that can make it. What will be different is that it is not only the techies and the regulators, but the community, ready to take active, informed participation in how you build your digital public infrastructure. So those are the exciting news from the region.


Jon Lloyd: Great, thank you so much. And we’re going to jump in now to a real question-and-answer component of this. We’d really like to have your feedback and questions; particularly online, make sure that you’re submitting those. We’ve heard a lot of common threads coming through here, and common use cases. One compelling thing that’s coming out is this idea of cross-border use cases, which I don’t necessarily think countries are considering when they’re implementing their own DPI, but the importance of interoperability between solutions is becoming more and more prominent. You mentioned it with Pix, which, funnily enough, you can now use in Europe. And, of course, Desire covered that as well, with just being able to access financial services using your digital ID. But Rahul, maybe I can jump to you quickly. What do you think are some of the big differences between the way that India has been approaching its DPI development and what you’ve been hearing from the other speakers?


Rahul Matthan: So look, I don’t think there’s much that’s different. I think, you know, we came from a different place. As Henri said, for India, and I think this is true of a lot of the global south, this is the only way to leapfrog development, so that we can, I think the statistic is, do in 10 years what would otherwise have taken us 50 years. And I think a lot of countries are seeing that. Now, France didn’t need to do that, because France had already put the 50 years in and was, in that sense, 50 years ahead. And for the rest of the developed world, this is not a necessity for many of the traditional elements of the stack. With digital identity, if you’ve got a very strong civil registry, you don’t really need to do what India did and get biometrics for the whole population; you just rely on your civil registry. If you’ve got a working payment system that reaches everyone in the country, you don’t need to go and build a Pix, because there’s already a way to do it. But for the countries of the global south, where they’re seeing that there’s a lot they have to achieve, this is a quick way to do it. Now, just to touch on interoperability across borders: India, as you know, is a subcontinent. So in many ways, of course, there is a need to interoperate with other countries, but we are a billion people, and we really need to focus first on interoperating inside and reaching everyone in the country, which is what the identity system did. The payment system, as big as it is, is only covering maybe 300 to 400 million people; we have a billion left to cover. So we have a long road ahead of us, even within the country. But I think there are many other elements where we can and must cooperate.
I’m going to put in a plug for climate, because using digital public infrastructure to solve the climate crisis is probably the most important, urgent innovation we can think of as the next step of digital public infrastructure. As I’m saying this, you may ask, what is he talking about? But let’s just agree: this is 10 years from the Paris Agreement, and the Paris Agreement isn’t really working. Temperatures are rising beyond the point where I think we can scale them back to meet our climate objectives. The approach of building grand consensus between countries is not working, because countries are not committing and people are walking out of the agreement. We’ve got to find a different way. And the one thing about DPI is that DPI unlocks abundance. We are stuck with data and valuable opportunity in silos that don’t connect to each other. And if we can just rethink the way in which we address climate challenges by connecting our silos, we will find that solving the climate problem does not cost the trillions of dollars it’s estimated to cost, because that is an old-world way of thinking about solutions. COP is in Brazil this year, and I strongly urge all of us to rethink the way we go about this, because DPI has shown a way through very, very big challenges like financial inclusion. I see no reason why that can’t be applied to something as important as climate change.


Jon Lloyd: And Renata, I’ll jump to you, because you mentioned this a little bit when you were speaking earlier: Latin Americans are sharing geospatial data, I think, in terms of addressing climate, using digital public goods. How have they been built and governed in a way that’s enabling this cooperation and collaboration?


Renata Avila: Well, with the GeoSUR project I don’t know the details of the governance, but cooperation across the region is usually very dynamic and fluid, especially on two issues: cross-border cooperation and health. In the pandemic, PAHO, for example, played a key... There are regional mechanisms that have enabled this cooperation. I think that two actors have played a crucial role: one is the Inter-American Development Bank, and the other is the regional mechanisms such as CELAC and the OAS, which make countries agree on general frameworks, of course not at the level of coordination of Europe, but close to it. I think that the lack of tailored solutions, and the lack of prioritization by the Global North of solutions specific to Latin America, has also accelerated that in the geospatial area. And one more thing that is very important: Latin America has been one of the pioneers, together with India actually, in opening knowledge and opening research as well. It’s a combination of data, knowledge and infrastructure that makes the region very ripe for more ambitious efforts around climate.


Jon Lloyd: Excellent. And speaking about regional ambitions, Desire: we’ve heard a lot, especially from the East African Community recently, and we know that there are commitments being made on this idea of cooperation and collaboration, so there’s a lot of political will there. That’s a thread I’ve heard coming through as well. But is there also a risk of moving too fast and not necessarily taking a multi-stakeholder approach, and what effects might that have on the inclusion or exclusion of people in terms of being able to access services?


Desire Kachenje: Thank you. Great question. So I think the way you’ve put it is actually very correct: there’s a lot of political will. And this is more specifically, I would say, when it comes to digital payments, even with the East African Community; it’s mostly around digital payments and how we can connect some of this existing digital infrastructure. Just for context, it’s worthwhile to understand that in Africa, DPI is looked at mainly as a way to solve some of the fragmentation issues, but also to bring inclusion, to solve for the digital divide. And digital payments took off more in the Africa region, I think, than in most other parts of the world, because it came from a need, a necessity: a lot of African citizens want to send small amounts of money to rural areas that probably don’t have network coverage and don’t have internet. That caused mobile banking to skyrocket, et cetera. And from there, it was easy to create interoperability, because the use case that needed to be there was very clear. Now, to answer your question, there are two risks that a lot of countries and a lot of regional operators are looking at. We already have PAPSS, the Pan-African Payment and Settlement System. We have TCIB for the southern region, and there are a number of others coming up. But one of the bigger questions has been that there are a lot of smaller amounts being sent across these countries, and the platforms, while the interoperability is there, are not really accommodating such amounts.
And so some of the work that is being done, and here I want to congratulate some of the DPGs in DPI right now who are focusing on working with some of these regional organizations, for example COMESA, is to say: hey, can we have a use case that just focuses on something like merchant payments, where these are your smaller payments, to enable smaller SMEs, or smaller amounts, to be sent within the region? This is really helping when it comes to inclusion. An example of this is that COMESA is currently working with Mojaloop, which is a DPG for payment platforms, on a trade platform that will allow smaller traders to send smaller amounts of money within the region. The other bit, I think we’ve mentioned, is that there are a lot of legacy systems existing in Africa currently. It’s been hard to transition these legacy systems to actual platforms, and it’s even harder to transition them in a way that uses the DPI approach, you know, making them interoperable, making them open. We are seeing right now that a lot of countries are taking a different look at DPGs and are more open to them, although we’re also seeing specific requests that, fine, let’s use an open source platform, but can we have the data on-prem, on the ground, which then brings a lot of questions around how we manage this and how we incorporate it. So I would say, when it comes to inclusion, it’s usually about looking after the low-income populations and the harder-to-reach populations. And the other bit is how we ensure that governments are comfortable with some of the open systems as they are built, and spend less time trying to customise them specifically for one country, because then, when we’re trying to connect regionally, we have to do more customisation, which can be quite expensive and time-consuming.


Jon Lloyd: Interesting. And Henri, I’m going to ask you a question about some of this regulation. You know, we’ve heard the Silicon Valley approach of move fast and break things; unfortunately, it seems like they’ve broken too much, perhaps. The rest of the world would often look to Europe in terms of tech regulation, the DSA, the AI Act, all of that kind of thing. But what we’re hearing here is that for DPGs and DPI there needs to be a lot of flexibility and openness in the approach. So from this European, or French, perspective, what are some of the non-negotiable elements that need to be in place in order to maintain that sense of innovation and collaboration, but that also, I guess, set the box for us all to play in?


Henri Verdier: Very interesting question; I’m building an answer. Yes, obviously, there are non-negotiable elements, and those are all the decisions of free democracies. So privacy, human dignity, free speech as we consider free speech should be, so it doesn’t allow you to call for killing people online, et cetera. If we cannot implement our collective decisions, we are not a democracy anymore. And that’s why sovereignty matters: you cannot conceive of a democracy without sovereignty. You can conceive of sovereignty without democracy, but you cannot conceive of democracy without sovereignty. Second observation, and here I will take the Indian vocabulary: you cannot regulate just with law. You need a techno-legal approach. By the way, you also need champions: if you’re not creative and innovative, if you don’t have research, intellectuals, creators, companies, startups, et cetera, you won’t impose your views. So you have to be part of the movement, and you have to conceive of a way to implement the regulations; that’s why we need a longer conversation regarding rules as code. But yes, it’s important to be sure that when you decide something, you know how to implement it. An interesting example: for years and years in Europe, we have been very concerned by age verification, because we know that very, very young children go to pornographic websites, and I’m speaking about the age of eight; ten percent of children less than eight years old have seen pornographic content online. But if you just say, we want to check the age, it doesn’t mean anything. The question is, how can we do this? And because we need to respect privacy, et cetera, the only solution is to have a proof of age separate from the identity. And for this, you need to conceive of an infrastructure that is efficient and auditable, because we also need to be sure that they are really separating that information, et cetera.
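The proof-of-age design Henri describes, an attribute separated from the identity, can be illustrated with a toy issuer/verifier split: the issuer sees the full record but emits a token carrying only the derived attribute, and the relying site verifies the token without ever learning a name. This is a deliberately simplified sketch with assumed function names; real EU wallet designs rely on public-key or zero-knowledge credential schemes with selective disclosure, not a bare HMAC key shared between issuer and verifier.

```python
import hashlib
import hmac
import json
import secrets

# Demo-only symmetric key. In a real deployment the issuer would sign with a
# private key (or a ZK credential scheme) and publish a verification key.
ISSUER_KEY = secrets.token_bytes(32)


def issue_age_token(birth_year: int, current_year: int) -> dict:
    """Issuer (e.g. a wallet backend) sees the full identity, but the token
    it emits carries ONLY the derived attribute, never the name or birth date.
    The year-based age check is a simplification (it ignores month and day)."""
    claim = {
        "over_18": current_year - birth_year >= 18,
        "nonce": secrets.token_hex(8),  # makes otherwise identical tokens unlinkable
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    tag = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "tag": tag}


def verify_age_token(token: dict) -> bool:
    """Verifier (e.g. a website) checks authenticity and the attribute,
    learning nothing about who the holder is."""
    payload = json.dumps(token["claim"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token["tag"]) and token["claim"]["over_18"]
```

The auditability Henri asks for corresponds to checking that `issue_age_token` really puts nothing identifying into the claim: the separation of attribute from identity is a property of what the issuer emits, which an auditor can inspect.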
Because your question is about regulation, which is also politics, I just want to add one last word. I was asking myself what we have in common, because we are from four continents, and we have a lot in common. And I was thinking that something we have in common is a bit hidden, because it’s so obvious that we don’t pay attention to it. In a nutshell, I think that we would all agree that a good society needs equal access to some basic public services, needs democracy and collective decision-making, needs free speech and free innovation for the market, et cetera. But this is not a real consensus. So we have this in common, and this is not what most of the companies of the Valley think. They are developing an infrastructure to capture our economies, to transform us into a kind of global Uber-driver economy within their platforms, where they take all the added value. And if you pay attention, there is a real theory; probably it started with Milton Friedman, but now they have more and more books, like The Network State, worth reading because that’s the program of Elon Musk: let’s replace the old-fashioned nation states with big vertical companies, one for education, one for... And that’s the big front line and the big battle. Can we still empower the people, through infrastructure and good governments, to respect dignity and innovation and everything, or do we have to live our lives within big corporations and their infrastructure? That may be one of the secret connections between all those movements.


Jon Lloyd: I’m just going to pause for a second here, and we’re going to launch a Mentimeter poll. Those of you online, there was applause in the room next to us, which we heard. So there is a Mentimeter poll here. If you’re able to join using this QR code or go to the link here in Menti, we’ll launch that in just a second. I think people are still taking photos of the QR code. It’s just coming up now. Okay. Let’s launch that poll. So we’ve heard a little bit about some of these challenges in DPI implementation from regional perspectives here, but in terms of scaling DPI globally, what do you think are the biggest challenges? We can see some of the answers coming in now. All of us panelists are watching them come in. Do we have multiple answers? It’s the biggest challenge, Henri. You had to choose just one. Yeah, these are quite interesting answers. I’ll just read out what the options are here. So we’ve got lack of interoperability, ensuring data privacy and security, addressing local capacity and agency, funding and resource allocation, and political will and governance. It looks like we’ve got a bit of a dead heat here between three of these options. I’ll just open it up to our panelists. What are your reactions to seeing this, Renata?


Renata Avila: You know what I love? That everybody deprioritized funding and resource allocation, because we are now well aware of all the money spent on tech monopolies that deliver nothing and don’t even pay taxes in our countries. So I think that’s clear; that’s consensus now. And when you were asking what we have in common: what we have in common is that we are squeezed, without options, and we really need to work together, all the people at this table, in reversing the lack of options and the heavy dependencies that we have on a system that is not delivering for democracy or for sovereignty. The second thing that is very interesting in the results is the need to address local capacity and agency, because, very quickly, something that I am aware of is that most of the local capacity programs, most of the programs training civil servants, are run by big tech from Silicon Valley. Most of it, most of it: basically training our civil service to think that they are the only solution available, ignoring all this rich ecosystem of possibilities that we could take. So those are my two comments.


Henri Verdier: Very briefly this time: I am the only one who voted for funding and resource allocation, because of the word allocation. First, it was very difficult to choose, because everything here is very important, but I feel that we need a better economic theory of the economic role of DPI. If we decided in Europe, a century ago, to make the postal service or whatever a public service, it was because it created so much value everywhere that it was quite impossible to capture that value everywhere. So I think that this is the best way to finance something that creates value everywhere: we need such a theory for the modern version of public service that is DPI.


Jon Lloyd: Do we still have Desire with us? I hope so, because, Desire, how are digital public goods addressing some of these issues around funding and resource allocation, given that Africa has historically been, and still is, a resource-constrained region? What is the role of digital public goods in helping to address that, in terms of DPI development and launch?


Desire Kachenje: Yes. So firstly, it’s very interesting for me to see this, because I really like that addressing local capacity is quite high up there. I think what we’re seeing here is that funding and resource allocation is still quite a huge issue, because for a lot of these other issues, addressing local capacity, political will and lack of interoperability, many African countries do need funding to solve them. So I think there’s a slight difference in outlook there. In terms of DPGs, I would say it’s a double-edged sword. On one side, a lot of DPGs already have easy-to-use source code that can be readily implemented within African countries. But at the same time, a lot of DPGs still need to build local capacity within the countries they’re operating in, which requires time, resources and funding. DPGs have done a great job in understanding the specific challenges and needs of different African countries and different approaches to DPI, and I think DPGs are also putting their hands up in terms of coming up with easier or more efficient ways to roll out. But at the same time, there’s the other side: a lot of DPGs are donor funded. So sustainability, maintaining some of the DPGs long term, is internally still something that needs to be discussed. And when they are rolling out in African countries, there is the customization, and the ability to remain sustainable past the project, especially because most of the projects, when it comes to DPIs that use DPGs, are donor funded. So sustainability past that donor-funded project is still a question. And then, last but not least, Jon, I really want to touch on data privacy and security, and I know it’s something that we’re discussing on and on.
But it’s such a crucial issue, even when it comes to working with DPGs, because there are some countries that still have not clearly defined what digital data should look like for specific populations. For example, when we’re looking at children, what is data privacy for children, especially when it comes to things like digital IDs or digital birth registrations? So while we have a number of DPGs, such as OpenCRVS, which are doing a great job when it comes to registrations, we’re seeing a bit of a struggle here for countries to then adapt to privacy and security issues, which can make a lot of projects halt midway, because then you will have CSOs coming in and saying, how safe is this? Yes. So I think that’s my quick take.


Jon Lloyd: Excellent. Yeah, that was extremely useful. We’re going to launch a second Mentimeter question now, because it relates a lot to what Desire was just speaking about. So here we go; hopefully you still have the Mentimeter link up from before. Which of these do you believe is most effective in ensuring countries build and evolve technologies based on their own priorities? Here’s the Mentimeter link, if you lost it before. Our options here: open source first principles; decentralized DPG governance models; promoting digital commons, which I think relates to the European agenda here; local talent development and training, to address some of those capacity issues; and then this idea of international funding, but with local control. So this is very interesting as well, in terms of open source first principles. I know, Renata, you spoke about many countries in Latin America having this open source first approach. And we’re seeing some of the answers changing here now as well. Oh, God, it’s very interesting. Also, if there’s anyone present with us who would like to ask a question, please feel free to come up and ask at the microphones on either side of the stage; we’d love to hear from you. Hey, Len, how are answers coming in, questions from online? Or we can take a look in a second. Great. And Rahul, maybe, as these results are coming in, what are your reflections on this?


Rahul Matthan: I mean, it’s interesting to see funding right at the bottom; I think this is repeating what we saw in the previous Mentimeter. And I think the idea really is that it doesn’t take a lot of funding. It’s not that funding is not important, but actually, DPIs are a relatively cheap way of doing this. I think it’s interesting to see local talent development right on top, because this is something we sometimes don’t fully grok, how important it is. You can build these wonderful platforms, but there is a last mile that needs to be implemented by government servants, NGOs, even ordinary citizens, and the development required to do that is actually non-trivial. We can’t automate everything away; no matter what you do with digital, there’s always that last mile. But even as I’m speaking, I’m seeing that we’ve got three tied for third place, and it’s clearly local talent and open source as the top two, which in many ways really aligns with the way I think about these things. These are really the two most important things we should be thinking about.


Jon Lloyd: And do you see the use of digital public goods in assisting with the local talent development and training?


Rahul Matthan: Of course. I mean, look, we can keep chipping away at the number of physical or non-digital steps that we need to take to build this. And certainly, in doing that, building DPG-type training and talent development solutions, using Sunbird and things like that, is extraordinarily powerful. But when I speak to DPI developments in other countries, and not just in my own, I find that this is the thing governments are most concerned about. It may just be a fear of the unknown, but a lot of governments are concerned about how much it’s going to take to actually roll this out in their countries. And that’s certainly something we can look to improve using DPG solutions, building DPIs even just for talent development. But we can’t ignore the fact that this is a concern, and it’s something that needs to be actively addressed.


Jon Lloyd: Thank you. And we have a gentleman here with a question. If you could just start by introducing yourself and then, yeah, and if it’s directed to the panel or anyone in particular.


Audience: Okay, thank you very much. My name is Israel Rosas. I’m with the Internet Society, but this question is in my personal capacity. I’ve seen that digital public infrastructure is quite prominent in the Global Digital Compact, for instance. And now that we are discussing how to integrate Global Digital Compact implementation into the WSIS Plus 20 review process, I’m curious about the framing of decentralized governance models for DPI. What would be the panel’s impression of adopting a truly multi-stakeholder governance approach for DPI, instead of just decentralized governance models? I think there are slight but important differences, so it would be interesting to hear your thoughts. It’s a broad question for the panel. Thank you.


Rahul Matthan: I mean, look, I’m a huge fan of decentralized anything, but we’ve got to realize that we all need to cleave to a certain set of common principles. So, once again, picking up on Henri’s point, the concern with a lot of this is sovereignty. And one of the challenges is that if you’re utterly decentralized, in the process of being completely decentralized and multi-stakeholder you can lose some of the sovereignty requirements that you individually need. So I like the Global Digital Compact, and I’m part of the DPI safeguards framework; all of these are saying, let’s build some principles that we all agree with, and then let’s leave it to countries to develop the bespoke governance frameworks that are appropriate for their context and what they want to achieve. I think that’s the combination we need. As Henri said, what is common to all of us? There are many things that are common to all of us, and we must absolutely adhere to those, because that’s the reason we can all meet at places like this and exchange views in a language we all understand. But at the same time, we’ve got to recognize that we as nations and as sub-national institutions have our own objectives. Some of those objectives in India, in Africa, in Latin America are very different from what Europe and North America want to achieve and can achieve. It’s not wrong; those are just differences we’ve got to recognize. And part of the way we recognize the difference is to also recognize the commonalities and say: as long as we share what’s common, you can be different. Maybe that’s what you’re trying to say, because multi-stakeholderism is also that. But I fear we lose it if it’s not grounded on common principles. And the common principles are important.


Henri Verdier: If I can add one word. As Rahul said at the beginning, an EU stack or an India stack doesn’t have to be a cathedral. It can be diverse, modular, and so on. Some things are very much what we call “régalien” in French, sovereign functions: the source of citizenship is the state, and you cannot crowdsource citizenship. But you can build important parts in a very multi-stakeholder approach, with a new form of cooperation between the state, civil society, and public goods. For example, in France, before being the ambassador, I was the head of the IT department for the government. I built a strong partnership between the National Geographic Institute and OpenStreetMap, and now we deliver some important public services in cooperation between the two. We also built our own instant messaging system, Tchap, on Matrix. We simply asked Matrix to develop some features, we financed them, and they implemented them as they wanted. So for a lot of important parts of this you can be completely multi-stakeholder in governance and development, and for other parts you probably cannot, because the state has a role to play.


Renata Avila: I wrote a paper precisely about that with some colleagues, and we suggest, instead of a multi-stakeholder approach, a commons-based governance approach for digital public infrastructure. Beyond national uses, it is the only way to help scale and localize digital commons efforts. It increases transparency and accountability, it accelerates impact, it reduces governance costs and even localization frictions, and, most importantly, it secures community engagement: even if the government changes, you have people actively involved in the governance of infrastructures that are of common benefit.


Jon Lloyd: We have a question from the chat here that I will just address, and it has a lot to do with inclusion, specifically around access. And Desire, I’m going to look to you, because we’ve heard examples from digital public goods, for example Mojaloop, which is a digital payment system that many people access on feature phones rather than smartphones. But the question here is: with 2.6 billion people not using the internet, any comments on how to overcome the divide?


Desire Kachenje: I was actually just reading that same question and asking myself the same thing. This goes beyond payment systems. Even with something like ID, one of the key questions you get from African citizens, and we were talking about this at ID4Africa a month ago, is: I don’t have internet, so why do I even need a digital ID, and what does that look like? Why not focus instead on providing internet access to rural areas? So yes, it’s a big question, and one of the things I’ve seen a lot of DPGs trying to address is creating other ways of access for the use cases being deployed. Like you mentioned, Mojaloop provides access for feature phones. If you look at data exchange platforms, some are built on USSD, which is very common across African countries. Governments have also really tried to create systems whereby local communities have spaces within their proximity with desktops and internet access, where all of these digital platforms can be used. If you walk around some of the revenue authorities across African countries, they have desktops and access points where you can still use the digital platform at their office. A lot of countries are trying to do that, but I think Joseph is raising a very significant question that goes beyond just rolling out DPI use cases: access to the internet is still a prominent challenge in many African countries, and in other countries too.
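The USSD route Desire describes can be sketched in a few lines. This is a hypothetical illustration of the request/response pattern many USSD gateways use (the `CON`/`END` prefixes follow a widespread gateway convention for "continue session" versus "end session"; the menu and the services behind it are invented, not taken from any real deployment):

```python
# Hypothetical sketch: exposing a DPI service over USSD so that
# feature phones without internet access can reach it. A gateway
# forwards the '*'-separated digits the user has entered so far;
# the handler returns either a CON (keep session open) or END
# (terminate session) response.

def handle_ussd(session_input: str) -> str:
    """Route a USSD input string like '' or '2*4521' to a response."""
    steps = [s for s in session_input.split("*") if s]
    if not steps:  # first dial, e.g. the user dialed *123#
        return "CON Welcome\n1. Check ID status\n2. Pay merchant"
    if steps[0] == "1":
        return "END Your digital ID is active."
    if steps[0] == "2":
        if len(steps) == 1:
            return "CON Enter merchant code:"
        return f"END Payment request sent to merchant {steps[1]}."
    return "END Invalid option."

print(handle_ussd(""))        # main menu
print(handle_ussd("1"))       # ID status lookup
print(handle_ussd("2*4521"))  # two-step payment flow
```

The same backend service can then sit behind both a smartphone app and this USSD front end, which is the kind of multi-channel access Desire points to.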


Jon Lloyd: Sorry, you just broke up there, and I think that illustrates this point a little bit in terms of access. We’ve only got a little bit of time left, thank you, Desire. Just one final question before we continue. We’ve heard so much leading up to this about e-gov and all of that kind of thing, and now it’s the DPI approach. Just very quickly, are there any differences? Are we talking about the same thing? Is the DPI approach, particularly using DPGs, a fundamentally new way of thinking?


Rahul Matthan: I mean, look, I hate to use the Shakespearean phrase, “a rose by any other name,” but I really don’t care what we call it as long as it is open, modular and interoperable. To me, we are trying to call the same thing by different names, and I’m not going to hang my hat on a particular name. We’ve got to achieve the same thing no matter where in the world we are, no matter what we call it. If we stop saying, as Henri said, my solution is best, and instead find a way to say, look, these are all solutions with the same common ideas, let’s find a way to make them work, because countries have built entire infrastructures on a particular solution. There’s no point saying that it’s a bad solution; you’ve got to find a way to make it work with whatever you’ve got, because we are now moving to a multi-stakeholder world where these systems have to work with each other. I want to use UPI in Brazil the same way PIX is being used in Portugal, but I can’t. I want to use UPI here in Norway, but I can’t. We’ve got to stop worrying about which one it is, as long as we can make them interoperable. I just want to pick up on that last question, which was around the need for the internet. I don’t want to ignore the fact that a large population has been denied access to some of these miracles because they don’t have the internet, but at the same time I don’t want that to be a reason for us to stop building DPIs until the whole world is connected, because we can’t do that either. We have to push this out. I realized to my horror that there are parts of Canada that don’t have 24/7 electricity, and electricity is a hundred-year-old technology. If we wait for every last person on the planet to be connected, whether Elon Musk does it from space or we do it on the ground, it’s too late. What we have to do is ensure that access to DPIs is not denied because of a lack of connectivity, and there are many ways to do that. We can build offline solutions.
All we’re saying is that this is a digital public infrastructure. We’re not saying it’s an internet-driven digital public infrastructure, though that’s how it’s delivered in a lot of places. We’re saying lean into the power of digital, and the way you access digital could be the internet, it could be QR codes, it could be what we call phygital, half physical, half digital, it could be online-offline; there are many, many solutions. When India rolled out Aadhaar, we did not have internet across the entire country we were reaching. Many people went out, enrolled people offline, came back to wherever they had connectivity, and uploaded it to the server. We still do that with a lot of different technologies. Africa can do it; there are parts of the world, even in the developed world, where you don’t have wonderful connectivity. That does not mean you stop building your DPI, because we’re not saying it is internet driven. We’re saying lean into digital.
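The offline enrollment pattern Rahul describes, capture records in the field and upload them when connectivity returns, can be sketched as a simple store-and-forward queue. This is a minimal, hypothetical sketch; the field names and the upload callable are invented, and a real system would add durable storage, signing, encryption, and server-side deduplication:

```python
# Hypothetical store-and-forward sketch of offline enrollment:
# records are captured locally with no connectivity, then uploaded
# in batch when a connection becomes available.

import json

class OfflineEnrollmentQueue:
    def __init__(self):
        # In-memory list stands in for durable on-device storage.
        self._pending = []

    def enroll(self, record: dict) -> None:
        # Capture locally; no network is needed at this point.
        self._pending.append(json.dumps(record))

    def sync(self, upload) -> int:
        # Called once the enrollment kit reaches connectivity.
        # `upload` is any callable that sends one record to the server.
        sent = 0
        while self._pending:
            upload(json.loads(self._pending[0]))
            self._pending.pop(0)  # drop only after a successful send
            sent += 1
        return sent

# Simulated use: two enrollments in the field, one batch upload later.
server = []  # stands in for the central registry endpoint
q = OfflineEnrollmentQueue()
q.enroll({"name": "A. Citizen", "village": "X"})
q.enroll({"name": "B. Citizen", "village": "Y"})
print(q.sync(server.append))  # number of records uploaded
```

The design choice worth noting is that a record leaves the local queue only after its upload succeeds, so an interrupted sync can simply be retried later without losing enrollments.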


Jon Lloyd: On that note, I think we need to wrap it up. So thank you especially to our speakers, Desire, Rahul, Henri and Renata, for your insights here. What we’re hearing is that despite these nuanced regional approaches, the things that really count are openness, interoperability, the ability to work together, building out your local vendor ecosystems, capacity development, and the importance of political will. Essentially, we have a shared ambition for inclusive and interoperable DPI, and digital public goods are coming through as a way to ensure that that can happen. One of the things that came up in the Mentimeter was this idea of open source policies. I’m just going to plug a survey here: 24 members of the Digital Public Goods Alliance have put together an open source policies and practices survey. We would love as many people as possible to fill it in. Even if you don’t have an open-source-first policy, it’s really useful to have your insights, and the learnings from it will enable countries and organizations to learn from one another and implement digital public goods. I’ll also mention that, in addition to the survey, we will have some actionable policy recommendations coming out of this for strengthening things like local capacity and agency. We encourage you to continue to engage. Do we have to make your survey on a Google Doc? Really? Really? I was hoping that wouldn’t come up. Next time. Our rapporteur Max is going to synthesize the key takeaways from this session, and we’ll upload that to the IGF session page. Thanks for joining us. Thank you again. We look forward to continuing this important work together. We ended slightly early. Thank you again so much.


D

Desire Kachenje

Speech speed

190 words per minute

Speech length

2799 words

Speech time

882 seconds

Africa’s approach is government-driven but ecosystem-enabled, with Tanzania building DPI layers using both DPGs and in-house platforms

Explanation

African DPI development involves government leadership while enabling broader ecosystem participation. Tanzania exemplifies this by building identity systems, interoperable payment platforms, and data exchange systems using a mix of digital public goods like X-Road and in-house solutions, while engaging private sector partners.


Evidence

Tanzania built ID systems, connected mobile operators and banks, developed Jamii data exchange platform using X-Road DPG, created Jamii wallet using digital local DPG, and partnered with private sector for institutional data exchange


Major discussion point

Regional Approaches to Digital Public Infrastructure (DPI) Development


Topics

Development | Infrastructure | Economic


Challenge-driven DPI implementations have better adoption rates than technology-first approaches, as seen in SADC region’s cross-border financial inclusion project

Explanation

DPI projects that start with addressing specific challenges rather than building technology first achieve higher adoption rates from citizens and government institutions. The SADC region’s approach to connecting national IDs for immigrant financial services demonstrates this principle by addressing a real need for formal financial inclusion.


Evidence

SADC group of 16 countries working on connecting national foundational IDs to support immigrants accessing formal financial services, with engagement from banks, central banks, and other stakeholders


Major discussion point

Digital Public Goods (DPGs) Implementation and Challenges


Topics

Development | Economic | Infrastructure


DPGs provide cost-effective alternatives but require local capacity building and sustainable funding models beyond donor-funded projects

Explanation

While DPGs offer accessible source code for easy implementation, they still need significant investment in local capacity building and face sustainability challenges when donor funding ends. Countries need to develop local ecosystems to own, run, maintain and develop new use cases.


Evidence

Rwanda building a DPI center to support ecosystem players including developers in government and private sector to understand and maintain use cases; most DPI projects using DPGs are donor funded with sustainability questions remaining


Major discussion point

Digital Public Goods (DPGs) Implementation and Challenges


Topics

Development | Capacity development | Economic


Agreed with

– Rahul Matthan
– Jon Lloyd

Agreed on

Local capacity building is crucial for DPI success


Disagreed with

– Henri Verdier
– Renata Avila

Disagreed on

Funding priorities and economic theory for DPI


Regional payment systems like PAPSS and TCIB are being developed but need to accommodate smaller transaction amounts for true inclusion

Explanation

Existing regional payment systems in Africa have interoperability but don’t adequately serve smaller transactions that are common among low-income populations and SMEs. New initiatives are focusing on merchant payments and smaller amounts to improve inclusion.


Evidence

PAPSS (Pan-African Payment and Settlement System) and TCIB for the southern region exist but don’t accommodate small amounts; COMESA is working with the Mojaloop DPG on a trade platform allowing smaller traders to send smaller amounts within the region


Major discussion point

Cross-Border Interoperability and Cooperation


Topics

Economic | Development | Inclusive finance


2.6 billion people without internet access represents a significant challenge, but DPI development shouldn’t wait for universal connectivity

Explanation

The digital divide affects billions of people who question the value of digital services without internet access. However, solutions can be developed using alternative access methods like USSD systems and physical access points at government offices.


Evidence

Citizens asking why they need digital ID without internet access; DPGs creating access through feature phones, USSD systems; governments providing desktop access points at revenue authority offices


Major discussion point

Digital Inclusion and Access Challenges


Topics

Development | Digital access | Infrastructure


R

Rahul Matthan

Speech speed

172 words per minute

Speech length

2719 words

Speech time

943 seconds

India’s stack approach creates modular, interoperable elements that can be layered in any order, not necessarily following the identity-payments-data sequence

Explanation

The India stack concept is about creating modular DPGs that can be combined in any order for different solutions, rather than a mandatory progression through identity, payments, and data sharing layers. The key principles are building open, interoperable, and modular systems that enable population-scale solutions without rebuilding existing components.


Evidence

India built identity first, then payments (18 billion transactions monthly), then data sharing, but also built DigiLocker credentials system; 15 years of development experience; payment system covers 300-400 million people with a billion left to reach


Major discussion point

Regional Approaches to Digital Public Infrastructure (DPI) Development


Topics

Infrastructure | Digital standards | Economic


Agreed with

– Henri Verdier

Agreed on

Modular and interoperable approach to DPI development


DPI enables ‘regulators to regulate and innovators to innovate’ on the same platform through techno-legal approaches rather than traditional law-only regulation

Explanation

Digital public infrastructure allows governance to be built directly into system architecture rather than relying solely on traditional laws. This creates an infrastructure where both regulatory compliance and innovation can happen simultaneously on the same platform, unlike private platforms where only innovation occurs on the platform owner’s terms.


Evidence

Reference to Lawrence Lessig’s ‘Code and Other Laws of Cyberspace’ from 25 years ago; example of end-to-end digital systems in Norway for payments and transportation; contrast with traditional offline law-making approaches


Major discussion point

Governance and Sovereignty Concerns


Topics

Legal and regulatory | Infrastructure | Digital standards


Agreed with

– Henri Verdier

Agreed on

Digital sovereignty requires technical capability to implement democratic decisions


Disagreed with

– Audience
– Renata Avila
– Henri Verdier

Disagreed on

Governance models for DPI – Multi-stakeholder vs Commons-based vs Sovereignty-focused approaches


DPI can unlock abundance by connecting data silos and should be applied to climate challenges as a more cost-effective alternative to traditional approaches

Explanation

Digital public infrastructure can address climate change by connecting isolated data and opportunity silos, potentially solving climate problems without the trillions of dollars estimated using traditional methods. This represents a new approach compared to the failing consensus-building model of agreements like the Paris Agreement.


Evidence

10 years since Paris Agreement with temperatures rising beyond climate objectives; countries not committing and walking out of agreements; COP in Brazil this year; DPI has shown success with financial inclusion challenges


Major discussion point

Climate Change and DPI Applications


Topics

Development | Sustainable development | Infrastructure


DPI represents a relatively cheap alternative to traditional development approaches, enabling 10 years of progress in what would otherwise take 50 years

Explanation

For Global South countries, DPI offers a leapfrog development opportunity that dramatically accelerates progress compared to traditional development timelines. This is particularly valuable for countries that haven’t already invested decades in building traditional infrastructure systems.


Evidence

Statistics showing 10 years of progress versus 50 years through traditional methods; contrast with developed countries like France that already invested 50 years in infrastructure development


Major discussion point

Economic and Funding Considerations


Topics

Development | Economic | Infrastructure


Agreed with

– Renata Avila

Agreed on

Funding is not the primary constraint for DPI development


DPI can use various access methods including QR codes, ‘phygital’ (physical-digital) solutions, and offline-online hybrid approaches

Explanation

Digital public infrastructure doesn’t require universal internet connectivity and can be implemented through multiple access methods. Solutions can be designed as half physical, half digital, or use offline enrollment with later online uploading to serve populations without consistent internet access.


Evidence

India’s Aadhaar enrollment done offline in areas without internet, then uploaded when connectivity available; parts of Canada lacking 24/7 electricity despite 100-year-old technology; various access methods beyond internet-driven solutions


Major discussion point

Digital Inclusion and Access Challenges


Topics

Development | Digital access | Infrastructure


Agreed with

– Desire Kachenje
– Jon Lloyd

Agreed on

Local capacity building is crucial for DPI success


H

Henri Verdier

Speech speed

146 words per minute

Speech length

1827 words

Speech time

748 seconds

Europe has ideological alignment with DPI due to strong public service culture and concerns about digital sovereignty, leading to interoperable solutions rather than one unified stack

Explanation

Europe’s tradition of public services, open standards, and specifications creates natural alignment with DPI principles. However, competition between national solutions has delayed progress, leading to a new approach of building interoperable modules rather than a single unified system. The EU stack will be a cloud of interoperable solutions where users can choose components while maintaining compatibility.


Evidence

Europe created the metric system, the ITU, and open standards like Wi-Fi, Bluetooth, and Linux; EU digital wallet launching next year with attribute separation capabilities; Franco-German project ‘La Suite numérique’ with a modular approach; 10 years lost due to national competition


Major discussion point

Regional Approaches to Digital Public Infrastructure (DPI) Development


Topics

Infrastructure | Digital standards | Legal and regulatory


Agreed with

– Rahul Matthan

Agreed on

Modular and interoperable approach to DPI development


Digital sovereignty requires the ability to implement collective democratic decisions through technology infrastructure, not just technical independence

Explanation

True digital sovereignty means having the technical capability to enforce democratic decisions and collective choices through digital infrastructure. Without this capability, democracies cannot function effectively because they cannot implement their decisions, making sovereignty essential for democracy to exist.


Evidence

Non-negotiable elements include privacy, human dignity, free speech as defined by democracies; example of age verification requiring proof of age separate from identity; American government cutting email access to International Criminal Court prosecutor


Major discussion point

Governance and Sovereignty Concerns


Topics

Human rights | Legal and regulatory | Infrastructure


Agreed with

– Rahul Matthan

Agreed on

Digital sovereignty requires technical capability to implement democratic decisions


Disagreed with

– Audience
– Renata Avila
– Rahul Matthan

Disagreed on

Governance models for DPI – Multi-stakeholder vs Commons-based vs Sovereignty-focused approaches


Better economic theory needed for DPI’s role as public service that creates value everywhere, similar to historical postal services

Explanation

DPI requires a new economic framework similar to how postal services were established as public services a century ago because they created value throughout society. The challenge is developing economic theory that recognizes DPI’s value creation across all sectors and justifies public investment in infrastructure that benefits everyone.


Evidence

Historical decision to make postal service public because it created value everywhere; difficulty in capturing value everywhere through private means; need for modern version of public service theory for DPI


Major discussion point

Economic and Funding Considerations


Topics

Economic | Infrastructure | Legal and regulatory


Disagreed with

– Renata Avila
– Desire Kachenje

Disagreed on

Funding priorities and economic theory for DPI


R

Renata Avila

Speech speed

141 words per minute

Speech length

1359 words

Speech time

577 seconds

Latin America emphasizes community-driven approaches with strong open source legislation in seven countries and active volunteer communities maintaining digital public goods

Explanation

Latin America has institutionalized open source and open content through legislation in seven countries, but the lasting strength comes from volunteer communities that maintain digital public goods beyond political transitions. These communities continue their work after hours and on weekends, providing stability that transcends government changes.


Evidence

Argentina, Brazil, Ecuador, Peru, Venezuela, Uruguay, and Cuba have open source/open content legislation; volunteer communities edit Wikipedia, code, and contribute to collaborative platforms; institutionalized units were defunded after political transitions but communities remained active


Major discussion point

Regional Approaches to Digital Public Infrastructure (DPI) Development


Topics

Legal and regulatory | Development | Sociocultural


Brazil’s PIX payment system demonstrates successful cross-border expansion, now usable in Portugal and being adopted across Latin American countries

Explanation

PIX represents a successful regional DPI expansion that addresses currency friction challenges across Latin America. The system’s adoption by border countries makes practical sense given Brazil’s size and the complexity of currency exchanges and money laundering regulations in the region.


Evidence

PIX usable in Portugal for Brazilian payments; adoption in Panama, Peru, Bolivia, Paraguay, Venezuela, Ecuador; Argentina doing pilot; Brazil represents half the continent; addresses currency friction and complex money laundering legislation


Major discussion point

Cross-Border Interoperability and Cooperation


Topics

Economic | Infrastructure | E-commerce and Digital Trade


India’s UPI system shows potential for international cooperation, with agreements signed with Caribbean and South American countries through South-South cooperation

Explanation

India’s digital payment infrastructure is expanding internationally through South-South cooperation agreements, particularly in the Caribbean region and parts of South America. This represents an emerging trend of developing countries sharing DPI solutions with each other rather than relying solely on developed country technologies.


Evidence

India stack signed MOUs with Cuba, Colombia, Suriname, Trinidad and Tobago, and Barbados; represents practically a subregion with strong open source communities in the Caribbean


Major discussion point

Cross-Border Interoperability and Cooperation


Topics

Economic | Infrastructure | Development


Commons-based governance approach for DPI is preferable to traditional multi-stakeholder models as it ensures community engagement beyond government changes

Explanation

A commons-based governance model for DPI provides better continuity and community involvement than traditional multi-stakeholder approaches. This model increases transparency, accountability, accelerates impact, reduces governance costs, and maintains community engagement even when governments change, ensuring infrastructure sustainability.


Evidence

Paper written with colleagues on commons-based governance; benefits include increased transparency and accountability, accelerated impact, reduced governance costs and localization frictions, and secured community engagement


Major discussion point

Governance and Sovereignty Concerns


Topics

Legal and regulatory | Development | Sociocultural


Disagreed with

– Audience
– Rahul Matthan
– Henri Verdier

Disagreed on

Governance models for DPI – Multi-stakeholder vs Commons-based vs Sovereignty-focused approaches


Funding is deprioritized because of awareness of money wasted on tech monopolies that don’t deliver value or pay taxes locally

Explanation

There’s growing consensus that funding isn’t the primary constraint for DPI development because people recognize the massive amounts spent on technology monopolies that provide little value and avoid paying taxes in the countries where they operate. This awareness has shifted focus away from funding as the main barrier.


Evidence

Poll results showing funding and resource allocation at the bottom of priorities; observation about money spent on tech monopolies that deliver nothing and don’t pay taxes locally


Major discussion point

Economic and Funding Considerations


Topics

Economic | Taxation | Development


Agreed with

– Rahul Matthan

Agreed on

Funding is not the primary constraint for DPI development


Disagreed with

– Henri Verdier
– Desire Kachenje

Disagreed on

Funding priorities and economic theory for DPI


Latin America’s shared geospatial infrastructure demonstrates successful regional cooperation on climate-related data sharing

Explanation

The Geosur project represents successful regional collaboration in sharing geospatial data across Latin America, facilitated by regional mechanisms and organizations. This cooperation has been particularly effective in cross-border and health issues, with the pandemic accelerating collaboration through organizations like PAHO.


Evidence

Geosur project for shared geospatial infrastructure; cooperation facilitated by Inter-American Development Bank, CELAC, and OAS; PAHO played key role during pandemic; combination of data, knowledge and infrastructure makes region ready for climate efforts


Major discussion point

Climate Change and DPI Applications


Topics

Development | Sustainable development | Infrastructure


J

Jon Lloyd

Speech speed

153 words per minute

Speech length

2447 words

Speech time

956 seconds

Open source first principles and local talent development are most effective for ensuring countries build technologies based on their priorities

Explanation

Based on poll results from the session, open source first principles and local talent development emerged as the top priorities for enabling countries to develop technologies according to their own needs and priorities. This approach ensures greater autonomy and capacity building compared to other alternatives like international funding or decentralized governance models.


Evidence

Mentimeter poll results showing open source first principles and local talent development as top two responses; funding ranked at the bottom consistently across multiple polls


Major discussion point

Digital Public Goods (DPGs) Implementation and Challenges


Topics

Development | Capacity development | Infrastructure


Agreed with

– Desire Kachenje
– Rahul Matthan

Agreed on

Local capacity building is crucial for DPI success


A

Audience

Speech speed

168 words per minute

Speech length

120 words

Speech time

42 seconds

Multi-stakeholder governance approach should be adopted for DPI instead of just decentralized governance models

Explanation

The audience member from Internet Society suggests that truly multi-stakeholder governance approaches for DPI would be more effective than simply decentralized models. They emphasize there are important differences between these approaches that should be considered in the context of implementing the Global Digital Compact and WSIS Plus 20 review process.


Evidence

Reference to Global Digital Compact implementation and WSIS Plus 20 review process; distinction between decentralized and multi-stakeholder approaches


Major discussion point

Governance and Sovereignty Concerns


Topics

Legal and regulatory | Development | Infrastructure


Disagreed with

– Renata Avila
– Rahul Matthan
– Henri Verdier

Disagreed on

Governance models for DPI – Multi-stakeholder vs Commons-based vs Sovereignty-focused approaches


Agreements

Agreement points

Modular and interoperable approach to DPI development

Speakers

– Rahul Matthan
– Henri Verdier

Arguments

India’s stack approach creates modular, interoperable elements that can be layered in any order, not necessarily following the identity-payments-data sequence


Europe has ideological alignment with DPI due to strong public service culture and concerns about digital sovereignty, leading to interoperable solutions rather than one unified stack


Summary

Both speakers emphasize that DPI should be built as modular, interoperable components that can be combined flexibly rather than following rigid sequential approaches or creating monolithic unified systems


Topics

Infrastructure | Digital standards


Funding is not the primary constraint for DPI development

Speakers

– Rahul Matthan
– Renata Avila

Arguments

DPI represents a relatively cheap alternative to traditional development approaches, enabling 10 years of progress in what would otherwise take 50 years


Funding is deprioritized because of awareness of money wasted on tech monopolies that don’t deliver value or pay taxes locally


Summary

Both speakers agree that funding constraints are overemphasized, with DPI offering cost-effective alternatives and growing awareness that money spent on tech monopolies has been wasteful


Topics

Economic | Development


Local capacity building is crucial for DPI success

Speakers

– Desire Kachenje
– Rahul Matthan
– Jon Lloyd

Arguments

DPGs provide cost-effective alternatives but require local capacity building and sustainable funding models beyond donor-funded projects


DPI can use various access methods including QR codes, ‘phygital’ (physical-digital) solutions, and offline-online hybrid approaches


Open source first principles and local talent development are most effective for ensuring countries build technologies based on their priorities


Summary

All speakers emphasize that building local capacity and talent development is essential for sustainable DPI implementation, regardless of the technical approach used


Topics

Development | Capacity development | Infrastructure


Digital sovereignty requires technical capability to implement democratic decisions

Speakers

– Henri Verdier
– Rahul Matthan

Arguments

Digital sovereignty requires the ability to implement collective democratic decisions through technology infrastructure, not just technical independence


DPI enables ‘regulators to regulate and innovators to innovate’ on the same platform through techno-legal approaches rather than traditional law-only regulation


Summary

Both speakers agree that true digital sovereignty means having the technical infrastructure to enforce democratic decisions and regulatory frameworks, not just independence from foreign technology


Topics

Legal and regulatory | Infrastructure | Human rights


Similar viewpoints

Both speakers emphasize community-driven and challenge-focused approaches to DPI development, prioritizing real user needs and community engagement over technology-first implementations

Speakers

– Desire Kachenje
– Renata Avila

Arguments

Challenge-driven DPI implementations have better adoption rates than technology-first approaches, as seen in SADC region’s cross-border financial inclusion project


Latin America emphasizes community-driven approaches with strong open source legislation in seven countries and active volunteer communities maintaining digital public goods


Topics

Development | Sociocultural | Legal and regulatory


Both speakers highlight successful examples of South-South cooperation in DPI, showing how developing countries can share and adapt each other’s digital infrastructure solutions

Speakers

– Renata Avila
– Rahul Matthan

Arguments

India’s UPI system shows potential for international cooperation, with agreements signed with Caribbean and South American countries through South-South cooperation


Brazil’s PIX payment system demonstrates successful cross-border expansion, now usable in Portugal and being adopted across Latin American countries


Topics

Economic | Infrastructure | Development


Both speakers agree that lack of universal internet connectivity should not prevent DPI development, and that alternative access methods can bridge the digital divide

Speakers

– Rahul Matthan
– Desire Kachenje

Arguments

DPI can use various access methods including QR codes, ‘phygital’ (physical-digital) solutions, and offline-online hybrid approaches


2.6 billion people without internet access represents a significant challenge, but DPI development shouldn’t wait for universal connectivity


Topics

Development | Digital access | Infrastructure


Unexpected consensus

Deprioritization of funding as main constraint

Speakers

– Rahul Matthan
– Renata Avila
– Henri Verdier

Arguments

DPI represents a relatively cheap alternative to traditional development approaches, enabling 10 years of progress in what would otherwise take 50 years


Funding is deprioritized because of awareness of money wasted on tech monopolies that don’t deliver value or pay taxes locally


Better economic theory needed for DPI’s role as public service that creates value everywhere, similar to historical postal services


Explanation

Unexpectedly, speakers from different regions (India, Latin America, Europe) all agreed that funding is not the primary barrier to DPI development, contrary to common assumptions about resource constraints in developing countries


Topics

Economic | Development | Infrastructure


Commons-based governance over traditional multi-stakeholder approaches

Speakers

– Renata Avila
– Henri Verdier

Arguments

Commons-based governance approach for DPI is preferable to traditional multi-stakeholder models as it ensures community engagement beyond government changes


Europe has ideological alignment with DPI due to strong public service culture and concerns about digital sovereignty, leading to interoperable solutions rather than one unified stack


Explanation

Both speakers unexpectedly converged on preferring community-based governance models over traditional institutional approaches, emphasizing continuity beyond political changes


Topics

Legal and regulatory | Development | Sociocultural


Overall assessment

Summary

Strong consensus emerged around core DPI principles: modularity and interoperability over monolithic systems, local capacity building as essential, funding not being the primary constraint, and the need for governance approaches that ensure democratic control and community engagement


Consensus level

High level of consensus despite different regional contexts, suggesting that DPI principles are universally applicable while allowing for local adaptation. This consensus has significant implications for global DPI development, indicating that a common framework can accommodate diverse regional approaches while maintaining core principles of openness, interoperability, and democratic governance


Differences

Different viewpoints

Governance models for DPI – Multi-stakeholder vs Commons-based vs Sovereignty-focused approaches

Speakers

– Audience
– Renata Avila
– Rahul Matthan
– Henri Verdier

Arguments

Multi-stakeholder governance approach should be adopted for DPI instead of just decentralized governance models


Commons-based governance approach for DPI is preferable to traditional multi-stakeholder models as it ensures community engagement beyond government changes


DPI enables ‘regulators to regulate and innovators to innovate’ on the same platform through techno-legal approaches rather than traditional law-only regulation


Digital sovereignty requires the ability to implement collective democratic decisions through technology infrastructure, not just technical independence


Summary

Speakers disagreed on the optimal governance model for DPI. The audience member advocated for multi-stakeholder approaches, Renata preferred commons-based governance for continuity, Rahul emphasized techno-legal integration allowing both regulation and innovation, while Henri stressed sovereignty and democratic decision-making capability.


Topics

Legal and regulatory | Development | Infrastructure


Funding priorities and economic theory for DPI

Speakers

– Henri Verdier
– Renata Avila
– Desire Kachenje

Arguments

Better economic theory needed for DPI’s role as public service that creates value everywhere, similar to historical postal services


Funding is deprioritized because of awareness of money wasted on tech monopolies that don’t deliver value or pay taxes locally


DPGs provide cost-effective alternatives but require local capacity building and sustainable funding models beyond donor-funded projects


Summary

Henri emphasized the need for better economic theory and proper funding allocation for DPI as public service, Renata argued funding isn’t the main constraint due to waste on tech monopolies, while Desire highlighted ongoing funding challenges in Africa where donor dependency remains problematic.


Topics

Economic | Development | Infrastructure


Unexpected differences

Role of funding in DPI development priorities

Speakers

– Henri Verdier
– Renata Avila
– Desire Kachenje

Arguments

Better economic theory needed for DPI’s role as public service that creates value everywhere, similar to historical postal services


Funding is deprioritized because of awareness of money wasted on tech monopolies that don’t deliver value or pay taxes locally


DPGs provide cost-effective alternatives but require local capacity building and sustainable funding models beyond donor-funded projects


Explanation

Unexpectedly, speakers from different regions had contrasting views on funding importance. While poll results consistently showed funding as low priority, Henri (Europe) argued for better economic allocation theory, Renata (Latin America) dismissed funding concerns due to tech monopoly waste, and Desire (Africa) emphasized ongoing funding challenges. This regional divide on funding perspectives was surprising given the supposed consensus.


Topics

Economic | Development | Infrastructure


Overall assessment

Summary

The main areas of disagreement centered on governance models for DPI (multi-stakeholder vs commons-based vs sovereignty-focused) and the role of funding in DPI development, with unexpected regional differences on economic priorities.


Disagreement level

Low to moderate disagreement level. While speakers had different approaches and emphases, they shared fundamental agreement on core DPI principles (openness, interoperability, modularity) and the importance of local capacity building. The disagreements were more about implementation methods and governance structures rather than fundamental goals, suggesting productive debate rather than irreconcilable differences. This level of disagreement is constructive for the DPI field as it allows for diverse regional approaches while maintaining common principles.


Partial agreements


Takeaways

Key takeaways

Regional approaches to DPI development vary significantly but share common principles of openness, interoperability, and modularity


Digital Public Goods (DPGs) provide cost-effective alternatives to proprietary solutions but require substantial local capacity building and sustainable funding models


Cross-border interoperability is emerging as a critical success factor, with examples like Brazil’s PIX system and Africa’s regional payment initiatives demonstrating practical implementation


Digital sovereignty and democratic governance can be maintained through commons-based governance approaches rather than traditional multi-stakeholder models


Local talent development and open source first principles are the most effective strategies for ensuring countries build technologies based on their own priorities


DPI development should not wait for universal internet connectivity – offline and hybrid solutions can bridge the digital divide


Climate change represents a significant opportunity for DPI application through connecting data silos and enabling more cost-effective solutions


The distinction between e-government and DPI is less important than achieving open, modular, and interoperable systems regardless of terminology


Resolutions and action items

Digital Public Goods Alliance to continue collecting responses for the open source policies and practices survey from 24 alliance members


Rapporteur Max to synthesize key takeaways from the session and upload to the IGF session page


Continue engagement on developing actionable recommendations for strengthening local capacity and agency


Kazakhstan formally announced as the 26th country participating in the 50-in-5 campaign


Unresolved issues

How to address the 2.6 billion people without internet access while continuing DPI development


Sustainable funding models for DPGs beyond donor-funded projects


Harmonization of fragmented data governance frameworks across countries, particularly for cross-border implementations


Balancing speed of implementation with multi-stakeholder inclusion to avoid exclusion of populations


Data privacy and security frameworks for specific populations like children in digital ID systems


Long-term sustainability and maintenance of DPI systems after initial implementation


Integration of legacy systems with new DPI approaches in a cost-effective manner


Suggested compromises

Use modular, interoperable approaches rather than insisting on specific technological solutions or naming conventions


Implement ‘phygital’ (physical-digital) hybrid solutions to accommodate areas without reliable internet connectivity


Adopt commons-based governance that balances state sovereignty requirements with multi-stakeholder participation


Focus on common principles while allowing countries to develop bespoke governance frameworks appropriate for their context


Build offline-capable DPI solutions that can sync when connectivity is available


Combine government-driven initiatives with ecosystem-enabled implementation involving private sector and civil society


Start DPI implementation wherever countries have capacity rather than following a prescribed sequence


Thought provoking comments

I almost feel I must apologize for India stack, because we started this idea of a stack, which leaves the impression that you must necessarily layer first identity, then payments, and then data sharing. And really, it’s not mandatory that you have to do it in that way… the real idea of a stack is that we are creating modular elements, DPGs that can be layered on top of each other in whichever order you want for whichever solution you want.

Speaker

Rahul Matthan


Reason

This comment is insightful because it challenges a common misconception about DPI implementation – that there’s a prescribed sequence that must be followed. It reframes the ‘stack’ concept from a rigid hierarchy to a flexible, modular approach, which is crucial for countries with different starting points and priorities.


Impact

This comment set the tone for the entire discussion by establishing that DPI approaches should be flexible and context-specific rather than one-size-fits-all. It influenced subsequent speakers to emphasize regional variations and the importance of tailoring solutions to local needs rather than copying India’s exact approach.


The traditional lawyer in me says, you’ve got to write laws, and you’ve got to build policies… But the power of digital, and the power of digital where everything is digital, is that you can actually build some of that governance into the design of the architecture… we’re building an infrastructure on which regulators can regulate and innovators can innovate.

Speaker

Rahul Matthan


Reason

This comment introduces a paradigm shift from traditional regulatory approaches to ‘code as law’ – embedding governance directly into digital infrastructure design. It presents a novel solution to the tension between innovation and regulation.


Impact

This concept of techno-legal approaches became a recurring theme, with Henri Verdier later building on it by discussing ‘rules as code’ and the need for technical implementation of regulatory decisions. It shifted the conversation from viewing regulation as a constraint to seeing it as an integrated part of infrastructure design.


Innovation needs to meet serenity… When we’re building DPI, especially specifically when we’re building DPI with DPGs, how do we ensure that countries have the local ecosystem to not only own, but run, maintain and develop new use cases?

Speaker

Desire Kachenje


Reason

This comment highlights a critical gap often overlooked in DPI discussions – the sustainability and local ownership beyond initial implementation. It introduces the concept that technical deployment is insufficient without local capacity for long-term stewardship.


Impact

This comment redirected the discussion toward capacity building and local agency, which became central themes throughout the session. It influenced the Mentimeter poll results where ‘local talent development and training’ emerged as a top priority, and shaped subsequent discussions about the importance of community engagement and local ownership.


There is probably an ideological alignment because we have a strong culture of public services… we need this layer of DPI to implement some political decision and collective decision on the big, the open internet… Can we still empower the people through infrastructure and good governments to respect dignity and innovation and everything, or do we have to let our lives within big corporations and their infrastructure?

Speaker

Henri Verdier


Reason

This comment reframes the entire DPI discussion as fundamentally about democratic sovereignty versus corporate control. It elevates the conversation from technical implementation to existential questions about the future of democratic governance in the digital age.


Impact

This comment created a philosophical anchor for the discussion, with other speakers referencing the tension between public and private control. It influenced Renata Avila’s response about communities being ‘squeezed without options’ and shaped the conversation about sovereignty as a non-negotiable element in DPI development.


What I will say that is different is that in Latin America we are very good at the digital public goods and but we haven’t jumped yet to the big digital public infrastructure plans… the highlight of the continent is the communities around open source… it’s what you do after work it was what you do on weekends you edit a Wikipedia article you code and contribute to a collaborative platform.

Speaker

Renata Avila


Reason

This comment introduces a crucial distinction between having strong DPG communities and implementing comprehensive DPI strategies. It highlights the role of grassroots community engagement as a foundation for sustainable digital infrastructure, contrasting with top-down government-led approaches.


Impact

This observation about community-driven development influenced the later discussion about governance models, leading to Renata’s proposal for ‘commons-based governance approach’ rather than traditional multi-stakeholder models. It also reinforced the importance of local capacity and community engagement that emerged in the poll results.


I strongly urge all of us to rethink the way we go about [climate change] because DPI has shown a way for very, very big challenges like financial inclusion. I see no reason why that can’t be applied to something as important as climate change.

Speaker

Rahul Matthan


Reason

This comment dramatically expands the scope of DPI applications beyond traditional use cases to global challenges like climate change. It suggests DPI could revolutionize how we approach complex, multi-stakeholder problems by connecting previously siloed data and systems.


Impact

While this comment came later in the discussion, it opened up new possibilities for thinking about DPI applications and influenced Renata’s response about Latin America’s geospatial infrastructure cooperation. It demonstrated how DPI thinking can be applied to cross-border, global challenges beyond traditional government services.


Overall assessment

These key comments fundamentally shaped the discussion by establishing several critical frameworks: the flexibility and modularity of DPI approaches (challenging rigid implementation models), the integration of governance into technical architecture (moving beyond traditional regulation), the centrality of local capacity and community engagement (ensuring sustainability), and the broader democratic implications of infrastructure choices (framing DPI as essential for sovereignty). The comments created a progression from technical implementation details to philosophical questions about democratic governance in the digital age. They also established common ground among diverse regional approaches while respecting local contexts and priorities. The discussion evolved from describing what different regions are doing to exploring why these approaches matter for democracy, sovereignty, and global cooperation. The interactive polls reinforced these themes, with participants prioritizing local capacity development and open-source approaches over funding, validating the speakers’ emphasis on community engagement and sustainable development over purely technical or financial solutions.


Follow-up questions

How do we ensure that countries have the local ecosystem to not only own, but run, maintain and develop new use cases when building DPI with DPGs?

Speaker

Desire Kachenje


Explanation

This addresses the critical challenge of local capacity and agency in DPI implementation, ensuring sustainability beyond initial deployment


How do we bring policy makers and enable countries to harmonize digital data governance frameworks, especially when bringing more than one country together for data exchange platforms?

Speaker

Desire Kachenje


Explanation

This highlights the fragmented nature of data governance frameworks across countries and the need for policy harmonization for cross-border DPI initiatives


How can we use digital public infrastructure to solve the climate crisis as the next step of DPI innovation?

Speaker

Rahul Matthan


Explanation

This represents a new frontier for DPI application, suggesting that DPI could unlock solutions to climate challenges by connecting data silos and enabling new approaches beyond traditional consensus-building


How can we implement rules as code and techno-legal approaches in DPI governance?

Speaker

Henri Verdier


Explanation

This addresses the need for embedding regulatory compliance directly into digital infrastructure design rather than relying solely on traditional legal frameworks


How can we develop a better economic theory of the economic role of DPI to understand its financing and value creation?

Speaker

Henri Verdier


Explanation

This seeks to establish theoretical foundations for understanding how DPI creates value across entire ecosystems and how this should inform funding models


How do we ensure sustainability of DPGs long-term, both internally for the DPGs themselves and for countries implementing them past donor-funded project periods?

Speaker

Desire Kachenje


Explanation

This addresses the critical challenge of maintaining DPI systems and DPG platforms beyond initial implementation phases and donor funding cycles


How do we address privacy and security issues for specific populations like children in digital ID and birth registration systems?

Speaker

Desire Kachenje


Explanation

This highlights gaps in privacy frameworks for vulnerable populations in DPI implementations


How can we overcome the digital divide for 2.6 billion people not using the internet while still advancing DPI development?

Speaker

Online participant (via chat)


Explanation

This addresses the fundamental challenge of inclusion in DPI when a significant portion of the global population lacks internet access


What would be the implications of adopting truly multi-stakeholder governance approaches for DPI instead of just decentralized governance models?

Speaker

Israel Rosas (Internet Society)


Explanation

This explores different governance models for DPI and their potential impacts on implementation and sovereignty


How can we make different DPI systems interoperable across countries (e.g., using UPI in Brazil or PIX in other countries)?

Speaker

Rahul Matthan


Explanation

This addresses the practical challenge of cross-border interoperability between different national DPI systems


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

WS #187 Bridging Internet AI Governance From Theory to Practice


Session at a glance

Summary

This joint session of the Dynamic Coalition on Core Internet Values and the Dynamic Coalition on Network Neutrality explored how the internet’s foundational principles can guide AI governance as artificial intelligence becomes increasingly central to digital interactions. The discussion centered on two key questions: how internet principles of openness and decentralization can inform transparent AI governance, and how network neutrality concepts like generativity and fair competition can apply to AI infrastructure and content creation.


Vint Cerf emphasized that while the internet and AI are “different beasts,” AI systems should prioritize safety, transparency, and provenance of training data. He highlighted emerging standards like agent-to-agent protocols that could enable interoperability between AI systems. Sandrine Elmi Hersi from France’s ARCEP outlined three areas for applying internet values to AI: accelerating transparency in AI models, preserving distributed intelligence rather than centralized control, and extending non-discrimination principles to AI infrastructure and content curation.


Renata Mielli from Brazil’s CGI noted that while some internet governance principles like freedom and interoperability can transfer to AI, others like net neutrality may not directly apply since AI systems are inherently non-neutral. Hadia Elminiawi discussed Africa’s AI strategy and raised practical questions about implementing transparency requirements, suggesting that requiring open-source safety guardrails might be more feasible than full model transparency.


Several participants emphasized the challenge of market concentration in AI, contrasting it with the internet’s originally decentralized architecture. The discussion revealed tensions between promoting innovation and ensuring accountability, with speakers noting the need for risk-based approaches, liability frameworks, and multi-stakeholder governance. The session concluded with calls for transforming these principles into technical standards and regulatory frameworks while maintaining the collaborative spirit that made internet governance successful.


Keypoints

Major discussion points


– **Fundamental architectural differences between Internet and AI**: The discussion emphasized that while the Internet was built on open, decentralized, transparent, and interoperable principles, AI systems (particularly large language models) operate through centralized, opaque, and proprietary architectures controlled by a handful of major companies, creating tension between these two paradigms.


– **Applying Internet governance principles to AI governance**: Speakers explored how core Internet values like openness, transparency, non-discrimination, and net neutrality could be translated into AI governance frameworks, while acknowledging that some principles (like technical neutrality) may not directly apply since AI systems are inherently non-neutral.


– **Market concentration and gatekeeper concerns**: Multiple speakers highlighted the risk of AI systems becoming new gatekeepers that could limit user choice and content diversity, drawing parallels to earlier Internet governance challenges around platform dominance and the need for regulatory oversight to preserve competition and openness.


– **Global South representation and digital equity**: The discussion addressed how AI governance frameworks must include diverse global perspectives, particularly from Africa, Latin America, and Asia, to avoid replicating the digital divides and power imbalances that have characterized Internet development.


– **Practical implementation challenges**: Speakers debated the realistic prospects for international cooperation on AI governance, questioning whether major AI companies and governments have sufficient incentives to participate in multilateral governance frameworks, and emphasizing the need for risk-based approaches, liability frameworks, and technical standards.


## Overall Purpose:


The discussion aimed to bridge Internet governance principles with emerging AI governance challenges, exploring how decades of experience regulating Internet infrastructure and services could inform approaches to governing artificial intelligence systems. The session sought to move beyond theoretical frameworks toward practical implementation strategies for ensuring AI development remains aligned with values of openness, transparency, and user empowerment.


## Overall Tone:


The discussion maintained a thoughtful but increasingly cautious tone throughout. It began optimistically, with speakers drawing encouraging parallels between Internet and AI governance challenges. However, the tone became more realistic and somewhat pessimistic as participants acknowledged significant obstacles, including corporate resistance to regulation, geopolitical tensions, market concentration, and the fundamental differences between Internet and AI architectures. Despite these challenges, the session concluded on a pragmatic note, with calls for continued collaboration and specific next steps for the working groups involved.


Speakers

**Speakers from the provided list:**


– **Olivier Crepin-Leblond** – Co-chair of the session, moderator for remote participation


– **Pari Esfandiari** – Co-chair for the Dynamic Coalition on Core Internet Values


– **Luca Belli** – Co-chair for the Dynamic Coalition on Network Neutrality


– **Vint Cerf** – Joining remotely from the US, works with a company that has invested heavily in AI and AI-based services, co-creator of internet networking protocols


– **Sandrine ELMI HERSI** – Representative from ARCEP (French regulatory authority for electronic communications), involved in shaping digital strategies within government


– **Renata Mielli** – Coordinator of CGI.br (Brazilian Internet Steering Committee), leading debates on net neutrality, internet openness and AI issues in Brazil


– **Hadia Elminiawi** – Representative from the African continent, discussing AI governance from African perspective


– **William Drake (Bill Drake)** – Commenter/additional speaker


– **Roxana Radu** – Commenter/additional speaker (participating remotely)


– **Shuyan Wu** – Representative from China Mobile (world’s largest telecom operator), commenter/additional speaker


– **Yik Chan Ching** – Representative from PNAI (Policy Network on Artificial Intelligence), an intersessional process of the IGF


– **Alejandro Pisanty** – Online participant, previously involved in core internet values dynamic coalition discussions


– **Audience** – Various audience members who asked questions (including Dominique Hazaël-Massieux from W3C, and Andrew Campling – internet standards and governance enthusiast)


**Additional speakers:**


– **Dominique Hazaël-Massieux** – Works for W3C (World Wide Web Consortium), oversees work around AI and its impact on the web


– **Andrew Campling** – Internet standards and internet governance enthusiast


Full session report

# Bridging Internet Core Values and AI Governance: A Comprehensive Report


## Executive Summary


This joint session of the Dynamic Coalition on Core Internet Values and the Dynamic Coalition on Network Neutrality examined how established internet governance principles might inform emerging AI governance frameworks. Moderated by Olivier Crepin-Leblond and co-chaired by Pari Esfandiari and Luca Belli, the discussion brought together international experts to explore the intersection between internet governance and AI systems.


The session revealed both opportunities and challenges in applying internet principles to AI governance. While speakers agreed on the importance of values like transparency and safety, they identified fundamental differences between the internet’s distributed architecture and AI’s centralized model. The discussion produced practical recommendations including risk-based governance approaches, technical standards development, and targeted interventions at AI-internet intersection points.


## Opening Framework and Central Questions


Pari Esfandiari opened by establishing the session’s premise: as generative AI becomes a primary gateway to content, internet core values must guide AI governance. She posed two key questions: how can internet principles of openness and decentralization inform transparent AI governance, and how can network neutrality concepts apply to AI infrastructure and content creation.


Luca Belli immediately introduced a fundamental tension, observing that “the Internet and AI are two different beasts.” He noted that while celebrating 51 years since foundational internet work, the internet was built on open, decentralized, transparent, and interoperable architecture, whereas AI operates through highly centralized architecture controlled by major companies. This architectural difference became a recurring theme throughout the session.


## Expert Perspectives


### Vint Cerf: Technical Standards and Safety


Vint Cerf, joining remotely, emphasized that AI systems should prioritize safety, transparency, and provenance of training data. He highlighted ongoing work on agent-to-agent (A2A) protocols and model context protocols (MCP) to ensure interoperability between AI systems, drawing parallels to internet protocols.


Cerf challenged purely centralized views of AI, noting that “every time someone interacts with one of those [large language models], they are specializing it to their interests and their needs.” He advocated for risk-based approaches focusing on user risk and provider liability, with higher safety standards for high-risk applications like medical diagnosis and financial advice.
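The interoperability Cerf points to here rests on agents exchanging well-formed, context-bound messages. As an illustration only: MCP is built on JSON-RPC 2.0, so a minimal sketch of a context-checked exchange at the wire level might look like the following (the `get_weather` tool and its arguments are invented for the example, not part of any real specification):

```python
import json

# Minimal sketch of an MCP-style JSON-RPC 2.0 exchange between two agents.
# Only the JSON-RPC envelope reflects the protocol family MCP builds on;
# the tool name ("get_weather") and its arguments are hypothetical.

def make_request(request_id: int, method: str, params: dict) -> str:
    """Serialize a JSON-RPC 2.0 request as one agent would send it."""
    return json.dumps({"jsonrpc": "2.0", "id": request_id,
                       "method": method, "params": params})

def parse_response(raw: str, expected_id: int) -> dict:
    """Check the envelope so a mismatched answer fails loudly instead of
    propagating down a chain of agents (Cerf's 'telephone' problem)."""
    msg = json.loads(raw)
    if msg.get("jsonrpc") != "2.0" or msg.get("id") != expected_id:
        raise ValueError("response does not match the request context")
    if "error" in msg:
        raise RuntimeError(msg["error"].get("message", "unknown error"))
    return msg["result"]

request = make_request(1, "tools/call",
                       {"name": "get_weather", "arguments": {"city": "Oslo"}})
# A well-formed reply echoes the same id, binding the answer to the question.
reply = '{"jsonrpc": "2.0", "id": 1, "result": {"temperature_c": 14}}'
result = parse_response(reply, expected_id=1)
print(result)
```

The `id` check is the smallest possible analogue of the shared context Cerf describes: an agent refuses output that cannot be tied back to its own request.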


### Sandrine Elmi Hersi: Regulatory Framework


Representing ARCEP (French regulatory authority), Elmi Hersi outlined a three-pronged approach: accelerating transparency in AI models to make “black boxes” more auditable; preserving distributed intelligence by ensuring plurality of access to AI development inputs; and extending non-discrimination principles from network neutrality to AI infrastructure and content curation.


She raised particular concerns about content diversity, questioning how to ensure diversity when AI chatbots provide single answers instead of hundreds of web pages traditionally offered by search engines.
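The single-answer concern can be made concrete with a toy retrieval-augmented generation (RAG) loop, the architecture the session transcript mentions for chatbots that draw directly from the web. The corpus, URLs, and word-overlap scoring below are entirely hypothetical stand-ins; the point is only that returning the retrieved sources alongside the synthesized answer is what keeps plurality visible to the user:

```python
# Toy RAG loop illustrating the trade-off Elmi Hersi raises: the user sees
# one synthesized answer, so surfacing the retrieved sources alongside it is
# what preserves plurality and provenance. Corpus and scoring are invented.
from collections import Counter

CORPUS = {
    "site-a.example": "net neutrality requires non-discrimination by carriers",
    "site-b.example": "non-discrimination rules can extend to AI gatekeepers",
    "site-c.example": "recipes for sourdough bread and pastries",
}

def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank documents by naive word overlap with the query."""
    q = Counter(query.lower().split())
    scores = {
        url: sum((q & Counter(text.lower().split())).values())
        for url, text in CORPUS.items()
    }
    ranked = sorted(scores, key=scores.get, reverse=True)
    return [url for url in ranked[:k] if scores[url] > 0]

def answer(query: str) -> dict:
    """Return a single synthesized answer plus the sources behind it."""
    sources = retrieve(query)
    synthesized = " / ".join(CORPUS[url] for url in sources)
    return {"answer": synthesized, "sources": sources}

out = answer("non-discrimination rules for AI")
print(out["sources"])  # the plurality a bare chatbot answer would hide
```

A real response engine replaces the word-overlap scorer with embeddings and the string join with a language model, but the design question is the same: whether the `sources` list reaches the end user or is discarded behind one answer.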


### Renata Mielli: Brazilian Perspective


Coordinator of CGI.br (Brazilian Internet Steering Committee), Mielli noted that while some internet governance principles like freedom and interoperability can transfer to AI, others like net neutrality may not directly apply since AI systems are inherently non-neutral, unlike internet infrastructure.


She emphasized transforming principles into technical standards while distinguishing between governance and regulation, and highlighted the need to reduce asymmetries and empower Global South voices in AI governance discussions.


### Hadia Elminiawi: African and Practical Perspective


Elminiawi provided insights from the African continent, noting that African countries’ AI capabilities vary significantly due to infrastructure, electricity, connectivity, and resource differences. She challenged idealistic transparency approaches, asking whether it is “realistic or even desirable to expect that all AI models be made fully open source.”


She suggested requiring open-source safety guardrails rather than full model transparency, proposing a more pragmatic approach balancing openness with security and investment concerns.


## Additional Interventions and Perspectives


### William Drake: Critical Analysis


Drake provided a critical intervention emphasizing the need to define precisely what aspects of AI require governance rather than applying generic principles. He questioned whether there is genuine functional demand for international AI governance, noting that “we simply can’t just assume, because the technology is there and the issues are there, that there’s a functional demand.”


He suggested developing a detailed mapping matrix of which internet properties apply to specific AI contexts and applications.


### Andrew Campling: Social Media Lessons


Campling suggested looking at social media governance lessons rather than internet governance, emphasizing duty of care and precautionary principles. He noted the importance of learning from past failures in social media regulation.


### Dominique Hazaël-Massieux: W3C Standards Work


Representing W3C, Hazaël-Massieux highlighted ongoing work on AI and web standards, focusing specifically on the intersection of AI and internet technologies rather than broader AI governance.


### Yik Chan Ching: Policy Network Perspective


From the Policy Network on Artificial Intelligence (PNAI), Ching mentioned ongoing research on liability, interoperability, and environmental protection in AI systems, noting significant progress in AI standards development across regions.


### Shuyan Wu: Digital Equity Focus


From China Mobile, Wu emphasized ensuring equal access, protecting user rights, and bridging digital divides in the AI era.


### Alejandro Pisanty: Commercial Reality


Participating online, Pisanty questioned fundamental incentive structures, asking “Why would OpenAI, Google, Meta, et cetera… why would they come together and agree to limit themselves in some way?” He advocated for applying existing rules for automated systems rather than creating entirely new frameworks.


## Key Themes and Challenges


### Architectural Differences


The fundamental difference between internet and AI architectures emerged as a central challenge. The internet’s distributed design contrasts sharply with AI’s concentrated ownership and control, creating new governance challenges.


### Market Concentration Concerns


Multiple speakers highlighted concerns about AI market concentration and the emergence of new gatekeepers that could limit user choice and content diversity, drawing parallels to earlier internet governance challenges.


### Transparency vs. Practicality


A significant tension emerged between calls for maximum transparency and practical constraints including investment protection and security concerns. Speakers debated appropriate levels and mechanisms for AI transparency.


### Global South Inclusion


Several speakers emphasized including Global South perspectives and addressing existing digital divides to prevent their reproduction in AI governance frameworks.


## Areas of Convergence


Despite disagreements, several areas of consensus emerged:


– **Risk-based approaches**: Multiple speakers supported prioritizing governance based on risk levels and application contexts


– **Technical standards importance**: Strong agreement on the need for AI interoperability standards


– **Safety and transparency needs**: General agreement that AI systems require more transparency than currently provided


– **Stakeholder inclusion**: Consensus on the importance of diverse participation in governance discussions


## Implementation Recommendations


The session produced several concrete recommendations:


### Continued Collaboration


Participants agreed to continue discussions through Dynamic Coalition mailing lists to address unresolved issues.


### Detailed Mapping Exercise


Drake’s suggestion for developing a mapping matrix of internet properties applicable to specific AI contexts was endorsed as a practical next step.


### Regulatory Development


ARCEP committed to completing its technical report on applying internet core values to AI governance.


### Focused Interventions


Rather than generic AI governance, speakers recommended focusing on AI-internet intersection points where governance needs and stakeholder incentives may be clearer.


## Unresolved Questions


The discussion concluded with acknowledgment of fundamental questions requiring further work:


– How to balance innovation incentives with transparency and accountability requirements


– Whether binding international AI agreements are feasible given current political realities


– How to address liability and responsibility in multi-agent AI systems


– What constitutes genuine functional demand for AI governance versus assumed need


## Conclusion


This session revealed both promise and challenges in applying internet governance principles to AI systems. While there was agreement on core values like safety and transparency, fundamental tensions emerged between internet and AI architectures, transparency ideals and practical constraints, and governance aspirations and commercial realities.


The discussion produced pragmatic recommendations focusing on risk-based approaches, technical standards development, and targeted interventions. However, unresolved tensions around transparency requirements, stakeholder participation, and international cooperation indicate significant work remains to develop effective AI governance frameworks that preserve internet values while addressing AI’s unique characteristics.


The session demonstrated the value of diverse international perspectives while highlighting the need for continued dialogue and practical experimentation to bridge the gap between principles and implementation in AI governance.


Session transcript

Olivier Crepin-Leblond: Right, welcome everybody to this session, this joint session of the Dynamic Coalition on Core Internet Values and the Dynamic Coalition on Network Neutrality. I’m Olivier Crepin-Leblond, and co-chair of this session is going to be Luca Belli for the Dynamic Coalition on Network Neutrality and Pari Esfandiari for the Core Internet Values. It’s great to see so many of you here. As Luca said, if anybody wants to step up over to the table here, they’re very welcome to do so. We are going to have a session that’s going to be quite interactive. So we’ll have the speakers speak and so on, and then we’ll see if we can have a good discussion in the room about the topic. I’m just going to do a quick introduction of the speakers that we have. And so we’ll start with four speakers, each providing their angle on the topic. We’ll have Vint Cerf, who’s joining us remotely. Unfortunately, he couldn’t make it in person at this IGF. So he’s over in the US and he will actually let us know at some point when he will be online, because he is also as often doing more than one session at the same time. Actually, I am. I am online. He’s already there. Goodness gracious. OK, sorry, Vint. I have two eyes, but they both look in the same direction. I don’t know why. I should have also checked the screen. So Vint Cerf, Hadia Elminiawi, and then we’ll have Renata Mielli also here and Sandrine ELMI HERSI, who’s sitting next to me. After that, we’ll have what we call additional speakers. They’ll be commenting on what they’ve heard from the original, from the first set of speakers. There’s three commenters, William Drake, Bill Drake, Roxana Radu and Shuyan Wu, who’s just arrived from China. So at the very last minute he managed to make it here. So welcome to all of you. And then after that, we’ll open it to a wider discussion. But I’m kind of wasting time. We’ve only got 75 minutes, so I’m going to hand the floor straight over to Luca and to Pari for the next stage. Thank you.


Pari Esfandiari: Thank you very much Olivier, and welcome everybody. It’s great to be here with all of you. So we convene this session, bridging internet and AI governance from theory to practice, not just because things are changing fast but because the way we think about digital governance is being fundamentally reshaped. As technologies converge and accelerate, our governance systems haven’t kept up, and at the center of this shift is artificial intelligence. Let’s start with theory: the internet’s core values, global, interoperable, open, decentralized, end-to-end, robust and reliable, and freedom from harm. These were not just technical features, but deliberate design choices that made the internet a global common for innovation, diversity and human agency. Now comes generative AI. It doesn’t just add another layer to the internet; it introduces a fundamentally different architecture and logic. We are moving from open protocols to centralized models, gated, opaque and controlled by a handful of actors. AI shifts the internet’s pluralism towards convergence, replacing inquiry with predictive narration and reducing user agency. This isn’t just a technical shift. It’s about who gets to define knowledge, shape discourse and influence decisions. It’s a profound governance challenge and a societal choice about the kind of digital future we want. If we are serious about preserving user agency, democratic oversight and an open, informative ecosystem, the core internet values can serve as signposts to guide us, but they need active support, updated policies and cross-sector commitment. This is where the practice begins. The good news is we are not starting from scratch: from UNESCO’s AI ethics framework to the EU AI Act, the US AI Bill of Rights and efforts by Mozilla and others, we are seeing real momentum to root AI governance in shared fundamental values. So yes, there is a real divergence, but also real opportunities to shape what comes next. And that’s our focus today. 
With that, I will hand it over to my co-moderator, Luca Belli. Thank you.


Luca Belli: Thank you very much, Pari and Olivier. And also, let me hold this. Is this working? Yes. Yes. Okay. Thank you. Are you sure? Because I’m not hearing myself. Is this working? I am here. Can you hear us? Okay. I’m sorry. It’s my headphone. It’s not working. It’s not useful when I have to hear myself anyway. All right. So thank you very much, Olivier and Pari, for having organized this and for having been the driving force of this session that actually builds upon what we have already done last year in our first joint venture, which was already quite successful. And I always say that it’s good to build upon the sessions, building blocks and reports that we have already elaborated, so that we move forward, right? Something that already emerged as a sort of consensus last year in Riyadh are two main points. First is that we have already discussed for pretty much 20 years, at least here at IGF, internet governance and internet regulation. And so we can start to distill some of those teachings and lessons into what we could apply to regulate the evolution of AI and AI governance. And second, to quote the expression Vint used last year, the Internet and AI are two different beasts. So we are speaking about two digital phenomena, but they are quite different. And the Internet, as Pari was reminding us very eloquently, has been built on an open, decentralized, transparent, interoperable architecture that made the success of the Internet over the past 50 years, at least since Vint penned it in 1974. And yeah, but the question here is how we reconcile this with a highly centralized AI architecture. 
And I think that here there is a very important point we have been working on in the net neutrality and Internet openness debate over the past years, that is the concept of Internet generativity that we have enshrined in the reports we have elaborated here over the past years: the capacity of the Internet to evolve thanks to the unfiltered contributions of the users is the consequence of the fundamental core Internet values. Openness and transparency create a level playing field, a capacity to innovate, to share and use applications, services and content, and to make the Internet evolve according to how the users want it to. So users, not only passive users but prosumers, create the Internet. Now, this is in fundamental tension with an AI that is frequently proprietary, non-interoperable and very opaque, both in the data sets used for training, which are usually the result of massive scraping of both personal data and copyrighted content in very peculiar ways that might be considered illegal in most countries with data protection or copyright legislation, and in the training and its output, which is very much opaque for the user. And very few companies can do this and supply this. So there is an enormous concentration phenomenon ongoing, which is quite the opposite of what the original internet philosophy was about. Now, to discuss this point, we have a series of fantastic speakers today. As I was mentioning before, we are celebrating 51 years of the paper by Vint and Bob Kahn on internetworking, “A Protocol for Packet Network Intercommunication”, if I’m not mistaken. So I think the first person who should go ahead is Vint. So Pari, please, the floor is yours to present Vint.


Pari Esfandiari: Thank you very much. We have two overarching questions, and we would like our speakers to focus on those two overarching questions. I will read them for you. How can the internet’s foundational principles of openness and decentralization guide transparent and accountable AI governance, particularly as generative AI becomes a main gateway to content? And the second question, how can fundamental network neutrality principles such as generativity and competition on a level playing field apply to AI infrastructure, AI models, and content creation? So Vint, drawing on your unique experiences in both founding the architecture of the internet and your work with the private sector, we are curious to hear your comments on these questions. Over to you.


Vint Cerf: Well, thank you so much for this opportunity. I want to remind everyone that I am not an expert on artificial intelligence. I barely manage my own intelligence, let alone artificial. But I work with a company that has invested very heavily in AI and in AI-based services. So I can reflect a little bit of that in trying to respond to these very important questions. The first thing that I would observe is that the Internet was intended to be accessible to everyone. And I think the AI efforts are reflective of that as well. The large language models, well, let me distinguish between large language models and machine learning tools for just a moment. All of you are well aware that AI has been an object of study since the 1960s. It’s gone through several iterations of phases, the most recent of which is machine learning, reinforcement learning, and then large language models. The reinforcement learning mechanisms have given us things like programs that can beat the best players of Go, programs that can tell you how proteins fold up, and that tells us something about their functionality. And more recently, there’s something at Google called Alpha Evolve, which is an artificial intelligence system that will invent other software to solve problems for you. The large language models that we interact with embody huge amounts of content, but they are specialized when they interact with the users. You use the term prompting to elicit output from these large language models. And the point I want to make here is that every time someone interacts with one of those, they are specializing it to their interests and their needs. So in a sense, we have a very distributed ability to adapt a particular large language model to a particular problem or to respond to a particular question. And that’s important, the fact that we are able to personalize. Our interactions with these sources of information is a very important element of useful access. 
The question about interoperability of the various machine learning systems is partly answered by the agent model idea. That is to say, the large language models are becoming mechanisms by which we can elicit not only responses, but also actions to be taken. So the so-called agentic generative AI is upon us. And consonant with that are two other standards that are being developed. One is called A2A, or agent-to-agent interaction, and the second is called MCP, which is a model context protocol to give these artificial intelligence agents a concept of the world in which they’re actually operating. The reason these are so important, and they create interoperability among various agentic systems, is that it’s very important for precision. It’s important that the agents, when they interact with us, and when they interact with each other, have a well-defined context in which that interaction takes place. And we need clarity, and we need confidence that the semantics are matched between the two agents. If anyone has ever played that parlor game called telephone, where you whisper something in someone’s ear, and then they whisper in the next person’s ear, and you go down the line, whatever comes out on the other end is almost never what started out at the beginning. We don’t want chains of agents to get confused, and so the A2A and MCP are mechanisms to try to make that work a lot better. So I think this is a very important notion for us to ingest into the work of the core internet values, except they will have to become core AI values, which is clarity in interaction among the various agents, of course, among other things. Last point I would make is that as you interact with large language models, the so-called prompting exchanges, one of the biggest questions that we always have is how accurate is the output that we get from these things? We all know about hallucination and the generation of counterfactual output coming from agents.
It’s very important that provenance of the information that is used by the agents or by the large language models and references be available for our own critical thinking and critical evaluation of what we get back. And so once again, that’s a kind of core internet value. How do I evaluate or how can I evaluate the output of these systems to satisfy myself that the content and the response is accurate? So those are just a few ideas that I think should inform the work of these dynamic coalitions as we project ourselves into this online AI environment. But I’ll stop there because I’m sure other people have many more important things to say in response to these questions.


Pari Esfandiari: Thank you very much, Vint, for that very informative discussion. And with that, I would go to Sandrine. Sandrine, based on your experience shaping digital strategies within government, how would you see this? Thank you.


Sandrine ELMI HERSI: Thank you. And let me first start to say that it’s a real pleasure to… Thank you all for joining this session today and to discuss this important topic with partners from the Net Neutrality and Core Internet Values Coalition. And before we ask how to apply openness and transparency to AI governance, I would like to insist on the why, and why this application has become essential. So as it was already covered by Vint, LLMs, notably Generative AI tools, are becoming a new default point of entry to online content and services for users. Since our conversation at the last IGF at Riyadh, we’ve been seeing this trend accelerating through the development of the use of individual chatbots, but also the establishment of response engines integrated into mainstream search tools. Generative AI is also increasingly embedded directly in end-users’ devices. And we are also seeing a shift from early-generation LLMs to new RAG, so Retrieval Augmented Generation, systems that are now included in AI tools and that can directly draw from the web. And looking ahead, agentic models could also centralize a wide range of users’ actions into a single AI interface. So the question is really, will tomorrow’s Internet still be open, decentralized and user-driven if most of our online action is mediated by a handful of AI tools? So now, regarding the how, ARCEP, the French regulatory authority for electronic communications, is currently conducting technical hearings and file testing with a team of data scientists to explore this very question. Although our report is currently in development, we can already identify three main areas for action to apply internet core values to AI governance. The first area is accelerating on AI transparency: understanding generative AI models, what data they use, how they process information, and what limits they have is a prerequisite for trust. 
There is some progress, with more and more players now engaging with researchers and through sectorial initiatives such as standards and codes of conduct, but many models remain black boxes. We need greater openness, especially to the research community, to improve auditability and explainability, but also the efficiency of models. The second area is preserving the notion of intelligence at the edge of networks, which is the original spirit of the internet: intelligence distributed among users and applications, not centralized in platforms or infrastructure. We must notably ensure that users remain able to choose among diverse services and sources. This may require working on the technical and economic conditions that shape AI outputs to guarantee a certain level of neutrality, plurality of views, and openness to a diverse range of content creators and innovators. Last but not least, regarding the principle of non-discrimination, which is also a central part of net neutrality. The net neutrality non-discrimination principle was originally applied to prevent…
And the second question we are also diving into is ensuring that we keep a diversity of content on the internet, knowing that when they use AI chatbots and response engine, end-users only have access to one answer instead of hundreds of web pages. So we must ensure that generative AI is not just simply amplifying already dominant sources, but is open to smaller and independent content creators and innovators. That might mean in the future working on defining sector-wide frameworks or interconnection standards on fair contractual conditions, as it was done for IP interconnection. And to end, the goal is not, of course, to block innovation, but on the contrary, to make sure that innovation and AI are compatible with preserving internet as a common good.


Luca Belli: Thank you very much Sandrine for these excellent thoughts, and I think it’s very good to see how you are illustrating that what has been done in terms of internet openness regulation and net neutrality debates over the past 15 years is precisely trying to enshrine into law the original philosophy of openness, transparency and decentralization of the internet, and to make sure that when what we can call gatekeepers or points of control emerge, they behave correctly, and if necessary a law protects the rights of the users and the regulator oversees the laws to make sure that the obligations are implemented. Now what is very difficult is to understand who the new gatekeepers are and how to implement a law that maybe still does not even exist in these terms. So I would like to now give the floor to Renata Mielli, who is the coordinator of CGI.br at the moment, and CGI has also been leading the debate on net neutrality, internet openness and now also many AI issues in Brazil. So Renata, the floor is yours.


Renata Mielli: Thank you, Luca, and thank you all for inviting me to this session, especially because I believe we are establishing a continuity and deepening the debate we started in Riyadh last year, when we discussed AI from a perspective of sovereignty and the empowerment of the Global South, and how to reduce the existing asymmetries in this field. Now we are talking about how to bridge internet governance and principles to AI principles and governance. To contribute to this session, I chose to look at the work we have done in CGI.br on principles for the Internet, and to reflect on what makes sense and what does not make sense when we are thinking about AI from the perspective of establishing a set of principles for the development, implementation and use of AI technologies, taking into account what Luca just said about the differences, the high economic concentration and the opacity of the systems, and taking into account also what Vint said: these are two different beasts. In this sense, I would like to start by looking at what is not covered by these ten principles when we are talking about AI. The first thing I see, and a lot of people are mentioning it, is transparency and explainability, because these two principles are essential when we talk about AI: it involves a series of procedures that are not present in the same way when we are dealing with the Internet. The Internet is open, the Internet is decentralized, all the protocols are built in a very collaborative way, but this is not the case with AI. So AI governance, deployment and development need to ensure high levels of transparency, especially for the social impact assessment of this type of technology, as well as for the creation of compliance processes that ensure other principles like accountability, fairness and responsibility. We are discussing a series of specific principles for AI that were not necessarily conceived in the context of internet governance.
In terms of CGI.br's Decalogue, I'd like to point out which principles can be, in some way, interoperable with AI principles. In this case I think, of course, of freedom and human rights; democratic and collaborative governance; universality, in terms of access to and the benefits of AI for all; diversity, when talking about language, culture and the necessity of including all kinds of expression; also standardization and interoperability between the various models; and, of course, the need for a legal and regulatory environment for these systems. We can think that the perspective used for internet governance is applicable to AI principles in context. From another perspective, principles like security need to be addressed together with two other principles, safe and trustworthy, and ethical, so they can be answered with a discussion about impacts on rights like privacy and data protection. Finally, an important part of this exercise of evaluating internet governance principles and their possible alignment with AI governance principles is to identify what was conceived for the internet that is not applicable in the AI context. On this aspect, only to mention it because I don't have more time, I point to the principle of net neutrality, because what was proposed there was net neutrality in relation to telecommunications infrastructure, and this is not applicable to AI. And there is no neutrality in the technology itself: AI is not neutral. I think inimputability is also another principle that is not easily transferred from the internet to AI, because here we have to understand responsibility along the AI chain. So these are some thoughts I have to share at the beginning of this panel. Thank you very much.


Luca Belli: Thank you very much, Renata. You also bring into the picture something extremely relevant, I think, for which the IGF, being a UN forum, is an appropriate venue: the fact that we have been debating this for 20 years. There have also been a lot of debates going on in the Global South about this for at least 20 years. But what we see in terms of mainstream debates and policymaking, and even the construction of AI infrastructure, especially cloud infrastructure, is an enormous predominance of what we could call the Global North. So it's very interesting to start bringing the Global South voices into the debate. We've started with Brazil. Now we continue with Ms. Hadia Elminiawi, who is here representing the African continent, which is an enormous responsibility. So please, Hadia, the floor is yours.


Hadia Elminiawi: Thank you. Thank you so much. And I'm happy to be part of this very important discussion. So let me first start by highlighting the similarities between AI and the Internet that make the Internet's core values well suited as a foundation for AI governance. AI can be considered one of those general-purpose technologies impacting economic growth, maybe quicker than any other general-purpose technology that has emerged in the past, such as the steam engine, electrification and computers. AI is driving revolutionary changes in all aspects of life, including healthcare, education, agriculture, finance, services, policies, and governance. By definition, AI isn't just one technology but a constellation of them, including machine learning, natural language processing, and robotics, that all work together. Similarly, the Internet stands as another powerful general-purpose technology that has fundamentally changed the way we live, work, and interact, enabling new ways of communication, education, service provision, and conducting business. The Internet infrastructure is foundational to artificial intelligence, enabling cloud services, the management of on-site data centers, and real-time applications. In addition, many of the services and applications that are being delivered over the Internet infrastructure are using AI to deliver better experiences, services, and products to users. When it comes to Africa, the capabilities of African countries regarding AI vary significantly across the continent due to differences in the availability of resources and infrastructure, including reliable and efficient electricity, broadband connectivity, data infrastructure such as data centers and cloud services, access to quality data sets, AI-related education and skills, research and innovation, and investment. So last year, in July 2024, the African Union Executive Council endorsed the African Union Continental AI Strategy.
The Continental AI Strategy is considered pivotal to achieving the aspirations of the Sustainable Development Goals. And likewise, the internet plays a critical role in achieving the Sustainable Development Goals: no poverty; good health and well-being; quality education; industry, innovation and infrastructure. Other relevant regulatory approaches around the globe include the EU's AI Act adopted in 2024, the Executive Order for Removing Barriers to American Leadership in AI of January 2025 and sectoral oversight in the US, the UK Framework for AI Regulation, and the 2023 G7 Guiding Principles and Code of Conduct; China has also developed some rules, and Egypt has the second edition of its National Artificial Intelligence Strategy, from 2025. So in all those strategies, we see some of the core principles that have shaped the internet, such as openness, interoperability and neutrality, guiding various AI governance strategies. So the question now becomes: how do we translate those agreed principles and frameworks into actions? And in some cases, what do those principles mean or look like in practical terms? So let's look at openness and transparency. What does this mean?


Luca Belli: Hadia, may I ask you to wrap up in 30 seconds?


Hadia Elminiawi: Yes, sure. That will be very quick; I'm almost done. Openness might mean open access to research and requiring AI models to include the components needed for full understanding and auditing. But what does ensuring transparent algorithms mean in practical terms? Is it realistic, or even desirable, to expect that all AI models be made fully open source? Given the amount of capital invested in these models, requiring complete openness could discourage investment in AI models, destroying a lot of economic value and hindering innovation. At the same time, transparency and openness raise some important ethical and security concerns. Is it truly responsible or logical to allow unrestricted access to tools that could be used to build weapons or plan harmful, disruptive actions? We may need layered safeguards: AI algorithms on top of other AI algorithms to ensure responsible and secure use. So what alternative solutions can we consider? One possibility could be requiring all AI developers to implement robust safety guardrails and to open source these guardrails rather than the models themselves. In addition, AI developers could be required to publish the safety guardrails that they have put in place. I guess this is an open discussion. And with that, I would like to wrap up. Thank you.


Pari Esfandiari: Thank you very much, Hadia. And on that, I want to thank all the panelists for their insightful contributions. Now I want to invite our invited community members to comment on what they have heard. You are also welcome to share your own views on the broader issues we have touched upon. I would start with Roxana. Roxana Radu, you have five minutes. Please start.


Roxana Radu: Thank you very much. I'm sorry for not being able to join you in person. Let me start by saying that there is a flourishing discussion now around ethics and principles in AI governance. In fact, what we've seen developed over the last five or six years is a plethora of ethical standards, guidelines and values to adhere to. But the key difference with internet governance is the level of maturity in these discussions, and also the ability to integrate those newly identified values into technical, policy and legal standards. What we've done in internet governance over the last 30 years is much more than identifying core values: we've applied them, we've embedded them into core practices, and we are continuing to refine these practices day by day. I think there are four key areas that require attention at this point in time, where we can bridge the internet governance debates and the AI governance discussions. First is the question of market concentration. Luca was already alluding to gatekeepers: how do we define them in this new space, with highly concentrated ownership of the technology, of the infrastructure, and so on and so forth? Second is diversity and equity in participation: engaging different stakeholders, but also stakeholders from parts of the world that are not equally represented. Thirdly, there is the hard-learned lesson of personal data collection, use, and misuse. We have more than 40 years of experience with that in the internet governance space, and we've placed emphasis on data minimization: do not collect more than what you need. This lesson does not seem to apply to AI; in fact, it's the opposite. Collect data even if you are not sure about its purpose currently; machines might figure out a way to use that data in the future. That is the opposite of what we've been practicing in recent years in internet governance.
And fourthly, and very much linked to these previous points, there's a timely discussion now around how to integrate some of these core values into technical standards. With AI, there seems to be a preference for unilateral standards, the giants developing their own standards and sharing them through APIs, versus globally negotiated standards, where a broader community can contribute and those voluntary standards could then be adopted by companies and by participants in those processes more broadly. I think we need to zoom in on some of these ways of bringing those core values into practice. And it's very opportune to do that now at the IGF. Thank you.


Luca Belli: Thank you very much, Roxana. I think there are some interesting points emerging here. Something I want to comment on very briefly, because it was raised before, is that we are discussing here how core internet values can apply to AI. And I think it's interesting to do this in a joint venture with the Coalition on Net Neutrality, because net neutrality is actually the implementation of core internet values into law. And as any lawyer who has studied Montesquieu would tell you, what counts in the law is the spirit of the law, right? I remember, 10 years ago, writing an article on the spirit of the net, where I was mentioning precisely that net neutrality was the enshrining into law of the spirit of the net, the core internet values. And so we now have to find a way to translate this into something applicable to AI. That is the huge challenge we have here today. And I'm pretty sure that our friend Bill Drake knows how to solve this challenge. Bill, I believe the floor is yours.


William Drake: Obviously I do not. Thank you. Okay, well, first of all I congratulate the organizers of this session on putting together an interesting concept. Trying to figure out how you map internet properties and values into the AI space is, I think, definitely a worthwhile activity. As Roxana noted, it builds on all the discussions at the international level in recent years about ethics, whether in UNESCO or other kinds of places, and I think it's worth carrying this forward. But I would start by noting a few constraining factors, three in particular. First, conceptually, let's bear in mind, again going back to what Vint said, that we're talking about different beasts. We're not talking here about a relatively bounded set of network operators and so on; we're talking about a vast and diverse range of AI processes and services in an unlimited range of application areas, from medicine to environment and beyond. So which internet properties will apply, generally or in specific contexts, simply can't be assumed. We need to do close investigation and mapping, and I think there's a great project there for somebody who wants to develop that matrix; I look forward to reading whoever does it first. There are reasons to wonder whether some of these things really do apply clearly. Renata suggested that net neutrality, for example, might not be so directly applicable. There are a lot of other intellectual challenges there, I think. Secondly, of course, there are the material interests of the private actors involved. Luca referred to the concentration issues. It's nice to think about values, but I wouldn't expect all the US and Chinese companies that are involved in this space to join an AI engineering task force and hum their support for voluntary international standards.
To the contrary they’ve kind of demonstrated that they’ll do pretty much anything to promote their interests at this phase including sponsoring military parades for dear leaders in Washington. and so on. So it’s unclear how much they would embrace any kind of externally originated constructs like neutrality, openness, transparency, etc. that don’t really fit well into their immediate profitability profile and how well these things would apply to very large online platforms and search engines, etc. Again, real challenges there. And lastly, of course, the material interests of states. Net neutrality, of course, is verboten in the United States now. Applying it to AI, of course, would be too. Generally speaking, multilateral regulatory interventions are impossible to contemplate in the Trump era, at least for those of us who are in North America. And I’m not sure what China would sign on to in that context. So in principle, you would like to think, though, that transparency and openness with regard to governance processes, especially international governance processes, could be pursued. And there, you know, I would just like to flag a couple of quick points before I run out of time. Lessons from Internet governance, I think, that are relevant. One, first, we have to be real clear about where there’s an actual demand for international governance and regimes and the application of these kinds of values and so on. We simply can’t just assume, because the technology is there and the issues are there, that there’s a functional demand. You know, often people point to things and say, oh, there’s some new phenomena. We must have governance arrangements. But very often the demand for governance arrangements is not equally distributed across actors, and those highfalutin aspirations don’t get fulfilled. So, you know, I mean, we used to talk about safety, right? There was a lot of international discussion around safety. 
Now suddenly safety is out the window, and we're all talking about how we want to promote innovation and investment. So it's easy to say that we have this demand to do all these wonderful new normative things, but in reality, when push comes to shove, we have to look at where there's a real functional demand. Where do you actually need international governance, interoperability or harmonization of rules? In the telecom space, if you look historically, with radio frequency spectrum we had to have non-interference; telecom networks had to be interconnected and have standards to allow them to pass traffic between them. So there was a strong incentive for states to get on board and do something, even if they had different visions of how to do that and could fight over it. Which aspects of the AI process absolutely require some kind of coordination or harmonization? It's not entirely clear, and I think we can't just assume that. My other point, as I don't want to run out of time, is just to say, as somebody who was around 20 years ago and remembers all the fights over internet governance and what internet governance is and so on: we are in a liminal moment, like we were 20 years ago, where people are not clear what the phenomenon is, how we define it, what governance means in this context, etc. This requires a great deal more thinking when you're applying it to the specificities of the AI space. I hear a lot of these discussions in the UN where people seem to be simply grafting constructs from other international policy environments onto AI and saying, well, we'll just apply the same rules that apply elsewhere. And this is like saying that we'll apply the rules from the telegraph to the telephone, and from the telephone to television: with every new technology, we look at it through the lens of previous technologies, but often that doesn't work so well.
And my last point, and then I'll stop: I'd be very careful in thinking about multilateral action. I noticed that the G77 and China, in a reaction to the co-facilitators' text on AI, are saying that they want binding international commitments coming out of the UN process, that they will not accept purely informal agreements coming out of the UN process. I look at what's going on in the AI space and I'm thinking: seriously, what kind of binding international agreements are we going to begin negotiating in the United Nations in the near term, and how? If you set that up at the front end as the object you're trying to drive towards, you can see how difficult all this is going to become very quickly. So I probably went over five minutes, so I'll stop. Thank you.


Pari Esfandiari: Thank you very much, Bill. For the sake of time, I'm not going to reflect; you packed an awful lot of information into that section, but we don't have enough time. So I go directly to Shuyan Wu. Shuyan Wu, the floor is yours.


Shuyan Wu: Okay, thank you. Hello, everyone. It's a great pleasure to attend this important discussion. I am from China Mobile, one of the world's largest telecom operators, so I'd like to share China Mobile's practices and experiences in bridging internet and AI governance. In the age of the internet, we have continued to promote the development of the internet ecosystem towards fairness, transparency, and inclusiveness. This commitment is reflected in our efforts across infrastructure development, user rights protection, and bridging the digital divide. Firstly, in terms of infrastructure development, we strive to ensure equal access to and inclusive use of internet services. China Mobile's mobile and broadband networks now cover all villages across the country, and we've built the world's largest and most extensive 5G network. Second, when it comes to protecting users' rights and interests, we work actively to create a transparent and trustworthy online environment. We provide clear, user-friendly service mechanisms and have introduced quality management tools to ensure users' right to information and independent decision-making. For specific groups such as the elderly and minors, we focus on fraud prevention education and offer customized services to build a safer and greener digital space. Third, to bridge the digital divide and support inclusive growth, we've implemented targeted solutions. For elderly users, we offer dedicated discounts and have tailored our smart services to their needs. For minors in rural areas, our 5G smart education cloud network services are helping to reduce the gap in education resources between urban and rural communities. As we transition from the internet era to the age of AI, China Mobile is actively adapting its experience and capabilities to the evolving needs of AI governance. We are striving to build a digital ecosystem featuring universal access, decentralization, transparency, and inclusiveness.
We are investing in AI infrastructure to promote resource sharing and encourage decentralized innovation, backed by our strong computing power, data resources, and product solutions such as large language models and AI development platforms. At the same time, we continually leverage AI capabilities to build a transparent and trustworthy digital environment, effectively safeguarding user rights. For instance, China Mobile applies AI-powered information detection technologies in scenarios like video calls and financial services to help users identify false or harmful content. Moreover, we are committed to ensuring that the benefits of AI are shared by all. For minors, we have launched personalized education and scenario-based interaction solutions. For the elderly, we offer AI-powered entertainment, health monitoring, and safety services; and for rural areas, our smart village doctor system delivers quality health care to remote communities. That's all for my sharing. Thank you.


Olivier Crepin-Leblond: As everyone points over to me: thank you very much, and now we're going to open the floor for your input and your feedback on what we've heard so far. I'm also the remote participation moderator, and there's been a really interesting debate going on online; I'm not sure how many of you have been following it. I was going to ask whether we could have the two main participants who were speaking back and forth online, Alejandro Pisanty and Vint Cerf, because Vint, of course, is always active both online and with us. And after those two, we'll start with the queue in the room. All right, let's get going. Alejandro, you have the floor. Thank you.


Alejandro Pisanty: Good morning. Can you hear me well? Yes, very well. Thank you. I was making these points also in previous discussions of the Core Internet Values Dynamic Coalition. If you are trying to translate the experience of governance from the internet to artificial intelligence, I think there are a few points that are valuable to take into account, and since many of them have been made already, I'm trying to just group them. First, you have to define pretty well what you want to govern: to what branch of the enormous world of artificial intelligence you actually want to apply some governance. Otherwise you'll have some serious ill effects. Using AI for molecular modeling, protein folding and that kind of problem, or using it as a back-office system for detecting fraud in credit cards and so forth, are in turn very different beasts. So it's very important not to regulate, let's say, with such generality that rules from one of them will impede progress in others where they are absolutely not necessary. Second, and what I think is very important, and we learned this from 30 years of internet governance, is to make sure you are governing the right thing, in the following sense: what does AI, like the internet in its turn, bring that is new, compared to things that we already know? What rules do we already have that we can just apply or modify to take AI into account? For example, we have purchasing rules, especially in governments, where you know the constraints on systems that you can buy for government: they cannot be discriminatory, they cannot be harmful, and so forth. So you can apply those rules instead of creating a whole new world. It's like medical devices: you already have so many rules for automated medical devices that you can extend to artificial intelligence, considering the harms and the consequences of the harms.
These will be different, they will be amplified, there's probability, there's uncertainty, but we know how to deal with that; we just need to change the scale and gain a better understanding of these factors. Next, what do you expect to obtain from governance? Do you want more competition? Do you want a reduction of discrimination and bias? Do you want more respect for intellectual property? Do you want more access to global resources for the Global South, and so forth? Because this will determine the institutional and organizational design. And next, and most important, and this is something that the NETmundial+10 meeting, for example, addressed among the other good things it did: how do you actually bring the different stakeholders together? Who are the stakeholders, and how do you bring them to the table? Say you want to regulate large language models provided over the internet for chatbots, which are the dominant aspect of public discussion these days. Why would they come to the table? Why would OpenAI, Google, Meta, et cetera, not to speak of Mistral and certainly the providers in China and other countries, which are operating under completely different sets of rules, come together and agree to limit themselves in some way? To sit at the table with people who are their users or their clients, and potentially their competitors if something arises from their innovation? And especially, how do you bring them together to put some money into the operation of the system, to agree to have a structure, to agree to have their hands tied to some extent? What has happened in internet governance, for example, is very different for, let's say, the domain name system and for fighting phishing and scams. For the domain name system, you have companies that feared that strong rules for competition would come from the US government, and they finally agreed to come together with civil society and the technical community, which is also a key point.
The experts always have to be at the table. As the ICANN paper has stated very recently for internet governance, the technical community is not just one more participant: it's a pillar, and you need to know what the limitations and the capabilities of the technology are. I'll stop there. Thank you.


Olivier Crepin-Leblond: Thank you, Alejandro. OK, next, Vint Cerf.


Vint Cerf: First, I have to unmute. So thank you so much, Alex. I always enjoy your line of reasoning. Let me suggest a couple of small points. The first one is that, with regard to regulation of AI-based applications, I think a focus of attention should be on risk to the users of those technologies and, of course, potential liability for the provider of those applications. So a high-risk application, such as medical diagnosis, recommended medical treatment or maybe financial advice, ought to have a high level of safety associated with it, which suggests that, if there is regulation, the provider of the service has to show due diligence, that they have taken steps that are widely agreed to reduce risk for the user. So risk is probably a very important metric here, and concurrently, liability will be a very important metric for action by the providers of AI-based services. I think another thing of significance is the provenance of the materials used to train these large language models, for example, and explainability, chain of reasoning, chain of thought, those sorts of things, to help us understand the output that comes from interacting with these large language models. And finally, I mentioned this earlier, but let me reiterate that the agent-to-agent protocol and the model context protocols are there, I think, partly to make things work better and more reliably, but they might also be important for limiting liability. In other words, there's a motivation for implementing and designing these things with great care, so that it's clear, for example, in a multi-agent interaction, which agents might be responsible for which outcomes. Again, something that relates to liability for parties who are offering these products and services. So I'll stop there. I hope that others who are participating will be able to elaborate on some of these ideas.


Olivier Crepin-Leblond: Thank you, Vint. Just one point: earlier in the chat, you mentioned, I'm seeing here, indelible ways to identify sources of content used to train AI models. Could you explain a bit?


Vint Cerf: Yes, I was trying to refer to provenance here. The thing people worry about is that the material used to train the model may be of uncertain origin. If someone says, well, how can I rely on this model, how do I know what it was trained on? Here, I think it should be possible to identify what the sources were in a way that is incontrovertible. Digitally signed documents, or materials whose provenance can be established, are important, because then we can go back to the parties providing those things and ask them questions about the verifiability of the material in that training data.


Olivier Crepin-Leblond: OK, thanks very much for this, and apologies for the wait. But please, over to the gentleman standing at the microphone, and please introduce yourself in your intervention.


Audience: Thank you. Yes. Hi. Thank you for the excellent panel. I'm Dominique Hazaël-Massieux. I work for W3C, where, among other things, I oversee our work around AI and its impact on the web, so this is a place where a lot of web standards are being developed. I wanted to make two remarks, one on scope and one on incentives for governance. On scope, one of the topics that was brought up, and we are at the IGF and it's been mentioned a number of times, is that AI is extremely broad. One useful way, I think, to segment the problem is to look at the intersections of AI and the Internet, and there are a number of those: AI has been fed a lot of web content; a lot of web content is now being produced through AI; AI is starting, as Vint was describing, to be used as agents on the web and on the Internet. So it is worth looking exactly at these intersections and what AI changes to the existing… and they are all critical components of their strategy, in terms of both building their tools and distributing the tools. And they can only keep that true if they don't impoverish the ecosystem to the point where there is no more content they can feed on, or no more services that would accept to reuse or integrate with the system. So at the end of the day, I think it's really a matter, in particular in this emerging agent architecture that Vint was describing, of understanding what the expectations for these agents are, learned from rules that already exist. For instance, in the web space, we have a number of very clear expectations as to what you ought to do if you're a browser, literally a user agent. Understanding how they apply to AI-based agents is, I think, going to be hopefully very illuminating about what kind of governance we should put in place around that.


Olivier Crepin-Leblond: Thank you very much for your intervention. And the next person in line, please introduce yourself.


Audience: Yeah, hi. My name is Andrew Campling. In this context, I'm an internet standards and internet governance enthusiast. To build on Bill's comments, I probably wouldn't start from here either. But here we are, and, to be somewhat pessimistic, we're probably too late. If I were going to look anywhere to start, it wouldn't be the internet. I'd look closely at lessons from social media specifically, where we've got, in my opinion, a small number of highly dominant players who are uninterested in collaborative multi-stakeholder initiatives unless those are commercially worthwhile to them. If we look to the internet model and try to build a collaborative multi-stakeholder governance model, I don't think there's a commercial imperative for the players to take part. It would be far too easy to game the system and drag things out, and by the time something was agreed it would be irrelevant. So if I were to start anywhere, I'd look closely at duty of care as a key requirement, and also explore why we wouldn't apply the precautionary principle widely, and use those as two foundational building blocks. I wouldn't start with internet governance. Apologies for the pessimism, but I think we have to be pragmatic and realistic about where we are. Thank you.


Olivier Crepin-Leblond: I should say, this is quite a British intervention. Okay, thank you so much. Shall I pass it over to Luca, or should we go to the conclusions, because there are only about six minutes left? Yeah, I think we can go.


Luca Belli: We have six minutes. Do we have any other comments or questions in the room? I don't see any hands, and we have exhausted the comments from the online participants. I think we can go for a round of very quick conclusions, like very prehistoric tweets of 240 characters. But we don't have much time, because we've got to go now to Yik Chan. Sorry. Yik Chan, with the help of ChatGPT, will distill all the knowledge into a five-minute summary.


Yik Chan Ching: Okay, and thank you very much for giving me the five minutes to make some comments. I'm from the PNAI, the Policy Network on Artificial Intelligence, which is also an intersessional process of the IGF, so it's very interesting to have this joint session between the PNAI and the DCs. I found the discussion really fascinating, and I have two observations, based on the PNAI's past three years of research on AI governance. For example, we produced two reports on big issues such as liability, interoperability and environmental protection. So there are two issues I would like to comment on. The first one is about institutional setting, because Bill asked how we can collaborate at the global level and what the initiatives or interests are. First of all, we know that there is a UN process going on, in terms of the scientific panel and also the global dialogues. So we should probably give them some opportunity and a little bit of trust, and hold on to see what the outcomes from the UN level are. Secondly, from my experience, what really makes a difference between AI governance and internet or social media governance is that we learn from our past experience, especially social media's experience. We have such a vibrant discussion, with early intervention and the precautionary principle, as our British colleague said, and with different stakeholders from civil society, academia and industry. So in that sense we are much more precautionary than in the social media and internet eras, which will probably make a difference.
And the second observation is about which areas we should look at. From my experience, and also the PNAI's experience, I agree with Vint. First of all, it's risk: risk is very important. Secondly, the safety issues. And of course liability, because liability is the mechanism by which we hold AI developers and deployers accountable, so that's very important. The third one is interoperability. When we talk about interoperability, it's not only about principles, ethics and norms, but also standards, and standards will play a significant role in regulating AI. I'm very glad to see a lot of progress in AI standard-making. For example, at the EU level there are many standards, and an announcement of EU standards under the AI Act is expected. There has also been huge progress in standard-making in China, on safety and other issues. So I think AI standards will be one of the crucial areas for regulating AI in the future. I'll stop here. Thank you very much.


Olivier Crepin-Leblond: Thank you very much, Yik Chan. There are about two minutes left, I guess, to ask our co-moderators for their reflections. I was going to say one tweet from each of our participants, but I don't know if we can do it in two minutes. Should we try? One tweet? Yeah, why not? A quick tweet. Okay, let's start with the table then, with the person furthest to my right, which is your left. Bill Drake?


Luca Belli: A message of hope in 20 seconds.


William Drake: A message of hope in 20 seconds. Wow.


Luca Belli: Or of disgrace, as you prefer.


William Drake: I was going to say abandon all hope. All right. Well, I'll just echo again the point about being very clear about exactly what demand there is, for what kind of governance, over what kinds of processes. Too much of the discussion around these issues is too generic and high-level to be very meaningful when we get down to the real nitty-gritty of what's going on in different domains of AI development and application, so we need a dose of realism there. But I like the idea of the mapping effort you're trying to do, and I look forward to seeing you develop it further.


Olivier Crepin-Leblond: Thank you, Bill. Next, Shuyan.


Shuyan Wu: Okay, thank you. This is my first time attending this kind of discussion, and it has been very valuable to share my opinions and exchange views with all of you. I hope I'll have another chance to continue exchanging ideas with you. Thank you.


Hadia Elminiawi: Regional and international strategies and cooperation should not be seen as conflicting with national sovereignty. National and international strategies, cooperation and collaboration should go in parallel and hand in hand. They should support and strengthen one another's goals; they need to have aligned objectives and be implemented simultaneously.


Olivier Crepin-Leblond: Thank you, Hadia. Sandrine.


Sandrine ELMI HERSI: Yes. We can no longer think of AI governance and internet governance as separate entities. As we noted today, there are strong interlinks between LLMs and internet content and services, so applying the internet's core principles to AI is not a whim or an accessory: it is the only way to preserve the openness and richness of the internet we spent years building. We can and must act now to establish a multi-stakeholder approach with that in mind.


Olivier Crepin-Leblond: Thank you. Renata.


Renata Mielli: Just three words. We talked about how to transform these principles into technical standards, and I want to say we need oversight, agency and regulation. We need to remember that governance and regulation are two different things: governance needs to be multistakeholder, and we need national regulations for AI systems.


Olivier Crepin-Leblond: Thank you, Roxana.


Roxana Radu: I’ll just say that we need to walk the talk, so now that we’ve done this initial brainstorming session, I look forward to seeing what we can come up together in terms of bridging this gap between what we’ve learned in internet governance, and where we’re starting in AI discussions. This is not to say that everything applies, but we’ve learned a lot, and we shouldn’t reinvent the wheel.


Olivier Crepin-Leblond: Thank you. And finally, Vint.


Vint Cerf: I think my summary here is very simple. We just have to make sure that when we build these systems, we keep safety in mind for all of the users. That's going to take a concerted effort from all of us.


Olivier Crepin-Leblond: Thank you very much, and if anybody in the room is interested in continuing this discussion, which I hope you are after this session, then please come over to the stage and share your details with us. You can get onto the DCs' mailing lists, continue the discussion, and participate in future work of this kind. Thank you.


P

Pari Esfandiari

Speech speed

127 words per minute

Speech length

599 words

Speech time

282 seconds

Internet’s foundational principles of openness, decentralization, and transparency should guide AI governance as generative AI becomes a main gateway to content

Explanation

Pari argues that as generative AI increasingly serves as the primary access point for online content, the core values that made the internet successful – being global, interoperable, open, decentralized, end-to-end, robust and reliable – should be applied to govern AI systems. She emphasizes that these were deliberate design choices that made the internet a global commons for innovation and human agency.


Evidence

References the internet’s core values: global, interoperable, open, decentralized, end-to-end, robust and reliable, and freedom from harm as deliberate design choices


Major discussion point

Bridging Internet Core Values and AI Governance


Topics

Infrastructure | Legal and regulatory


Fundamental network neutrality principles such as generativity and competition on a level playing field should apply to AI infrastructure, AI models, and content creation

Explanation

Pari presents this as one of the two overarching questions for the session, suggesting that the principles ensuring fair competition and innovation in internet infrastructure should be extended to AI systems. This includes ensuring that AI infrastructure and models maintain the same level playing field that network neutrality provides for internet services.


Evidence

Presented as one of two main questions guiding the session discussion


Major discussion point

Applying Network Neutrality Principles to AI


Topics

Infrastructure | Legal and regulatory


L

Luca Belli

Speech speed

159 words per minute

Speech length

1229 words

Speech time

463 seconds

Internet and AI are “two different beasts” – Internet built on open, decentralized architecture while AI is highly centralized and proprietary

Explanation

Luca emphasizes the fundamental architectural differences between the internet and AI systems. While the internet was built on open, decentralized, transparent, and interoperable principles that enabled its success over 50 years, AI operates through highly centralized, proprietary, and often opaque systems controlled by few companies.


Evidence

References Vint Cerf’s expression from previous year and contrasts internet’s 50+ year success with current AI centralization trends


Major discussion point

Bridging Internet Core Values and AI Governance


Topics

Infrastructure | Legal and regulatory


H

Hadia Elminiawi

Speech speed

122 words per minute

Speech length

721 words

Speech time

351 seconds

Core internet values like openness, interoperability, and neutrality are appearing in various AI governance strategies globally

Explanation

Hadia observes that the fundamental principles that shaped the internet are being incorporated into AI governance frameworks worldwide. She notes that various international strategies and regulatory approaches are adopting these core principles as foundational elements for AI governance.


Evidence

References EU’s AI Act (2024), US Executive Order for AI Leadership (January 2025), UK Framework for AI Regulation, G7’s Guiding Principles and Code of Conduct (2023), China’s AI rules, Egypt’s National AI Strategy (2025), and African Union Continental AI Strategy (July 2024)


Major discussion point

Bridging Internet Core Values and AI Governance


Topics

Legal and regulatory | Infrastructure


Complete openness of AI models may be unrealistic given capital investment and could discourage innovation and raise security concerns

Explanation

Hadia questions whether requiring full transparency and open-source access to AI models is practical or desirable. She argues that given the massive capital investments in AI development, complete openness could discourage investment and destroy economic value, while also raising ethical and security concerns about unrestricted access to potentially harmful tools.


Evidence

Points to the substantial capital investment in AI models and security risks of unrestricted access to tools that could be used for weapons or harmful actions


Major discussion point

AI Transparency and Explainability Challenges


Topics

Legal and regulatory | Cybersecurity


Agreed with

– Sandrine ELMI HERSI
– Renata Mielli
– Vint Cerf

Agreed on

Need for AI transparency and explainability to address opacity challenges


Disagreed with

– Sandrine ELMI HERSI

Disagreed on

Feasibility of complete AI transparency and openness


Alternative solutions like requiring open-source safety guardrails rather than full model transparency should be considered

Explanation

As an alternative to complete model transparency, Hadia suggests that AI developers could be required to implement and make public their safety guardrails and protective measures. This approach would provide transparency about safety measures without exposing the entire model architecture.


Evidence

Suggests requiring AI developers to implement robust safety guardrails and publish information about these safety measures


Major discussion point

AI Transparency and Explainability Challenges


Topics

Legal and regulatory | Cybersecurity


Regional and international AI strategies should align with and strengthen national sovereignties rather than conflict with them

Explanation

Hadia emphasizes that international cooperation and national sovereignty in AI governance should be complementary rather than competing approaches. She argues that national and international strategies should have aligned objectives and be implemented simultaneously to support each other’s goals.


Major discussion point

Global South Perspectives and Digital Divide


Topics

Legal and regulatory | Development


R

Roxana Radu

Speech speed

144 words per minute

Speech length

504 words

Speech time

208 seconds

Internet governance experience over 30 years provides mature framework for applying values to technical, policy and legal standards that AI governance lacks

Explanation

Roxana highlights the significant difference in maturity between internet governance and AI governance discussions. While internet governance has spent decades not just identifying core values but actually implementing and embedding them into practical standards and practices, AI governance is still primarily focused on identifying ethical principles without the same level of practical application.


Evidence

Contrasts 30+ years of internet governance development with current early-stage AI ethics discussions


Major discussion point

Bridging Internet Core Values and AI Governance


Topics

Legal and regulatory | Infrastructure


W

William Drake

Speech speed

175 words per minute

Speech length

1224 words

Speech time

418 seconds

Need to define precisely what aspects of AI require governance rather than applying generic high-level principles

Explanation

William argues that the AI field is too vast and diverse to apply broad governance principles uniformly across all applications. He emphasizes the need for careful investigation and mapping to determine which internet properties apply generally versus in specific contexts, rather than making assumptions about universal applicability.


Evidence

Points to the unlimited range of AI applications from medicine to environment and suggests need for close investigation and mapping


Major discussion point

Bridging Internet Core Values and AI Governance


Topics

Legal and regulatory


Private actors’ material interests make them unlikely to embrace externally originated constructs like neutrality and transparency

Explanation

William expresses skepticism about major AI companies voluntarily adopting governance frameworks that don’t align with their immediate profitability goals. He argues that these companies have demonstrated they will prioritize their business interests over external governance constructs, making multilateral cooperation challenging.


Evidence

References companies’ demonstrated behavior of prioritizing business interests, including ‘sponsoring military parades for dear leaders in Washington’


Major discussion point

Market Concentration and Gatekeeping Issues


Topics

Economic | Legal and regulatory


Must identify where there’s actual functional demand for international governance rather than assuming need based on technology existence

Explanation

William warns against assuming that new technology automatically creates demand for governance arrangements. He argues that successful international governance requires clear functional needs for coordination or harmonization, using historical examples from telecommunications where technical requirements drove cooperation.


Evidence

Uses historical examples of radio frequency spectrum and telecom network interconnection where technical necessity drove international cooperation


Major discussion point

Governance Implementation Challenges


Topics

Legal and regulatory


Disagreed with

– Andrew Campling

Disagreed on

Starting point for AI governance frameworks


Multilateral regulatory interventions face political obstacles, and binding international agreements may be unrealistic

Explanation

William points to current political realities that make international AI governance challenging, particularly noting that net neutrality is now prohibited in the US and that the G77 and China are demanding binding commitments from UN processes. He questions the feasibility of negotiating binding international AI agreements in the current political climate.


Evidence

Notes that net neutrality is ‘verboten in the United States now’ and references G77 and China’s demands for binding international commitments from UN AI processes


Major discussion point

Governance Implementation Challenges


Topics

Legal and regulatory


Disagreed with

– Yik Chan Ching

Disagreed on

Optimism vs pessimism about multilateral AI governance


S

Sandrine ELMI HERSI

Speech speed

125 words per minute

Speech length

791 words

Speech time

378 seconds

Many AI models remain “black boxes” requiring greater openness to research community for auditability and explainability

Explanation

Sandrine argues that despite some progress through sectoral initiatives and codes of conduct, many AI models lack sufficient transparency. She emphasizes that greater openness, particularly to researchers, is essential for improving both the auditability and explainability of AI systems, as well as their efficiency.


Evidence

References ARCEP’s ongoing technical hearings and file testing with data scientists, and notes some progress through sectoral initiatives and codes of conduct


Major discussion point

AI Transparency and Explainability Challenges


Topics

Legal and regulatory


Agreed with

– Renata Mielli
– Vint Cerf
– Hadia Elminiawi

Agreed on

Need for AI transparency and explainability to address opacity challenges


Disagreed with

– Hadia Elminiawi

Disagreed on

Feasibility of complete AI transparency and openness


Network neutrality non-discrimination principle should extend to AI infrastructure, models, and content curation to prevent privileging of certain services

Explanation

Sandrine argues that the non-discrimination principle originally applied to prevent ISPs from favoring their own services should now be extended to AI systems. She contends that today’s digital gatekeepers include not just ISPs but also AI systems that can narrow user perspectives and freedom of choice.


Evidence

References ARCEP’s work on assessing extension of non-discrimination principles and draws parallel to original ISP regulation


Major discussion point

Applying Network Neutrality Principles to AI


Topics

Infrastructure | Legal and regulatory


Disagreed with

– Renata Mielli

Disagreed on

Direct applicability of net neutrality principles to AI


Need to preserve plurality of economic players’ access to key inputs for AI development including data, computing resources, and energy

Explanation

Sandrine emphasizes the importance of maintaining competitive AI markets by ensuring diverse economic actors can access essential resources for AI development. This includes not just data but also the computational power and energy resources necessary for training and running AI models.


Evidence

References ARCEP’s investigation into preserving openness of AI markets


Major discussion point

Market Concentration and Gatekeeping Issues


Topics

Economic | Infrastructure


Need to ensure diversity of content when AI chatbots provide single answers instead of hundreds of web pages

Explanation

Sandrine highlights a fundamental shift in how users access information – from browsing multiple web pages to receiving single AI-generated responses. She argues this change requires ensuring that AI systems don’t simply amplify dominant sources but remain open to smaller and independent content creators.


Evidence

Contrasts traditional web search results (hundreds of pages) with AI chatbot responses (single answer)


Major discussion point

Applying Network Neutrality Principles to AI


Topics

Sociocultural | Legal and regulatory


R

Renata Mielli

Speech speed

111 words per minute

Speech length

674 words

Speech time

361 seconds

AI systems need transparency and explainability especially for social impact assessment and compliance processes, unlike the naturally open internet protocols

Explanation

Renata argues that AI governance requires specific principles like transparency and explainability that weren’t as critical for internet governance because the internet was built on naturally open, decentralized protocols developed collaboratively. AI systems, being opaque and centralized, require these additional transparency measures for social impact assessment and compliance.


Evidence

Contrasts internet’s open, decentralized, collaborative protocol development with AI’s opacity and centralization


Major discussion point

AI Transparency and Explainability Challenges


Topics

Legal and regulatory


Agreed with

– Sandrine ELMI HERSI
– Vint Cerf
– Hadia Elminiawi

Agreed on

Need for AI transparency and explainability to address opacity challenges


Net neutrality principles may not directly apply to AI since AI technology itself is not neutral unlike internet infrastructure

Explanation

Renata identifies a fundamental difference between internet infrastructure and AI technology in terms of neutrality. While net neutrality was designed for telecommunications infrastructure that could be neutral, AI technology itself is inherently non-neutral, making direct application of net neutrality principles problematic.


Evidence

Distinguishes between neutrality in telecommunications infrastructure versus the inherent non-neutrality of AI technology


Major discussion point

Applying Network Neutrality Principles to AI


Topics

Infrastructure | Legal and regulatory


Disagreed with

– Sandrine ELMI HERSI

Disagreed on

Direct applicability of net neutrality principles to AI


Need to transform principles into technical standards while distinguishing between governance and regulation

Explanation

Renata emphasizes the practical challenge of moving from high-level principles to implementable technical standards. She stresses the importance of understanding that governance and regulation are different concepts, with governance being multistakeholder while regulation requires national-level legal frameworks.


Major discussion point

Governance Implementation Challenges


Topics

Legal and regulatory | Infrastructure


V

Vint Cerf

Speech speed

133 words per minute

Speech length

1262 words

Speech time

567 seconds

Provenance of information used by AI agents and references must be available for critical evaluation of outputs

Explanation

Vint emphasizes the critical importance of being able to trace and verify the sources of information used by AI systems. He argues that users need access to the provenance of training data and references to conduct their own critical thinking and evaluation of AI outputs, particularly given concerns about hallucination and counterfactual information.


Evidence

References the problem of AI hallucination and generation of counterfactual output, emphasizing need for critical evaluation capabilities


Major discussion point

AI Transparency and Explainability Challenges


Topics

Legal and regulatory


Agreed with

– Sandrine ELMI HERSI
– Renata Mielli
– Hadia Elminiawi

Agreed on

Need for AI transparency and explainability to address opacity challenges


Agent-to-agent protocols and model context protocols are being developed to ensure interoperability among AI systems

Explanation

Vint describes emerging technical standards (A2A for agent-to-agent interaction and MCP for model context protocol) that aim to create interoperability between AI agents. These protocols are designed to provide clarity and confidence in semantic matching between agents, preventing the kind of information degradation that occurs in the telephone game.


Evidence

Explains A2A (agent-to-agent) and MCP (model context protocol) standards and uses the analogy of the telephone parlor game to illustrate communication degradation risks


Major discussion point

Technical Standards and Interoperability


Topics

Infrastructure | Digital standards


Agreed with

– Yik Chan Ching
– Audience

Agreed on

Importance of technical standards and interoperability for AI systems


Focus should be on risk to users and liability for providers, with high-risk applications requiring higher safety levels

Explanation

Vint advocates for a risk-based approach to AI regulation, where the level of safety requirements corresponds to the potential risk to users. He suggests that high-risk applications like medical diagnosis or financial advice should have stringent safety requirements, with providers demonstrating due diligence to reduce user risk.


Evidence

Provides examples of high-risk applications including medical diagnosis, medical treatment recommendations, and financial advice


Major discussion point

Risk-Based AI Governance Approach


Topics

Legal and regulatory | Cybersecurity


Agreed with

– Yik Chan Ching
– Alejandro Pisanty

Agreed on

Risk-based approach to AI governance with focus on user safety and provider liability


S

Shuyan Wu

Speech speed

120 words per minute

Speech length

483 words

Speech time

239 seconds

China Mobile’s experience shows importance of ensuring equal access, protecting user rights, and bridging digital divides in AI era

Explanation

Shuyan describes China Mobile’s comprehensive approach to digital inclusion, covering infrastructure development (5G networks reaching all villages), user protection (transparent services, fraud prevention), and targeted solutions for vulnerable groups (elderly, minors, rural communities). This experience is being adapted for AI governance to ensure universal access and inclusive benefits.


Evidence

Provides specific examples: China Mobile’s 5G network covering all villages, customized services for elderly and minors, 5G smart education for rural areas, AI-powered fraud detection, and smart village doctor systems


Major discussion point

Global South Perspectives and Digital Divide


Topics

Development | Infrastructure


A

Audience

Speech speed

147 words per minute

Speech length

544 words

Speech time

220 seconds

Need to focus on intersection of AI and internet where AI feeds on web content and produces web content

Explanation

The audience member from W3C suggests that rather than trying to govern all of AI, focus should be on the specific intersections between AI and the internet. This includes how AI systems consume web content for training, produce web content as output, and operate as agents within the web ecosystem.


Evidence

Mentions AI being fed from web content, web content being produced through AI, and AI being used as agents on the web


Major discussion point

Technical Standards and Interoperability


Topics

Infrastructure | Digital standards


Agreed with

– Vint Cerf
– Yik Chan Ching

Agreed on

Importance of technical standards and interoperability for AI systems


Duty of care and precautionary principle should be foundational building blocks for AI governance

Explanation

Andrew Campling argues that instead of starting with internet governance models, AI governance should be built on duty of care requirements and precautionary principles. He suggests these would be more practical and realistic foundations given the commercial realities and dominant players in the AI space.


Evidence

Draws comparison to social media governance challenges with dominant players disinterested in collaborative initiatives


Major discussion point

Risk-Based AI Governance Approach


Topics

Legal and regulatory | Cybersecurity


Disagreed with

– William Drake

Disagreed on

Starting point for AI governance frameworks


A

Alejandro Pisanty

Speech speed

167 words per minute

Speech length

718 words

Speech time

257 seconds

Need to apply existing rules for automated systems and medical devices to AI rather than creating entirely new frameworks

Explanation

Alejandro argues for leveraging existing regulatory frameworks rather than building AI governance from scratch. He suggests that many rules already exist for automated systems, medical devices, and government procurement that can be extended or modified to address AI-specific concerns like discrimination and harm, rather than creating completely new regulatory structures.


Evidence

Provides examples of existing government purchasing rules requiring non-discriminatory systems, medical device regulations for automated systems, and established approaches to handling uncertainty and probability in regulation


Major discussion point

Risk-Based AI Governance Approach


Topics

Legal and regulatory


Agreed with

– Vint Cerf
– Yik Chan Ching

Agreed on

Risk-based approach to AI governance with focus on user safety and provider liability


Y

Yik Chan Ching

Speech speed

144 words per minute

Speech length

500 words

Speech time

207 seconds

Risk assessment, safety issues, and liability mechanisms are crucial for holding AI developers accountable

Explanation

Yik Chan emphasizes three key areas for AI governance based on PNAI’s research: risk assessment as a fundamental approach, safety as a critical concern, and liability as the mechanism to ensure AI developers and deployers remain accountable for their systems’ impacts.


Evidence

References PNAI’s three years of research and reports on liability, interoperability, and environmental protection


Major discussion point

Risk-Based AI Governance Approach


Topics

Legal and regulatory


Agreed with

– Vint Cerf
– Alejandro Pisanty

Agreed on

Risk-based approach to AI governance with focus on user safety and provider liability


AI standards development is progressing significantly in EU, China, and other regions, particularly around safety issues

Explanation

Yik Chan highlights the substantial progress being made in AI standardization efforts globally, with particular emphasis on safety standards. He notes that standards will play a crucial role in AI regulation and points to developments in multiple jurisdictions including the EU’s AI Act standards and China’s safety-focused standards.


Evidence

References EU AI Act standards announcements and China’s progress on safety and other AI-related standards


Major discussion point

Technical Standards and Interoperability


Topics

Infrastructure | Digital standards


Agreed with

– Vint Cerf
– Audience

Agreed on

Importance of technical standards and interoperability for AI systems


Early intervention and precautionary approaches in AI governance benefit from lessons learned from social media governance failures

Explanation

Yik Chan argues that the AI governance community is more prepared than previous technology governance efforts because of lessons learned from social media. He suggests that having vibrant discussions and early intervention from multiple stakeholders (civil society, academia, industry) represents a more precautionary approach than was taken with social media.


Evidence

Contrasts current multi-stakeholder AI discussions with past social media governance approaches


Major discussion point

Governance Implementation Challenges


Topics

Legal and regulatory


Disagreed with

– William Drake

Disagreed on

Optimism vs pessimism about multilateral AI governance


O

Olivier Crepin-Leblond

Speech speed

144 words per minute

Speech length

774 words

Speech time

321 seconds

Interactive multi-stakeholder sessions are essential for effective governance discussions on bridging internet and AI governance

Explanation

Olivier emphasizes the importance of creating interactive forums where diverse speakers can present different angles on complex governance topics, followed by broader community discussion. He advocates for inclusive participation where attendees can join the discussion table and contribute to the dialogue.


Evidence

Organizes joint session between Dynamic Coalition on Core Internet Values and Dynamic Coalition on Network Neutrality with multiple speakers and open floor discussion


Major discussion point

Governance Implementation Challenges


Topics

Legal and regulatory


Time constraints require focused and efficient discussion formats to address complex governance challenges

Explanation

Olivier recognizes that meaningful governance discussions must balance thoroughness with practical time limitations. He structures the session to maximize productive dialogue while acknowledging the need to move efficiently through different perspectives and community input.


Evidence

Notes having only 75 minutes for the session and manages time allocation between speakers, commenters, and open discussion


Major discussion point

Governance Implementation Challenges


Topics

Legal and regulatory


Continued engagement beyond formal sessions is crucial for advancing governance frameworks

Explanation

Olivier emphasizes that meaningful governance work extends beyond individual sessions and requires ongoing collaboration through established channels. He encourages participants to maintain engagement through mailing lists and future collaborative work to build on the discussions initiated during formal meetings.


Evidence

Invites participants to join DC mailing lists and continue discussions, emphasizing the importance of ongoing participation in future work


Major discussion point

Governance Implementation Challenges


Topics

Legal and regulatory


Agreements

Agreement points

Risk-based approach to AI governance with focus on user safety and provider liability

Speakers

– Vint Cerf
– Yik Chan Ching
– Alejandro Pisanty

Arguments

Focus should be on risk to users and liability for providers, with high-risk applications requiring higher safety levels


Risk assessment, safety issues, and liability mechanisms are crucial for holding AI developers accountable


Need to apply existing rules for automated systems and medical devices to AI rather than creating entirely new frameworks


Summary

Multiple speakers converged on the importance of implementing risk-based governance frameworks that prioritize user safety and establish clear liability mechanisms for AI providers, particularly for high-risk applications like medical diagnosis and financial advice.


Topics

Legal and regulatory | Cybersecurity


Need for AI transparency and explainability to address opacity challenges

Speakers

– Sandrine ELMI HERSI
– Renata Mielli
– Vint Cerf
– Hadia Elminiawi

Arguments

Many AI models remain “black boxes” requiring greater openness to research community for auditability and explainability


AI systems need transparency and explainability especially for social impact assessment and compliance processes, unlike the naturally open internet protocols


Provenance of information used by AI agents and references must be available for critical evaluation of outputs


Complete openness of AI models may be unrealistic given capital investment and could discourage innovation and raise security concerns


Summary

Speakers agreed that AI systems require significantly more transparency than current implementations provide, though they acknowledged practical challenges in achieving complete openness due to investment and security concerns.


Topics

Legal and regulatory


Importance of technical standards and interoperability for AI systems

Speakers

– Vint Cerf
– Yik Chan Ching
– Audience

Arguments

Agent-to-agent protocols and model context protocols are being developed to ensure interoperability among AI systems


AI standards development is progressing significantly in EU, China, and other regions, particularly around safety issues


Need to focus on intersection of AI and internet where AI feeds on web content and produces web content


Summary

There was strong agreement on the critical role of developing technical standards for AI interoperability, with recognition of ongoing global efforts in standardization and the need to focus on AI-internet intersections.


Topics

Infrastructure | Digital standards


Similar viewpoints

These speakers shared the view that while internet and AI are fundamentally different architectures, the core principles that made the internet successful should be adapted and applied to AI governance, particularly through extending network neutrality concepts.

Speakers

– Luca Belli
– Pari Esfandiari
– Sandrine ELMI HERSI

Arguments

Internet and AI are “two different beasts” – Internet built on open, decentralized architecture while AI is highly centralized and proprietary


Internet’s foundational principles of openness, decentralization, and transparency should guide AI governance as generative AI becomes a main gateway to content


Network neutrality non-discrimination principle should extend to AI infrastructure, models, and content curation to prevent privileging of certain services


Topics

Infrastructure | Legal and regulatory


Both speakers expressed skepticism about the feasibility of applying internet governance models to AI, emphasizing the need for more pragmatic approaches that account for commercial realities and dominant market players.

Speakers

– William Drake
– Andrew Campling

Arguments

Private actors’ material interests make them unlikely to embrace externally originated constructs like neutrality and transparency


Duty of care and precautionary principle should be foundational building blocks for AI governance


Topics

Legal and regulatory | Economic


Both speakers emphasized the importance of inclusive AI development that bridges digital divides while respecting national approaches, with focus on ensuring benefits reach underserved populations.

Speakers

– Hadia Elminiawi
– Shuyan Wu

Arguments

Regional and international AI strategies should align with and strengthen national sovereignties rather than conflict with them


China Mobile’s experience shows importance of ensuring equal access, protecting user rights, and bridging digital divides in AI era


Topics

Development | Legal and regulatory


Unexpected consensus

Limitations of direct application of internet governance principles to AI

Speakers

– Renata Mielli
– William Drake
– Hadia Elminiawi

Arguments

Net neutrality principles may not directly apply to AI since AI technology itself is not neutral unlike internet infrastructure


Need to define precisely what aspects of AI require governance rather than applying generic high-level principles


Complete openness of AI models may be unrealistic given capital investment and could discourage innovation and raise security concerns


Explanation

Despite the session’s goal of bridging internet and AI governance, there was unexpected consensus among speakers from different backgrounds that direct application of internet principles to AI faces significant practical and conceptual limitations.


Topics

Legal and regulatory | Infrastructure


Importance of leveraging existing regulatory frameworks rather than creating entirely new ones

Speakers

– Alejandro Pisanty
– Roxana Radu
– Yik Chan Ching

Arguments

Need to apply existing rules for automated systems and medical devices to AI rather than creating entirely new frameworks


Internet governance experience over 30 years provides mature framework for applying values to technical, policy and legal standards that AI governance lacks


Early intervention and precautionary approaches in AI governance benefit from lessons learned from social media governance failures


Explanation

There was unexpected agreement across speakers that AI governance should build upon existing regulatory experience and frameworks rather than starting from scratch, representing a pragmatic approach to governance development.


Topics

Legal and regulatory


Overall assessment

Summary

The discussion revealed significant agreement on the need for risk-based AI governance, transparency requirements, and technical standards development, while acknowledging fundamental challenges in directly applying internet governance principles to AI systems.


Consensus level

Moderate to high consensus on core governance needs (safety, transparency, standards) but significant disagreement on implementation approaches and the applicability of internet governance models. This suggests that while there is shared understanding of AI governance challenges, the path forward requires careful consideration of AI’s unique characteristics rather than simple adaptation of existing frameworks.


Differences

Different viewpoints

Feasibility of complete AI transparency and openness

Speakers

– Hadia Elminiawi
– Sandrine ELMI HERSI

Arguments

Complete openness of AI models may be unrealistic given capital investment and could discourage innovation and raise security concerns


Many AI models remain “black boxes” requiring greater openness to research community for auditability and explainability


Summary

Hadia questions whether requiring full transparency and open-source access to AI models is practical given massive capital investments and security risks, while Sandrine advocates for greater openness particularly to researchers for auditability purposes


Topics

Legal and regulatory | Cybersecurity


Direct applicability of net neutrality principles to AI

Speakers

– Renata Mielli
– Sandrine ELMI HERSI

Arguments

Net neutrality principles may not directly apply to AI since AI technology itself is not neutral unlike internet infrastructure


Network neutrality non-discrimination principle should extend to AI infrastructure, models, and content curation to prevent privileging of certain services


Summary

Renata argues that net neutrality cannot be directly applied to AI because AI technology is inherently non-neutral, while Sandrine advocates for extending non-discrimination principles from network neutrality to AI systems


Topics

Infrastructure | Legal and regulatory


Starting point for AI governance frameworks

Speakers

– William Drake
– Andrew Campling

Arguments

Must identify where there’s actual functional demand for international governance rather than assuming need based on technology existence


Duty of care and precautionary principle should be foundational building blocks for AI governance


Summary

William emphasizes the need to identify functional demand for governance before creating frameworks, while Andrew advocates for starting with duty of care and precautionary principles as foundational elements


Topics

Legal and regulatory


Optimism vs pessimism about multilateral AI governance

Speakers

– William Drake
– Yik Chan Ching

Arguments

Multilateral regulatory interventions face political obstacles, and binding international agreements may be unrealistic


Early intervention and precautionary approaches in AI governance benefit from lessons learned from social media governance failures


Summary

William expresses pessimism about the feasibility of multilateral AI governance given current political realities, while Yik Chan is more optimistic about early intervention approaches based on lessons learned from social media


Topics

Legal and regulatory


Unexpected differences

Neutrality of AI technology itself

Speakers

– Renata Mielli
– Sandrine ELMI HERSI

Arguments

Net neutrality principles may not directly apply to AI since AI technology itself is not neutral unlike internet infrastructure


Network neutrality non-discrimination principle should extend to AI infrastructure, models, and content curation to prevent privileging of certain services


Explanation

This disagreement is unexpected because both speakers come from regulatory/governance backgrounds and might be expected to align on extending internet governance principles to AI, but they fundamentally disagree on whether AI’s inherent non-neutrality prevents direct application of net neutrality principles


Topics

Infrastructure | Legal and regulatory


Feasibility of international AI governance

Speakers

– William Drake
– Hadia Elminiawi

Arguments

Private actors’ material interests make them unlikely to embrace externally originated constructs like neutrality and transparency


Regional and international AI strategies should align with and strengthen national sovereignties rather than conflict with them


Explanation

This disagreement is unexpected given both speakers’ extensive experience in international governance – William’s pessimism about private sector cooperation contrasts sharply with Hadia’s optimism about aligning international and national strategies


Topics

Legal and regulatory | Economic


Overall assessment

Summary

The main areas of disagreement center on the practical implementation of AI governance principles, the extent of transparency required, the applicability of existing internet governance frameworks to AI, and the feasibility of international cooperation


Disagreement level

Moderate to high disagreement level with significant implications – while speakers generally agree on the importance of applying internet values to AI governance, they fundamentally disagree on how to achieve this, suggesting that developing consensus on AI governance frameworks will require substantial additional work to bridge these conceptual and practical differences




Takeaways

Key takeaways

Internet’s foundational principles of openness, decentralization, and transparency can serve as signposts for AI governance, but require active adaptation since Internet and AI are ‘two different beasts’


AI governance faces fundamental tension between Internet’s open, distributed architecture and AI’s centralized, proprietary model controlled by few actors


Risk-based approach to AI governance should focus on user safety and provider liability, with high-risk applications requiring higher safety standards


Transparency and explainability are essential for AI systems but complete openness may be unrealistic due to investment concerns and security risks


Network neutrality principles of non-discrimination should extend to AI infrastructure and content curation to preserve diversity and prevent gatekeeping


Technical standards and interoperability protocols (like agent-to-agent and model context protocols) will be crucial for AI governance implementation


Global South perspectives and capabilities must be included in AI governance discussions to address existing asymmetries


AI governance should build on 30 years of Internet governance experience rather than starting from scratch, while recognizing what doesn’t directly apply


Multi-stakeholder governance approach is essential, but private actors’ commercial interests may limit participation in voluntary international standards


Resolutions and action items

Continue discussion through Dynamic Coalition mailing lists for interested participants


Develop detailed mapping matrix of which Internet properties apply to specific AI contexts and applications


ARCEP (French regulator) to complete ongoing technical report on applying Internet core values to AI governance


Focus on intersection points between AI and Internet rather than trying to govern all AI applications generically


Explore alternative transparency solutions like requiring open-source safety guardrails rather than full model openness


Unresolved issues

How to define and regulate new AI gatekeepers when traditional Internet governance models may not apply


Whether complete AI model transparency is realistic or desirable given investment requirements and security concerns


How to ensure meaningful participation of major AI companies in voluntary international governance frameworks


What specific aspects of AI actually require international coordination versus national regulation


How to balance innovation incentives with transparency and accountability requirements


Whether binding international AI agreements are feasible given current political climate


How to transform high-level principles into actionable technical standards and regulatory frameworks


How to address liability and responsibility in multi-agent AI systems


What constitutes functional demand for AI governance versus assumed need based on technology existence


Suggested compromises

Require open-source safety guardrails and published safety measures rather than full AI model transparency


Apply layered safeguards approach with AI algorithms monitoring other AI algorithms for responsible use


Focus on risk-based regulation where high-risk applications have stricter requirements rather than blanket AI rules


Extend existing regulatory frameworks (medical devices, purchasing rules) to AI applications rather than creating entirely new governance structures


Pursue sector-specific AI governance approaches rather than generic cross-cutting regulations


Combine national AI regulations with aligned international cooperation strategies that support rather than conflict with sovereignty


Start with duty of care and precautionary principles as foundational building blocks rather than comprehensive Internet governance models


Thought provoking comments

The Internet and AI are two different beasts. So we are speaking about two things that are two digital phenomenon, but they are quite different. And the Internet, as Pari was reminding us very eloquently, has been built on an open, decentralized, transparent, interoperable architecture that made the success of the Internet over the past 70 years… but the question here is how we reconcile this with a highly centralized AI architecture.

Speaker

Luca Belli


Reason

This comment crystallized the fundamental tension at the heart of the discussion – the architectural incompatibility between the internet’s foundational principles and AI’s current development trajectory. It moved beyond surface-level comparisons to identify the core structural challenge.


Impact

This framing established the central problematic that all subsequent speakers had to grapple with. It shifted the discussion from whether internet values could apply to AI, to how they could be reconciled with AI’s inherently different architecture. This tension became a recurring theme throughout the session.


Every time someone interacts with one of those [large language models], they are specializing it to their interests and their needs. So in a sense, we have a very distributed ability to adapt a particular large language model to a particular problem… And that’s important, the fact that we are able to personalize.

Speaker

Vint Cerf


Reason

This insight reframed AI from being purely centralized to having distributed elements through user interaction. It challenged the binary view of centralized vs. decentralized systems and introduced nuance about how users can maintain agency even within centralized AI systems.


Impact

This comment provided a counterpoint to concerns about AI centralization and influenced later discussions about user agency and the potential for maintaining some internet-like distributed characteristics in AI systems. It offered a more optimistic perspective on preserving user empowerment.


Is it realistic or even desirable to expect that all AI models be made fully open source? Given the amount of capital investment in these models, requiring complete openness could discourage investment in AI models, destroying a lot of economic value and hindering innovation… Is it truly responsible or logical to allow unrestricted access to tools that could be used to build weapons or plan harmful disruptive actions?

Speaker

Hadia Elminiawi


Reason

This comment introduced crucial practical and ethical constraints that challenge idealistic applications of internet openness principles to AI. It forced the discussion to confront real-world trade-offs between values like openness and safety/security concerns.


Impact

This intervention shifted the conversation from theoretical principle-mapping to practical implementation challenges. It introduced the concept of ‘layered safeguards’ and sparked discussion about alternative approaches to transparency that don’t require full openness, influencing the overall tone toward more pragmatic solutions.


What we’ve done in internet governance over the last 30 years is much more than identifying core values. We apply them, we’ve embedded them into core practices, and we are continuing to refine these practices day by day… With AI, there seems to be a preference for unilateral standards, the giants developing their own standards, sharing them through APIs, versus globally negotiated standards.

Speaker

Roxana Radu


Reason

This comment highlighted a critical difference in governance maturity and approach between internet and AI governance. It identified the shift from collaborative standard-setting to unilateral corporate control as a key challenge, moving beyond principles to examine governance processes themselves.


Impact

This observation redirected attention from what principles to apply to how governance processes differ between domains. It influenced subsequent discussions about stakeholder participation and the challenges of bringing AI companies to collaborative governance tables.


We simply can’t just assume, because the technology is there and the issues are there, that there’s a functional demand [for international governance]. You know, often people point to things and say, oh, there’s some new phenomena. We must have governance arrangements. But very often the demand for governance arrangements is not equally distributed across actors.

Speaker

William Drake


Reason

This comment challenged a fundamental assumption underlying the entire session – that AI governance is necessarily needed or wanted by key stakeholders. It introduced a dose of political realism about power dynamics and incentives that was largely absent from earlier idealistic discussions.


Impact

This intervention served as a reality check that sobered the discussion. It forced participants to consider not just what governance should look like, but whether it’s actually achievable given current power structures. This influenced the final discussions toward more pragmatic approaches and acknowledgment of constraints.


If you want to regulate large language models provided over the internet for chatbots… Why would OpenAI, Google, Meta, et cetera… why would they come together and agree to limit themselves in some way? Also to sit at the table with people who are their users or their clients, potentially their competitors if something arises from their innovation.

Speaker

Alejandro Pisanty


Reason

This comment cut to the heart of the governance challenge by questioning the fundamental incentive structures. It moved beyond technical and ethical considerations to examine the political economy of AI governance, highlighting why voluntary cooperation might be unrealistic.


Impact

This comment reinforced the realist turn in the discussion initiated by Drake and others. It contributed to a more sober assessment of governance possibilities and influenced the final recommendations toward focusing on areas where there might be actual incentives for cooperation, such as liability and risk management.


Overall assessment

These key comments fundamentally shaped the discussion by introducing increasing levels of realism and complexity. The session began with an optimistic framing about mapping internet values to AI governance, but these interventions progressively challenged assumptions, introduced practical constraints, and highlighted structural differences between the domains. The comments created a dialectical progression from idealism to realism, ultimately leading to more nuanced and pragmatic conclusions. Rather than simply advocating for applying internet principles to AI, the discussion evolved to acknowledge the fundamental tensions, power dynamics, and implementation challenges involved. This resulted in a more sophisticated understanding of the governance challenge and more realistic recommendations focused on specific areas like risk management, liability, and targeted interventions rather than wholesale principle transfer.


Follow-up questions

How do we define who are the new gatekeepers in AI and how to implement laws that may not even exist yet to regulate them?

Speaker

Luca Belli


Explanation

This addresses the fundamental challenge of identifying control points in AI systems and developing appropriate regulatory frameworks, which is crucial for applying internet governance principles to AI


What alternative solutions can we consider for AI transparency beyond making all models fully open source?

Speaker

Hadia Elminiawi


Explanation

This explores practical approaches to transparency that balance openness with security concerns and investment protection, which is essential for developing workable AI governance frameworks


How can we develop a detailed matrix mapping which internet properties apply generally or in specific AI contexts?

Speaker

William Drake


Explanation

This would provide a systematic framework for understanding how internet governance principles can be applied across different AI applications and contexts


What aspects of AI processes absolutely require international coordination or harmonization?

Speaker

William Drake


Explanation

This is critical for determining where international governance efforts should focus and where there is genuine functional demand for coordination


How do we bring different stakeholders, especially dominant AI companies, to the table for governance discussions?

Speaker

Alejandro Pisanty


Explanation

This addresses the practical challenge of creating incentives for major AI players to participate in governance frameworks that may limit their operations


How can we establish indelible ways to identify sources of content used to train AI models?

Speaker

Vint Cerf


Explanation

This is important for establishing provenance and accountability in AI systems, which is fundamental to trust and liability frameworks


How do existing web standards and expectations for user agents apply to AI-based agents?

Speaker

Dominique Hazaël-Massieux


Explanation

This explores how established internet protocols and standards can be extended to govern AI agents operating on the web


How can we transform AI governance principles into technical standards?

Speaker

Renata Mielli


Explanation

This addresses the practical implementation challenge of moving from high-level principles to actionable technical specifications


What does ensuring transparent algorithms mean in practical terms for AI systems?

Speaker

Hadia Elminiawi


Explanation

This seeks to define concrete requirements for AI transparency beyond abstract principles


How can we ensure AI systems remain open to smaller and independent content creators rather than just amplifying dominant sources?

Speaker

Sandrine ELMI HERSI


Explanation

This addresses concerns about AI systems potentially concentrating power and reducing diversity in content and innovation


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Open Forum #64 Local AI Policy Pathways for Sustainable Digital Economies


Session at a glance

Summary

This panel discussion examined the case for local artificial intelligence innovation that serves humanity’s benefit, focusing on three key dimensions: inclusivity, indigeneity, and intentionality. The session was moderated by Valeria Betancourt and featured experts from various organizations discussing how to develop contextually grounded AI that contributes to both human and planetary well-being.


Anita Gurumurthy from IT4Change framed the conversation by highlighting the tension between the unequal distribution of AI capabilities and increasing demands from climate and energy impacts. She emphasized that current AI investment ($200 billion between 2022 and 2025) is three times global climate adaptation spending, raising concerns about energy consumption and cultural homogenization through Western-centric AI models. The discussion revealed that local AI development faces significant challenges, including limited access to computing infrastructure, data scarcity in local languages, and skills gaps.


Wai Sit Si Thou from UN Trade and Development presented a framework focusing on infrastructure, data, and skills as key drivers for inclusive AI adoption. The presentation emphasized working with locally available infrastructure, community-led data, and simple interfaces while advocating for worker-centric approaches that complement rather than replace human labor. Ambassador Abhishek Singh from India shared practical examples of democratizing AI access through government-subsidized computing infrastructure, crowd-sourced linguistic datasets, and capacity-building initiatives.


Sarah Nicole from Project Liberty Institute argued that AI amplifies existing centralized digital economy structures rather than disrupting them, advocating for radical infrastructure changes that give users data agency through cooperative models and open protocols. The discussion explored various approaches to data governance, including data cooperatives that enable collective bargaining power rather than individual data monetization.


The panelists concluded that developing local AI requires international cooperation, shared computing infrastructure, open-source models, and new frameworks for intellectual property that protect community interests while fostering innovation for the common good.


Keypoints

## Major Discussion Points:


– **Infrastructure and Resource Inequality in AI Development**: The discussion highlighted the significant AI divide, with infrastructure, data, and skills concentrated among a few actors. Key statistics showed that AI investment doubled to $200 billion between 2022 and 2025 (three times global climate adaptation spending) and that a single company, NVIDIA, controls 90% of critical GPU production.


– **Local vs. Global AI Models and Cultural Preservation**: Participants debated the tension between large-scale global AI systems and the need for contextually grounded, local AI that preserves linguistic diversity and cultural knowledge. The conversation emphasized how current AI systems amplify “epistemic injustices” and western cultural homogenization while erasing local ways of thinking.


– **Data Ownership, Intellectual Property, and Commons**: A significant portion focused on rethinking data ownership models, moving from individual data monetization to collective approaches like data cooperatives. Participants discussed how current IP frameworks may not serve public interest and explored alternatives for fair value distribution from AI development.


– **Infrastructure Sharing and Cooperative Models**: Multiple speakers advocated for shared computing infrastructure (referencing models like CERN) and cooperative approaches to make AI development more accessible to smaller actors, developing countries, and local communities. Examples included India’s subsidized compute access and Switzerland’s supercomputer sharing initiatives.


– **Intentionality and Governance for Common Good**: The discussion emphasized the need for deliberate policy choices to steer AI development toward public benefit rather than purely private value creation, including precautionary principles, public procurement policies, and accountability mechanisms.


## Overall Purpose:


The discussion aimed to explore pathways for developing “local artificial intelligence” that serves humanity’s benefit, particularly focusing on how AI innovation can be made more inclusive, contextually relevant, and aligned with common good rather than concentrated corporate interests. The session sought to identify practical solutions for democratizing AI development and ensuring its benefits reach marginalized communities and developing countries.


## Overall Tone:


The discussion maintained a collaborative and solution-oriented tone throughout, with participants building on each other’s ideas constructively. While speakers acknowledged significant challenges and structural inequalities in current AI development, the tone remained optimistic about possibilities for change. The conversation was academic yet practical, with participants sharing concrete examples and policy recommendations. There was a sense of urgency about addressing these issues, but the overall atmosphere was one of thoughtful problem-solving rather than criticism alone.


Speakers

**Speakers from the provided list:**


– **Valeria Betancourt** – Moderator of the panel session on local artificial intelligence innovation pathways


– **Anita Gurumurthy** – From IT4Change, expert on digital justice and AI democratization


– **Wai Sit Si Thou** – From UN Trade and Development Agency (UNCTAD), participated remotely, expert on inclusive AI for development


– **Abhishek Singh** – Ambassador, Government of India, expert on AI infrastructure and digital governance


– **Sarah Nicole** – From Project Liberty Institute, expert on digital infrastructure and data agency


– **Thomas Schneider** – Ambassador, Government of Switzerland, economist and historian with expertise in digital policy


– **Nandini Chami** – From IT4Change, expert on AI governance and techno-institutional choices


– **Sadhana Sanjay** – Session coordinator managing remote participation and questions


– **Audience** – Various audience members including Dr. Nermin Salim (Secretary General of Creators Union of Arab, expert in intellectual property)


**Additional speakers:**


– **Dr. Nermin Salim** – Secretary General of Creators Union of Arab (consultative status with UN), expert in intellectual property law, specifically AI intellectual property protection


Full session report

# Local Artificial Intelligence Innovation Pathways Panel Discussion


## Introduction and Context


This panel discussion, moderated by Valeria Betancourt, examined pathways for developing local artificial intelligence innovation that serves humanity’s benefit. The session was structured around three key dimensions: inclusivity, indigeneity, and intentionality. Participants included Anita Gurumurthy from IT4Change, Ambassador Abhishek Singh from India, Wai Sit Si Thou from UN Trade and Development, Thomas Schneider (Ambassador from Switzerland), Sarah Nicole from Project Liberty Institute, and Nandini Chami from IT4Change.


The discussion was framed by striking statistics from the UN Digital Economy Report: AI-related investment doubled from $100 to $200 billion between 2022 and 2025, representing three times global spending on climate change adaptation. This established a central tension about democratising AI benefits whilst addressing resource constraints and environmental impact.


## Round One: Inclusivity and the AI Divide


### Infrastructure Inequality


Wai Sit Si Thou highlighted the profound inequalities in AI development capabilities, noting that NVIDIA produces 90% of critical GPUs, creating significant infrastructure barriers. This concentration represents what speakers termed the “AI divide,” where computing resources, data, and skills remain concentrated among few actors.


Anita Gurumurthy emphasised that despite improvements in chip efficiency, energy demand from data centres continues creating environmental concerns. She noted that efficiency gains are being used to build larger models rather than reducing overall environmental impact.


### Shared Infrastructure Solutions


Ambassador Singh presented India’s approach to addressing infrastructure inequality through public investment. India created shared compute infrastructure with government subsidising costs to less than a dollar per GPU per hour, making expensive AI computing resources accessible to smaller actors who cannot afford commercial cloud rates.
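The subsidy arithmetic behind that sub-dollar rate can be sketched in a few lines; the ~40% subsidy share and the under-a-dollar target come from the session, while the commercial market rate used below is an assumed figure for illustration only:

```python
# Minimal sketch of the GPU subsidy arithmetic described above.
# The ~40% subsidy share and the sub-dollar end-user target come from the
# session; the commercial market rate is an assumed figure for illustration.

def user_rate(market_rate_per_gpu_hour: float, subsidy_share: float) -> float:
    """Effective hourly rate paid by the end user after the government subsidy."""
    return market_rate_per_gpu_hour * (1.0 - subsidy_share)

# With a ~40% subsidy, any market rate up to 1 / 0.6 (about $1.67 per
# GPU-hour) keeps the end-user price under one dollar.
break_even_market_rate = 1.0 / (1.0 - 0.40)

assumed_market_rate = 1.50  # assumed commercial rate in $/GPU-hour
effective_rate = user_rate(assumed_market_rate, 0.40)
print(f"End user pays about ${effective_rate:.2f} per GPU-hour")
```

The sketch simply shows that a 40% underwrite is consistent with a sub-dollar end-user price for any plausible commercial rate below roughly $1.67 per GPU-hour.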


Thomas Schneider described similar initiatives including Switzerland’s supercomputer network and efforts to share computing power globally. Multiple speakers endorsed a CERN-like model for AI infrastructure sharing, where pooled resources from multiple countries could provide affordable access to computing power for developing countries and smaller organisations.


### Framework for Inclusive Development


Wai Sit Si Thou presented a framework for inclusive AI adoption based on three drivers: infrastructure, data, and skills, with equity as the central focus. This approach emphasised working with locally available infrastructure, community-led data, and simple interfaces to enable broader adoption.


The framework advocated for worker-centric AI development that complements rather than replaces human labour, addressing concerns about technological unemployment. Solutions should work offline to serve populations without reliable internet access and use simple interfaces to overcome technical barriers.


## Round Two: Indigeneity and Cultural Preservation


### Epistemic Justice and Cultural Homogenisation


Anita Gurumurthy highlighted how current AI development amplifies “epistemic injustices,” arguing that Western cultural homogenisation through AI platforms erases cultural histories and multilingual thinking structures. She noted that large language models extensively use Wikipedia, demonstrating how AI systems utilise commons-based resources whilst privatising benefits.


The discussion revealed tension between necessary pluralism for local contexts and generalised models that dominate market development. Gurumurthy posed the critical question: “We reject the unified global system. But the question is, are these smaller autonomous systems even possible?”


### Preserving Linguistic Diversity


Ambassador Singh provided examples of addressing this challenge through crowd-sourcing campaigns for linguistic datasets. India’s approach involved creating portals where people could contribute datasets in their local languages, demonstrating community-led data collection that supports AI development reflecting linguistic diversity.


Wai Sit Si Thou emphasised that AI solutions must work with community-led data and indigenous knowledge for local contexts, advocating for approaches that complement rather than replace local ways of knowing.


## Round Three: Intentionality and Governance


### Beyond “Move Fast and Break Things”


Nandini Chami presented a critique of Silicon Valley’s “move fast and break things” approach, arguing that the precautionary principle should guide AI development given potential for widespread societal harm. She emphasised that private value creation and public value creation in AI are not automatically aligned, requiring deliberate policy interventions.


Chami highlighted how path dependencies mean AI adoption doesn’t automatically enable economic diversification in developing countries, requiring intentional approaches to ensure public benefit.


### Data Governance and Collective Approaches


Sarah Nicole challenged mainstream thinking about individual data rights, arguing that data gains value when aggregated and contextualised. She advocated for collective approaches through data cooperatives that provide better bargaining power than individual data monetisation schemes.


This contrasted with Ambassador Singh’s examples of marketplace mechanisms where individuals could be compensated for data contributions, citing the Karya company that pays delivery workers for data contribution. Nicole argued that individual data monetisation yields minimal returns and could exploit economically vulnerable populations.


### Democratic Participation


The discussion addressed needs for public participation in AI decision-making beyond addressing harms. Chami argued for meaningful democratic participation in how AI systems are conceptualised, designed, and deployed.


Sarah Nicole supported this through advocating for infrastructure changes that give users voice, choice, and stake in their digital lives through data agency and cooperative ownership models.


## Audience Questions and Intellectual Property


Dr. Nermin Salim raised questions about intellectual property frameworks and platforms for protecting content creators. Timothy asked remotely about IP frameworks and natural legal persons in the context of AI development.


The speakers agreed that current intellectual property frameworks are inadequate for the AI era. Gurumurthy highlighted how trade secrets lock up data needed by public institutions, whilst large language models utilise commons like Wikipedia without fair compensation to contributors.


## Key Areas of Agreement


### Cooperative Models


Speakers demonstrated consensus on the viability of cooperative models for AI governance, with support spanning civil society, government, and international organisations. There was strong agreement on shared infrastructure approaches and resource pooling.


### Community-Led Development


All speakers agreed on the importance of community-led and contextual approaches to AI development, representing a challenge to top-down, technology-driven deployment approaches.


### Need for Reform


Multiple speakers identified problems with existing intellectual property frameworks, agreeing that current regimes inadequately balance private rights with public interest.


## Unresolved Challenges


The discussion left critical questions unresolved, including the fundamental tension between pluralism and generalised models: how can smaller autonomous AI systems be made economically viable against dominant large-language models with scaling advantages?


The complexity of developing concrete metrics for safety, responsibility, and privacy in AI systems beyond “do no harm” principles remains challenging, particularly for establishing accountability across transnational value chains.


## Recommendations


Speakers proposed several concrete actions:


– Establish shared AI infrastructure models pooling resources from multiple countries


– Create global repositories of AI applications in key sectors that can be shared across geographies


– Develop crowd-sourcing campaigns for linguistic datasets to support AI development in minoritised languages


– Implement public procurement policies steering AI development toward human-centric solutions


– Explore data cooperative models enabling collective bargaining power


## Conclusion


This panel discussion revealed both the urgency and complexity of developing local AI innovation pathways serving humanity’s benefit. The speakers demonstrated consensus on the need for alternative approaches prioritising collective organisation, public accountability, and cultural diversity over purely market-driven solutions.


The conversation highlighted that inclusivity, indigeneity, and intentionality must be addressed simultaneously in AI development. However, significant challenges remain in translating shared principles into practical implementation, particularly the tension between necessary pluralism and economic pressures toward centralisation.


The discussion provides foundation for alternative policy approaches emphasising public interest, collective action, and democratic participation in AI governance, opening space for more deliberate, community-controlled approaches to AI development that could better serve diverse human needs whilst respecting resource constraints.


Session transcript

Valeria Betancourt: Welcome, everybody. Thank you so much for your presence here. This session is going to look at the case for local artificial intelligence: innovation pathways to harness AI for the benefit of humanity. I have the privilege to moderate this panel today. As the Global Digital Compact underscores, there is an urgent imperative for digital cooperation to harness the power of artificial intelligence innovation for the benefit of humanity. Evidence so far produced in several parts of the world, particularly in the context of the Global South, increasingly points to the importance of contextually grounded artificial intelligence innovation for a just and sustainable digital transition. This session is going to look at three dimensions of local artificial intelligence: inclusivity, indigeneity, and intentionality. Our speakers, drawing on their expertise and viewpoints, will help us to get a deeper understanding of how these dimensions play out for local AI that is contextual and that contributes to the well-being of people and planet. So I have the pleasure of having Anita Gurumurthy from IT4Change to help us frame the conversation that we will have. I will invite Anita then to come and please frame the conversation, set the ground and the tone for the conversation.


Anita Gurumurthy: Thank you, thank you, Valeria, and it’s an honor to be part of this panel. So I think the starting point when we look at a just and sustainable digital transition is to reconcile two things. On the one hand, you have an unequal distribution of AI capabilities, and on the other, you actually have, you know, an increasing set of demands owing to climate and energy and the impacts of innovation on a planetary scale. And therefore, the question is, how do we democratize innovation and look at ideas of scale afresh, because the models we have today are on planetary scale. Both the production and consumption of AI innovation need to be cognizant of planetary boundaries. Essentially, then, what is this idea of local AI? Is it different from ideas of localizing AI? Is there a concept such as local AI? Will that even work? I just want to place before you some statistics, and we have a colleague online who will speak about this from UN Trade and Development, from the Digital Economy Report that was brought out by the UN, and I want to quote some statistics. Between 2022 and 2025, AI-related investment doubled from $100 to $200 billion. By comparison, this is about three times the global spending on climate change adaptation. So, we’re investing much more on R&D and for AI and much less on what we need to do to, in many ways, look at the energy question and the water question. Supercomputing chips have enabled some energy efficiency, but market trends suggest that this is not going to make way for building or for developing models differently. It’s going to support bigger, more complex large-language models, in turn mitigating the marginal energy savings. And I’m going to talk a little bit about the future of computing and how it’s going to change the way we do things that are possible because chips are becoming more energy efficient. 
So the efficiencies in compute are really not necessarily going to translate into some kind of respite for the kind of climate change impacts. Now, I want to give you, you know, this is just for shock value. Energy demand of data centers. And this is a very, very vital concern. We also know that around the world there have been water disputes, you know, because of this. So there is this big conundrum, you know, we do need and we do want small is beautiful models. But are they plausible? Are they probable? And while there is the strong case for diversified local models, I want to really underscore that there are lots of people already working on this. And we have some people, you know, governments that are investing in this. And there are communities that are investing in this. And these are very important because from an anglocentric perspective, you know, we think everything is working well enough. You know, LLMs are doing great for us. Chat GPT is very useful. And certainly so, you know, to some extent. But what we ignore is that there is a western cultural homogenization and these AI platforms amplify epistemic injustices. So we are certainly doing more than excluding non-English speakers. We are changing the way in which we look at the world. We are erasing cultural histories and ways of thinking. So we need to retain the structures of our multilingual societies so those structures allow us to think differently and decolonize scientific advancement and innovation in AI. So how do we build our own computational grammar? And this is a question I think that’s really important. And we reject the unified global system. But the question is, are these smaller autonomous systems even possible? And we do this for minoritized communities, minoritized languages. And the second question is, many of the efforts in this fragmented set of communities are really not able to come together. And perhaps there is a way to bring them in dialogue and enable them to collaborate. 
So this tension between pluralism that is so necessary and generalized models that seem to be the way, the only way AI models are developing in the market, is this tension is where the sweet spot of investigation actually lies. And with that, I revert back to you.


Valeria Betancourt: Thank you, Anita. Thank you for illustrating also why enabling public accountability is a must in the way in which artificial intelligence is conceptualized, designed, and deployed. Let's go to the first round of the conversation. I mentioned that we will be digging into three dimensions of local AI: inclusivity, indigeneity, and intentionality. The first round will focus on inclusion. And the question for Wai Sit Si Thou, from the UN Trade and Development Agency, and also Lynette Wambuka, from the Global Partnership for Sustainable Development Data, is: what are the pathways to AI innovation that are truly inclusive? And how can local communities be real beneficiaries of AI? So let me invite our panelists to please address this initial question. So can we go with Wai Sit Si Thou? Yes. It's online. It's remotely. Welcome. Thank you. Thank you very much.


Wai Sit Si Thou: Just to double-check whether you can see my screen and hear me well. Yes. Yes. Okay, perfect. So my sharing will be based on this UNCTAD flagship publication that was just released two months ago, titled Inclusive AI for Development. So I think it fits into the discussion very well. And to begin with, I would just want to highlight three key drivers of AI development over the past decades. And they are infrastructure, data, and skills. And if we want to look into the questions of equity and inclusiveness, we need to focus on these three key elements. Because right now we can see a significant AI divide. For example, in terms of infrastructure, one single company, NVIDIA, actually produces 90% of the GPUs, which are a critical component of computing resources. And we witness the same kind of AI divide in data, skills, and also other areas like R&D, patents, scientific publications on AI, etc. So this is the main framework that helps us to dive into the discussion on how to make AI inclusive. And the first message that I have is on the key takeaways to promote inclusive AI adoption. This is featured in our report on many successful AI adoption cases in developing countries. And based on the framework that I just shared, on infrastructure, one very important takeaway is to work around the locally available digital infrastructure. Right now, around the world, we still have one third of the population without access to the Internet. So some kind of AI solution that is able to work offline would be essential for us to promote this adoption. And that is what I meant by working around the locally available infrastructure. And the second point, on data: it is essential to work with community-led data and also indigenous knowledge so we can really focus on the specific problem, on the issue in the local context. And the third key takeaway is the skills that I mentioned. We should use simple interfaces that help users to use all these AI solutions.
And the last one is on partnership, because from what we investigated, many of these AI adoptions at the local level rely on partnerships. The second message that I have is on the worker-centric approach to AI adoption. From previous technological evolutions, we understand there are four key channels through which AI may impact productivity and the workforce. On the top left, we start with the automation process, where AI could substitute human labor. And then on the top right-hand side, we have AI complementing human labor. And the other two channels are deepening automation and creating new forms of jobs. And from previous experience, automation, or technology adoption, has actually focused on the left two bubbles, that is, replacing human labor. But if we really want to have an inclusive AI adoption that benefits everyone, we should focus on the right-hand side, on how AI can complement human labor and create meaningful new jobs. And with that, we need to focus on three areas of action. The first one is, of course, empowering the workforce, which includes everything from basic digital literacy to re-skilling and up-skilling, so as to help workers adapt to this new AI approach to work processes. And the second very important point is what I also mentioned before: the engagement with the workers. So we work with the community, we work with the workers, on the design and implementation of AI to make sure that it fits the purpose and also gains the trust needed for this whole AI adoption process. And the last point is about fostering the development of human-centric AI solutions. That would be the major responsibility of the government, through public procurement and other tax and credit incentives that steer this AI adoption to an inclusive and worker-centric approach. And the last thing that I want to highlight is that at the global level, there are also four key areas that we can work on. As Anita mentioned, accountability is key.
What we want to advocate here is to have a public disclosure accountability mechanism that could reference the ESG reporting framework that is really mature nowadays in the private sector. So an AI equivalent could happen with public disclosure on how this AI works and its potential impact. So this is the accountability piece. And the second one is on digital infrastructure. To provide equitable access to AI infrastructure, a very useful model that we can learn from is the CERN model, which is the world's largest particle physics laboratory, right here in Geneva where I am working. And this model could help pool the resources to provide shared infrastructure for every stakeholder. And the third one is on open innovation, including open data and open source, which can really democratize the resources for AI innovation. And what we need is to coordinate all these fragmented resources for better sharing and better standards. And the last point that I want to highlight is on capacity building. We think that an AI-focused centre and network, modelled after the UN Climate Technology Centre and Network, could help in this regard to provide the necessary technology support and capacity building to developing countries. And of course, South-South cooperation could help us address common challenges. For example, in East Africa, Rwanda may not have enough data sources to train AI in the local language of Swahili. But by grouping the East African countries together, we can pool the Swahili language data in the region for better AI training. So these are some of the recommendations that I have, and I am happy to engage in further discussion.


Valeria Betancourt: Thank you. Thank you. Thank you very much, Jackie. So obviously, a multidimensional approach is needed for the dividends of AI to be distributed equally. With that, I would like to give the floor to Lynette Wambuka, Global Partnership for Sustainable Development Data, to also help us to… It's not here. It's not here? Yeah. OK, sorry. So is anyone on the panel willing to contribute to this part of the conversation in relation to how to bring the benefits of AI to local communities before we move to the other round? OK, if not, we can check whether there are any reactions from the remote participants, any questions in relation to this point, or from here from the audience. You are also welcome to comment and provide your viewpoint. OK, if not, we can move to the second round, which is going to look at indigeneity. What radical shifts do we need in artificial intelligence infrastructure for an economy and society attentive and accountable to the people? And I will invite Ambassador Abhishek Singh, Government of India, to comment, and Sarah Nicole from Project Liberty Institute to also help us to address this dimension of local AI. Please, Ambassador. Thank you.


Abhishek Singh: Thank you for convening this and bringing this very, very important subject to the fore: how do we balance wider AI adoption, building models, building applications, vis-a-vis the energy challenges that are there, which hamper in some ways the goals towards sustainable development that we had all agreed on. So, it's not an easy challenge for governments everywhere, because on one hand we want to take advantage of the benefits that are going to come, and on the other hand we want to limit the risks that are coming on climate change and sustainable development. So, the approach towards local AI seems to be good, but to make that happen there will be several necessary ingredients. Many of them were highlighted by our speaker from UNCTAD very succinctly, but I would like to mention that what we observe in India, given the diversity that we have, the linguistic diversity, cultural diversity, contextual diversity, is in many ways a microcosm of the whole world. How do we ensure that whatever we build in a country of our size and magnitude applies to all sections of society, that everybody becomes included in that? In that, one key challenge of course relates to infrastructure, because AI compute infrastructure is scarce, it's expensive, it's not easy to get, and very few companies control it. To democratize access to compute infrastructure, the model that we adopted in India was to ensure that we create a common central facility, of course provided through private sector providers, through which this compute is available to all researchers, academicians, startups, and industry, those people who are training models or doing inferencing or building applications. And we worked out a mechanism so this compute becomes available at an affordable cost.
We underwrite to the tune of almost 40% of the compute cost from the side of the government, so the end user gets it at a rate which is less than a dollar per GPU per hour. So, this model has worked, and I do believe in the solution that was proposed earlier, building a CERN for AI. If we can create a global compute infrastructure facility across countries, with several foundations and multilateral bodies joining in, creating this infrastructure and making it available, it can really help. So, we have to make sure that we really solve the access to infrastructure challenge that we have. The second key ingredient for building AI applications and models is, of course, about data. How do we ensure that we have data sets available? It's okay to desire to have local AI models, contextual models, but until we have the necessary data sets in all languages, all contexts, and all cultures, it will not really happen. We had data sets in English and maybe major Indian languages, but when it came to minor Indian languages, we had very limited data sets. We launched a crowd-sourcing campaign to get linguistic data across languages, across cultures, in which people could come to a portal and contribute data sets. So, that has really helped, and that model can, again, be made global, and that's what we are trying to do. That can be, again, an innovative solution towards making the data sets more inclusive and more global. The third key ingredient we need to enable, if we want to push local AI, is capacity-building and skills. AI talent is also rare and scarce, it's limited.
So, we need capacity-building and skills, and we need to provide training to students and to AI entrepreneurs with regard to how to train models, how to wire up even 1,000 GPUs. It requires necessary skills. If we can take up a capacity-building initiative driven centrally through the UN body or the Global Partnership on AI, and ensure that all those capacity-building initiatives are implemented, training people in doing inferencing, building models, and using AI for solving societal problems, it can really, really help. The fourth ingredient is, of course, to build AI use cases in key sectors, whether it's healthcare, whether it's agriculture, whether it's education, and create a global repository of AI applications which can be shareable across geographies. If we are able to take these steps across infrastructure, data sets, training and capacity building, and building a repository of use cases, I think we'll be able to push forward the agenda of adoption of AI and building local AI at some stage. Absolutely, Ambassador. Definitely, AI models have to reflect contextually grounded innovation norms and ethics. Then I would like to invite Sarah Nicole from Project Liberty Institute.


Sarah Nicole: Please share your thoughts with us on this issue. Yeah, thank you very much for the invitation to give this short lightning talk. And thank you for the first insight as well. I will be a little bit controversial, and I really appreciated the way the question was framed. So I really appreciate the radicality aspect in it. Because the mainstream view is really that AI is a completely disruptive technology, that it changes everything in our societies, in our economies, in our daily life. But I would argue quite the contrary. AI is essentially a neural network, right, that replicates the way the brain works. It analyzes specific data sets. From those data sets, it finds connections, creates patterns, and uses those patterns to respond to certain tasks like prompts, search, and so on. So overall, AI is an automation tool. It is a tool that accelerates and amplifies everything that we know. So necessarily, the current structure that is highly centralized and that strips users' data out of their control is reinforced by AI. And it also reinforces the big tech companies and everything that we've known for decades. It benefits from the centralization of the digital economy that is necessary to train its models. So, AI is very much the result of the digital economy that has been in place for many, many years. So, if AI is a continuity and an amplification of what we already know, then the radicality needs to come from the response that we'll bring to it. And at Project Liberty Institute, we believe that every person, user, citizen, call it what you want, deserves to have a voice, a choice, and a stake in their digital life. And this goes first by giving users data agency. This requires infrastructure design changes, profound ones. In the digital economy, data is not just a byproduct. It is a political, social, and economic power that is deeply tied to our identities.
And most of the network infrastructure that is currently in place has been captured by a few dominant tech platforms. So, necessarily, everything that is built on top falls under this proprietary realm, stripping away, of course, empowerment of users, transparency, privacy, and so on. So, as AI rapidly shapes everything that we're doing in our life, we need to rethink this infrastructure model, because it shapes data agency. And Anita, you've been great to launch this report with us in Berlin last month. So, I'll be happy to share also this report that we wrote for policymakers, specifically to equip them with thinking through digital infrastructure questions. But infrastructure for agency is really what we're focusing on at the Institute. So, we are the steward of an open-source protocol called DSNP, which builds directly on top of TCP/IP. DSNP allows users better control of their own data by enabling them to interact with a global, open social graph. What this means is that your social identity on DSNP is not tied to one specific platform, like it is today on most tech platforms, but exists independently, and so it allows portability of your data, but also interoperability. So, this is a core part of infrastructure that represents a radical shift towards an economy and society attentive and accountable to the people. But unfortunately, this would be a little bit too good to be true if all that was needed was a few lines of code and some specs and protocols. Just as important is the business model, and there's a lot of work to be done here, because to this day, the most lucrative business model is the one that scrapes users' data and then uses it for advertising, and we have yet to find a scalable alternative to this. And in order to build what we call the fair data economy, we are in need of metrics.
We need to be better at articulating what we mean by safety, responsibility, privacy. What exactly do we mean behind these beautiful words? So, we need qualitative and quantitative metrics to define all this. Likewise, we need to go beyond the do-no-harm principle to really shape a positive vision of technology that is socially and financially benefiting everyone. And one of the approaches that we are exploring at the Institute is the one of data cooperatives. The cooperative model has a legacy of hundreds of years, and it's actually pretty well fit for the age of AI. There's a recent report on this that I'd be happy to share with those who want it, but let me extract two points from it that I think are interesting for the sake of this discussion. Data cooperatives allow us to rethink the value of data in a collective manner, and I think that's very important, because the debate is very much structured around personal data and individual data, but the issue is so structural that we need to empower users with collective bargaining tools vis-a-vis big corporations. And the second point is, in the age of AI, data needs to be of high quality, and data cooperatives provide the right incentive for data contributors to improve the quality of their data, because then it contributes to greater financial sustainability of their own co-op, so it also serves data-pooling purposes. And of course there are many other models that exist, data commons, data trusts, you name it. A radical shift for a better economy will in any case need many tries and many stakeholders to be involved, and we are already seeing this every day in multiple communities across the world. But one last thing that I wanted to mention here today is, what I just said, I don't think this should be considered as radical at all.
We own our identity in the analog world; we don't accept others making billions on top of our own identity, so why should it be any different in the online world? So all in all, the goal is really to have a voice, a choice, and a stake online, and I don't think this is radical. I think this is pretty much common sense.


Valeria Betancourt: Thanks. Thank you, thank you, Sarah. I think you have helped us to pave the way very nicely to the next round of conversation, because if we want AI to be meaningful to people, the intention behind it is absolutely crucial. And with that, I would like to invite Ambassador Thomas Schneider from the government of Switzerland and Nandini Chami from IT4Change to address the question on how should AI innovation… And now, I would like to ask you to share your views on how we can make the transition pathways be steered towards the common good, with that intention of the common good.


Thomas Schneider: Please share your views on that. Ambassador, welcome. Thank you, and thank you for making me part of this discussion, because this is a discussion of fundamental importance, also for something that is maybe not necessarily a poor country, but definitely a small country like mine. You've highlighted some of the aspects: how can a small actor cope, survive, call it whatever you want, in such a system where, by design, the big ones have the resources, have the power? But the question is, does it have to be like this, or would there be alternatives? And I think we have already heard a number of elements of how the small ones would need to cooperate in order to benefit from this as well. And of course, we know about the risks and all of this, but I think it would be a mistake not to use these technologies, because the potential is huge. And being an economist and a historian, and not a lawyer, actually, much of this reminds me of the first industrial revolution, where, for instance, Switzerland was a country that was lagging behind. They already had trains and railways in the UK, and we were still walking around in the mountains. But then we were catching up quite quickly. But it wasn't enough just to buy locomotives and coaches from the UK or produce them ourselves. We had to realize that you need to build a whole ecosystem in order to allow you to use this technology and make it your own, and some of it has been mentioned. What struck me: lately I read an article about the demise of Credit Suisse, the Swiss bank, and it struck me again that this bank was created by the politician and his people who actually brought the railways to Switzerland and built the railway system. So what did they do? They did not just buy coaches and build railways and bridges and tunnels. They also built the ETH Zurich. So they knew we need to have engineers.
We need to have people that have the skills to actually drive these things and build the infrastructure. So they did not just create the railway. They created the first polytechnical universities. And they knew: we are a small country, we do not have the resources, we need somebody that gives us credit. We need to have a financial system around it, and it also connects you. You can have nice ideas, but if you do not get the resources for them, nothing happens. And it is remarkable that this was all done basically by one person plus his team in the 1840s and 50s. And I think we need to understand, and I think we have heard a lot of input on this, what do we need? Each community for itself, but also in order to be able to create our own ecosystem: how do we cooperate with others that are in the same situation? It can be communities in different countries. It can actually also be communities at the other end of the world that may create a win-win situation with you. So I think this is really important, and for the small actors, how can we break this vicious cycle of scaling effects that you cannot match? And we have heard also some elements that are important for us in Switzerland. The cooperative model is actually something... many of our economic success stories are actually still cooperatives. The biggest supermarket in Switzerland was created 100 years ago as a cooperative. It is still a cooperative, not as much as it used to be, but legally it is a cooperative. Every customer can actually vote, so every few years there's a discussion, should this supermarket be able to sell alcohol or not? And they want to, but the people say no. And we have insurances that are cooperatives and so on, so that's one element. And another element is sharing the computing power.
In Switzerland we worked with NVIDIA to develop their chips 10 years ago, and now we have the result: we have one of the 10 biggest supercomputers, apart from the private ones of the big companies, of course, here in Switzerland. We cooperate with LUMI, with the Finns, and we have started to set up a network to share computing power across the world for small actors, universities, and so on. This initiative is called ICAIN. So there's lots of things to do, and I think if we do a nice summary of the elements that we have heard so far, that gives us some guidance for the next steps.


Valeria Betancourt: Thank you, thank you, Ambassador. Nandini, please help us with your views. It's a very interesting conversation, and I think we are having this at a very timely moment,


Nandini Chami: when there is a recognition that if we are talking about a just and sustainable digital transition, we need to get out of the dominant AI paradigm and move towards something else. So I'll just begin by sharing a couple of thoughts about challenges that we face in terms of steering AI innovation pathways for the common good. These reflections come from the UNDP's Human Development Report of 2025, which focuses on the theme of people and possibilities in the age of AI. The first challenge that we find in this report is that, in terms of shaping the trajectories of AI innovation, private value and public value creation goals are not always necessarily or automatically aligned. And to quote from the report, despite AI's potential to accelerate technological progress and scientific discovery, current innovation incentives are geared towards rapid deployment, scale, and automation, often at the expense of transparency, fairness, and social inclusion. So how do we shape these with intentionality, consciously? That is very important. The second insight from this report is that, since development is a path-dependent project, these path dependencies mean that AI adoption does not automatically open up routes to economic diversification. We just heard reflections on ecosystem strengthening, and this report adds a similar lens: the economic structures in many developing countries and LDCs may limit the local economy's potential to absorb productivity spillovers from AI, and there may be fewer and weaker links to high-value-added activities. So this actually means that there needs to be complementarity between development roadmaps and AI roadmaps: the objectives of development, the specific contextual mapping of strengths, opportunities, challenges, and weaknesses in terms of where the potential for economic diversification lies, and where we use AI as bridge-building, as a general-purpose technology.
These become extremely contextually grounded activities to do, and we need to move beyond an obsession with AI economy roadmap development as just a technological activity and look at it as an ecosystem activity. So from this perspective, I would just like to share from our work at IT4Change three to four reflections on what it would take to make techno-institutional choices that will shape these innovation trajectories in the directions that we seek. First, we come to the issue of technology foresight, and in the panel we were also discussing the question of the do-no-harm principle. Oftentimes in these debates, we hear a discourse of inevitability, of AI as a Frankenstein technology that will just definitely go out of control, and there's a lot of long-termist alarmism about how we will no longer be able to control AI. But what happens is that this starts distracting from setting limits on AI development in the here and now. Operationalizing and actioning the do-no-harm principle means that, instead of moving fast and breaking things, we probably need to go back to the precautionary principle of the Rio Declaration about what we need to do to shape technologies. And secondly, as the Aarhus Convention specifies in the context of environmental decision-making, we need to be talking about the right of the public to access information and participate in AI decision-making, so we are not just looking at the rights of affected parties in the AI harms discourse.
The second point is that in AI value chains, which are transnational, very complex, and which have multiple actors, system providers, deployers, and subject citizens on whom AI is finally deployed, how do we fix liability for individual, collective, and societal harms, and how do we update our product liability regimes so that the burden of proof is no longer on the affected party to prove the causal link between the defect in a particular AI product or service and the harm that was suffered? Given the black-box nature of this technology, thinking this through becomes very important. And thirdly, when we look at technological infrastructure choices, of course, openness affordances become very important as a starting point, but it's also useful to remember that they do not automatically guarantee innovation and inclusivity. As experiences of building open-source AI on top of existing stacks have shown, it's very much possible that dominant big tech firms are able to control the primary infrastructure. That's what this research shows. And my last point is actually about policy support for fostering alternatives, particularly federated AI commons thinking. There are alternative visions, such as community AI, that focus on task-specific experiences in specific communities. At IT4Change, we are exploring the development of such a model with the public school education system in Kerala, for instance.
There have also been proposals made in G20 discussions, as part of the T20 dialogues, about how we shape public procurement policies and the directions of public research funding for the development of shared compute infrastructure, which came up in our discussion, and also about how we ensure fair participation of different market actors on public AI stacks and in the use of public AI computing.


Valeria Betancourt: Thank you so much, Nandini. Let me check with Sadhana if there are remote participants who want to make interventions or have questions, and I also invite you all to get ready with your questions, comments, and reactions if you have one. Thank you, Valeria.


Sadhana Sanjay: I hope everyone can hear me. There is one question in the chat from Timothy, who asks: digital transformation is built upon intellectual property rights frameworks, means of ownership and trade. When considering existing trends, projects and works that are resourced versus those that lack resourcing, how are natural legal persons provided the necessary support to retain legal agency, both for themselves as well as to support traditional roles such as those of a parental guardian or others? Thank you, Sadhana. Is there anyone in the panel who would like to address that question? I didn't hear the question clearly. This is about intellectual property. If you could repeat the second half; I got the first half, but not the second. If I understand correctly, the question is asking: given that there are ownership rights conferred on the developers of AI and on non-natural legal persons such as corporations, how can natural legal persons such as ourselves retain our rights and agency over the building blocks of AI, both individually as well as those who might be in charge of us, such as guardians and custodians?


Abhishek Singh: One part is that, of course, the way the technology is evolving, there are IP-driven solutions and there are open-source solutions. So what we need to emphasize is promoting open-source solutions to the extent possible, so that more and more developers get access to the APIs and can build applications on top of them. The second part of it is that, ultimately, somebody has to pay for these solutions. It's not that everything will come for free. And those companies which are known to provide services for free monetize your data. We all know about it; there have been big tech companies indulging in that. So at some point of time, we'll have to take a call: if I want to use a service, like you mentioned, a ChatGPT service, which helps me improve my efficiency and my productivity, either I pay for their service or I contribute to their assets. So that call, individuals, companies, and societies will need to take: what is the cost of convenience, what is the cost of getting a service, and in what form can we pay it? The other part which can be done, which is very complex, is to work out a marketplace kind of mechanism in which every service is priced. So, if we are contributing data sets, if I'm contributing to building a corpus in a particular language, then can we incentivize those who are contributing the data sets? In fact, there is a company called Karya in India which is doing that, which is actually paying people for contributing data sets, which ensures that those who are part of the ecosystem benefit. Then there are companies which have started incentivizing delivery boys, food delivery boys and cab drivers, Uber drivers, so that when they drive around, they collect details about city amenities, about garbage dumps, about missing manhole covers, street lights, traffic lights not functioning, sharing that information with the city government,
and then they get paid in turn for doing that service. So, depending on how and in what form a data contributor is contributing, there can be models, there can be mechanisms in which a cost- and revenue-sharing model can be developed. It will require specific approaches for specific use cases, but it's not that it cannot be done.


Valeria Betancourt: Thank you, Ambassador. Maybe if I can add, there are a number of good examples. First of all, property rights are not carved in stone.


Thomas Schneider: This is something that can be, and will need to be, reformed and renegotiated. With what outcome and how, that is another question. Because otherwise, in many ways, property rights also don't work for journalism, for media. So, we'll have to develop a new approach and ask what the original idea behind property rights was. The idea may be right, but then we need to find a new approach. That's one element, what you may do on the political level, on the market level. And the other is to try to find ways to create a fair-share system for benefits. One way is to try to monetize it, like a kind of transaction, giving every data transaction a value. And the other, and I think we've already heard this, is to think it not only from the individual, but from society. In Switzerland also, we are a liberal country, but many things people don't want to be privatized, because they think these should be in public hands. It's like waste management or hospitals; it's a very hot issue and so on. So, I think we should think about how, as a society, we want to develop our health system, for instance. Health data is super important, it's super valuable. And of course, the industry needs a lot of money to develop new pharmaceutical products. But how can we organize ourselves as a society, because as individuals we are too weak? The whole society can say, okay, we are offering something to businesses so that they can develop stuff, and it is okay that they make money, but we want somehow a fair share of this, because we are kind of your research lab. And if you are a big group, then you can actually also have political weight. And then you need to find creative, concrete ways to actually get this done. So you need to work on the idea and the concept and on defining ways. But it's a super important question.
And if I can build on those two and fully agree with what’s been said.


Sarah Nicole: The question of having a stake in your data has often been framed on a personal level. And actual studies have shown that you would make very, very little if you were to monetize your own data; per year it would be like a couple of hundred euros or dollars. And the worst thing is that it could also lead to systems where poor people would probably spend lots of time online to generate very small revenue from this. So the answer will not be found from an individual perspective, but from a collective one. Because it's when the data is aggregated, when the data is in a specific context, that it gains value. And here again, let me bring in the cooperative model. And that's true: theoretically, there's a lot of work on data cooperatives; practically speaking, they have yet to emerge at scale. One of the reasons is that it is not natural for businesses to turn into a cooperative model, because it's perceived as this socialist or communist thing, which it is not, and hundreds of years of legacy have proven that. But there are many data cooperatives that pool specific data with a specific type of expertise, and then allow some AI to be trained on this expertise and high-quality data, where we can have better rights and better protections for individuals once data is aggregated in common. So, really, the mentality needs to shift from this personal data framing of the discussion, which I think also benefits a lot of the big tech companies, to a more collective and organizational perspective.


Valeria Betancourt: Thank you. Anita.


Anita Gurumurthy: I don't think that there's an easy answer, and I think we need to step up and rethink, as people have said, this entire idea of what ownership is. Two things I would like to say. First, for developing countries particularly, in our global agreements on trade and intellectual property, we oftentimes cede our space to regulate in the public interest back in our countries. So often, transnational companies use the excuse of trade secrets to lock up data that otherwise should be available to public transportation authorities, public hospitals, etc. And perhaps we do need to strongly institute exceptions in IP laws for the sake of society, to be able to use that threshold of aggregate data that is necessary to keep our societies in order. I'm using that terminology in a very, very broad sense. But that is needed. You just can't lock up that data and say it's not available because it's a trade secret. The second is that the largest source for the large language models, especially ChatGPT, was Wikipedia. So you actually see free riding happening on top of these commons. And therefore, that's another imperative, I think, for us to rethink the intellectual property regime: well, we will do open source, but what if my open source, meant for my community, is actually servicing profiteering? So we do need laws to think through those data exchanges; whether it's agricultural data or other data commons, public data sets do need to protect society from free riding and also foul dealing. Foul dealing is when the exploitation really reaches a very, very high threshold. The last point I want to make is that we've been talking about the nudge economy that has generated the data sets, but what we see today is that there's an economy of the prompt. On top of the AI models, the way in which you're defining your prompts as users is perfecting the large language models.
So this is a complexity from nudge to prompt, which means that all of us are feeding the already monopolistic models with the information necessary for them to become more efficient. Which effectively means that the small can never survive.


Valeria Betancourt: So what you do then for the small to survive is actually a question of societal commons, so that this economy of the prompt, and of profiteering from the prompt, can actually be curtailed. And I think these are future questions for governance and regulation, but essentially also for international cooperation. That's excellent. Okay, let me now invite your comment, please, or your question.


Audience: My name is Dr. Nermin Salim, the Secretary General of the Creators Union of Arab, which has consultative status with the UN. And incidentally, I'm an expert in intellectual property. So, as a comment on this question, I want to comment on the intellectual property of AI. In WIPO, the World Intellectual Property Organization, they have not yet reached an ideal convention for protecting AI, because it's divided between two sections: AI as data, and the content which is generated by AI. But for this, we as civil society launched, at the last IGF in Riyadh, a platform for protecting users' content in the digital arena. When users want to share their content, on social media, the internet, or elsewhere, they can submit it to our platform and receive a QR code, verified by blockchain and registered with the government ministry responsible for registration, so that personal ownership can be established in the case of a conflict between users. That's just a comment on the question. We are a minute away from the end of the session. I would like to invite everyone on the panel to share some final remarks. Yes. I would like to start with Nandini, who is the chair of the


Valeria Betancourt: panel, and I would like to invite her to share some final remarks. Thank you very much. Just very brief final remarks, like


Nandini Chami: 10 seconds with the highlight that you would like to leave the audience with, please. Let me start with you, Nandini. I think the discussion is showing us that there's a long history of the problem of balancing incentivizing innovation with preserving the common heritage, particularly in knowledge and IP, and AI is a new instantiation of that problem. Yes, Ambassador Schneider. Thank you, I would just say this was really exciting, and I hope we can follow up on this because it's super important, and I thank you really for this discussion. Sarah.


Sarah Nicole: Thank you will be the last thing as well. Ambassador. Yeah, my takeaway is that the


Abhishek Singh: cooperative model for infrastructure and data sets works, and then maybe for models and applications we need to push forward more for open-source models, without the concerns of IP and other issues. Absolutely. I'm thinking that the public and the local cannot exist without each other. Yeah, absolutely, and thank you so much for your presence, and no easy answers, as you said. And oh yes, I'm sorry, Jackie, please, your final remarks. Yes, thank you. I think data is a very strategic and key asset for both AI and the digital economy, and with that I just want to


Audience: share with you that we have recently established a multi-stakeholder working group on data governance, so hopefully that could provide some recommendations on how we can develop a good data governance framework. Thank you. Absolutely. So, no easy answers; some of the responses and solutions


Valeria Betancourt: are coming from the margins, from academia, from the social movements and different groups impacted by digitalization. So yes, let's keep the conversation going, and let's use this space, and hopefully also the WSIS+20 review, in order to be able to define the grounds for different approaches and a different paradigm for AI for the common good. So, thank you so much for your presence and to all of you for your contributions. Thank you so much. Thank you.


A

Anita Gurumurthy

Speech speed

149 words per minute

Speech length

1094 words

Speech time

438 seconds

AI investment doubled from $100 billion to $200 billion between 2022 and 2025, three times global climate adaptation spending

Explanation

Gurumurthy highlights the massive financial resources being directed toward AI development compared to climate adaptation efforts. This disparity shows misaligned priorities given the urgent need for climate action and the environmental costs of AI infrastructure.


Evidence

Statistics from the UN Trade and Development Digital Economy Report showing AI investment doubling from $100 billion to $200 billion between 2022 and 2025, which is three times global spending on climate change adaptation


Major discussion point

Resource allocation priorities between AI development and climate adaptation


Topics

Development | Economic


Energy demand of data centers creates water disputes and climate concerns despite chip efficiency improvements

Explanation

Despite technological improvements in chip efficiency, the overall energy and water consumption of AI infrastructure continues to grow. Market trends suggest these efficiencies will support larger, more complex models rather than reducing environmental impact.


Evidence

References to water disputes occurring globally due to data center demands and the trend toward bigger, more complex large language models that offset marginal energy savings


Major discussion point

Environmental sustainability of AI infrastructure


Topics

Development | Infrastructure


Western cultural homogenization through AI platforms amplifies epistemic injustices and erases cultural histories

Explanation

Current AI systems, dominated by Western perspectives and English language, are not just excluding non-English speakers but actively changing worldviews and erasing diverse cultural knowledge systems. This represents a form of digital colonialism that threatens cultural diversity.


Evidence

Discussion of anglocentric perspective in AI development and how LLMs change ways of thinking and erase cultural histories


Major discussion point

Cultural preservation and decolonization in AI development


Topics

Sociocultural | Human rights principles


Need to retain multilingual society structures and decolonize scientific advancement in AI

Explanation

Preserving multilingual societies is essential because different language structures enable different ways of thinking and understanding the world. Decolonizing AI means building computational systems that reflect diverse epistemologies rather than imposing a single worldview.


Evidence

Emphasis on how multilingual structures allow different ways of thinking and the need to build ‘our own computational grammar’


Major discussion point

Decolonization and multilingualism in AI


Topics

Sociocultural | Human rights principles


Agreed with

– Wai Sit Si Thou
– Abhishek Singh

Agreed on

Community-led and contextual approaches are necessary for meaningful AI development


Tension exists between necessary pluralism and generalized models dominating market development

Explanation

There’s a fundamental conflict between the need for diverse, culturally-specific AI models and the market’s tendency toward unified, generalized systems. This tension represents the key challenge in developing truly inclusive AI that serves different communities.


Evidence

Discussion of the ‘sweet spot of investigation’ lying in the tension between pluralism and generalized models


Major discussion point

Balancing diversity with scalability in AI development


Topics

Sociocultural | Economic


Trade secrets shouldn’t lock up data needed by public institutions like hospitals and transportation authorities

Explanation

Transnational companies often use intellectual property protections to prevent public institutions from accessing data that would be beneficial for society. This creates barriers to public service delivery and societal functioning.


Evidence

Examples of public transportation authorities and public hospitals being denied access to data due to trade secret claims


Major discussion point

Public interest exceptions in intellectual property law


Topics

Legal and regulatory | Human rights principles


Agreed with

– Thomas Schneider
– Sadhana Sanjay

Agreed on

Current intellectual property frameworks are inadequate and need reform for the AI era


Large language models free-ride on commons like Wikipedia, requiring protection from exploitation

Explanation

Major AI systems like ChatGPT have been trained extensively on freely available resources like Wikipedia, representing a form of exploitation of digital commons. This highlights the need for legal frameworks to protect community-created resources from commercial exploitation.


Evidence

Specific mention that Wikipedia was the largest source for large language models, especially ChatGPT


Major discussion point

Protecting digital commons from commercial exploitation


Topics

Legal and regulatory | Economic


Agreed with

– Sarah Nicole
– Thomas Schneider

Agreed on

Individual data monetization is insufficient; collective approaches are more viable


W

Wai Sit Si Thou

Speech speed

140 words per minute

Speech length

951 words

Speech time

407 seconds

AI divide exists with NVIDIA producing 90% of GPUs, creating significant infrastructure inequality

Explanation

The concentration of critical AI infrastructure in the hands of a single company creates massive inequalities in access to AI capabilities. This monopolistic control over essential computing resources represents a fundamental barrier to democratizing AI development.


Evidence

Statistic that NVIDIA produces 90% of GPUs, which are critical components for AI computing resources


Major discussion point

Monopolization of AI infrastructure


Topics

Infrastructure | Economic


Three key drivers for inclusive AI: infrastructure, data, and skills with focus on equity

Explanation

Achieving inclusive AI requires addressing inequalities across three fundamental areas: access to computing infrastructure, availability of diverse datasets, and development of necessary technical skills. These elements must be developed with explicit attention to equity rather than assuming market forces will provide fair access.


Evidence

Framework analysis showing AI divide across infrastructure, data, skills, R&D, patents, and scientific publications


Major discussion point

Foundational requirements for inclusive AI development


Topics

Development | Infrastructure


Worker-centric approach needed focusing on AI complementing rather than replacing human labor

Explanation

Rather than following historical patterns of automation that replace workers, AI development should prioritize applications that enhance human capabilities and create meaningful employment. This requires intentional design choices and policy interventions to steer technology toward complementary rather than substitutional uses.


Evidence

Four-channel framework showing automation vs. complementation paths, with emphasis on right-hand side channels of complementing human labor and creating new jobs


Major discussion point

Human-centered AI development approach


Topics

Economic | Development


AI solutions must work with community-led data and indigenous knowledge for local contexts

Explanation

Effective AI applications for local communities require incorporating community-generated data and traditional knowledge systems rather than relying solely on external datasets. This approach ensures AI solutions address specific local problems and contexts.


Evidence

Emphasis on working with community-led data and indigenous knowledge to focus on specific local problems and issues


Major discussion point

Community-centered AI development


Topics

Sociocultural | Development


Agreed with

– Abhishek Singh
– Anita Gurumurthy

Agreed on

Community-led and contextual approaches are necessary for meaningful AI development


AI solutions should work offline to serve populations without internet access

Explanation

Given that one-third of the global population lacks internet access, AI solutions must be designed to function without constant connectivity. This technical requirement is essential for ensuring AI benefits reach underserved communities.


Evidence

Statistic that one-third of global population lacks internet access, making offline AI solutions essential


Major discussion point

Technical accessibility for underserved populations


Topics

Development | Infrastructure


Simple interfaces needed to enable broader user adoption of AI solutions

Explanation

AI systems must be designed with user-friendly interfaces that don’t require technical expertise to operate. This design principle is crucial for democratizing access to AI benefits across different skill levels and educational backgrounds.


Evidence

Emphasis on simple interfaces as key takeaway for promoting inclusive AI adoption


Major discussion point

User experience design for inclusivity


Topics

Development | Sociocultural


CERN model could provide shared AI infrastructure through pooled resources from multiple stakeholders

Explanation

The collaborative model used by CERN for particle physics research could be adapted for AI infrastructure, allowing multiple countries and organizations to pool resources for shared computing capabilities. This approach could democratize access to expensive AI infrastructure.


Evidence

Reference to CERN as world’s largest particle physics laboratory in Geneva and its successful resource-pooling model


Major discussion point

International cooperation models for AI infrastructure


Topics

Infrastructure | Development


Agreed with

– Abhishek Singh
– Thomas Schneider

Agreed on

Shared infrastructure and resource pooling are essential for democratizing AI access


South-South cooperation can address common challenges like training AI with regional languages

Explanation

Countries in the Global South can collaborate to overcome individual limitations in AI development, such as insufficient data for training models in shared languages. Regional cooperation can achieve what individual countries cannot accomplish alone.


Evidence

Example of East African countries pooling resources to train AI models in Swahili, which Rwanda alone couldn’t achieve


Major discussion point

Regional cooperation for AI development


Topics

Development | Sociocultural


Multi-stakeholder working group on data governance needed to develop good framework recommendations

Explanation

Given the strategic importance of data for both AI and the digital economy, a collaborative approach involving multiple stakeholders is necessary to develop effective governance frameworks. This multi-stakeholder model can provide comprehensive recommendations for data governance.


Evidence

Announcement of recently established multi-stakeholder working group on data governance


Major discussion point

Collaborative governance approaches for data


Topics

Legal and regulatory | Development


A

Abhishek Singh

Speech speed

177 words per minute

Speech length

1379 words

Speech time

466 seconds

India created shared compute infrastructure with government subsidizing 40% of costs to democratize access

Explanation

India addressed the challenge of expensive and scarce AI computing resources by creating a centralized facility that provides affordable access to researchers, academics, startups, and industry. Government subsidies make GPU access available at less than a dollar per hour, demonstrating a viable model for democratizing AI infrastructure.


Evidence

Specific details of 40% government subsidy and pricing at less than a dollar per GPU per hour for end users


Major discussion point

Government intervention to democratize AI infrastructure access


Topics

Infrastructure | Economic


Agreed with

– Wai Sit Si Thou
– Thomas Schneider

Agreed on

Shared infrastructure and resource pooling are essential for democratizing AI access


Disagreed with

– Nandini Chami

Disagreed on

Speed vs. Precaution in AI Development


Crowd-sourcing campaigns for linguistic datasets across languages and cultures can democratize data access

Explanation

When facing limited datasets for minor Indian languages, India launched crowd-sourcing initiatives that allowed people to contribute linguistic data through online portals. This approach can be scaled globally to address data scarcity for underrepresented languages and cultures.


Evidence

Description of portal-based crowd-sourcing campaign for linguistic data across Indian languages and cultures


Major discussion point

Community participation in AI dataset creation


Topics

Sociocultural | Development


Agreed with

– Wai Sit Si Thou
– Anita Gurumurthy

Agreed on

Community-led and contextual approaches are necessary for meaningful AI development


Global repository of AI applications in healthcare, agriculture, and education should be shareable across geographies

Explanation

Creating a centralized collection of AI use cases in critical sectors like healthcare, agriculture, and education would enable knowledge sharing and prevent duplication of effort across different regions. This repository approach could accelerate AI adoption for social good globally.


Evidence

Emphasis on building use cases in key sectors and creating shareable repositories across geographies


Major discussion point

Knowledge sharing for AI applications in social sectors


Topics

Development | Sociocultural


Capacity building initiatives needed for training on model development and GPU management skills

Explanation

The scarcity of AI talent requires systematic capacity building efforts to train people in technical skills like model training and managing large-scale computing resources. This skills development is essential for enabling local AI development capabilities.


Evidence

Mention of training needs for wiring up 1,000 GPUs and other technical AI development skills


Major discussion point

Technical skills development for AI


Topics

Development | Infrastructure


Marketplace mechanisms could incentivize data contributors through revenue sharing models

Explanation

Rather than having companies monetize user data without compensation, marketplace systems could be developed where data contributors receive payment for their contributions. This approach recognizes the value of data and provides fair compensation to those who generate it.


Evidence

Examples of Karya company paying people for contributing datasets and incentivizing delivery workers to share city information with governments


Major discussion point

Fair compensation for data contribution


Topics

Economic | Legal and regulatory


Agreed with

– Sarah Nicole
– Thomas Schneider

Agreed on

Cooperative models are viable and proven solutions for AI governance and data management


Disagreed with

– Sarah Nicole

Disagreed on

Individual vs. Collective Data Monetization Approaches


S

Sarah Nicole

Speech speed

164 words per minute

Speech length

1326 words

Speech time

484 seconds

AI is automation tool that amplifies existing centralized structures rather than disrupting them

Explanation

Contrary to mainstream narratives about AI being completely disruptive, it actually functions as a neural network that analyzes data and finds patterns, essentially automating and accelerating existing processes. This means AI reinforces current power structures and centralization rather than fundamentally changing them.


Evidence

Technical explanation of AI as neural networks that replicate brain functions and analysis of how AI benefits from existing digital economy centralization


Major discussion point

AI as continuity rather than disruption


Topics

Economic | Sociocultural


Disagreed with

– Valeria Betancourt

Disagreed on

AI as Disruption vs. Continuity


Users deserve voice, choice, and stake in digital life through data agency and infrastructure design changes

Explanation

People should have meaningful control over their digital existence, which requires fundamental changes to how digital infrastructure is designed. This goes beyond surface-level privacy controls to restructuring the underlying systems that govern digital interactions.


Evidence

Discussion of data as political, social, and economic power tied to identities, and mention of DSNP protocol development


Major discussion point

User empowerment through infrastructure redesign


Topics

Human rights principles | Infrastructure


Data cooperatives provide collective bargaining power and incentivize high-quality data contribution

Explanation

Cooperative models allow users to collectively negotiate with technology companies rather than being powerless as individuals. Additionally, when people have ownership stakes in data cooperatives, they’re incentivized to contribute higher quality data since it benefits their own cooperative’s financial sustainability.


Evidence

Reference to cooperative model’s hundreds of years of legacy and explanation of financial incentives for data quality in cooperative structures


Major discussion point

Collective organization for data rights


Topics

Economic | Legal and regulatory


Agreed with

– Thomas Schneider
– Abhishek Singh

Agreed on

Cooperative models are viable and proven solutions for AI governance and data management


Individual data monetization yields minimal returns; collective approaches through cooperatives more viable

Explanation

Studies show that individuals would earn very little money from monetizing their personal data – perhaps a few hundred dollars per year. Worse, this could create exploitative systems where poor people spend excessive time online for minimal income. Collective approaches through cooperatives offer more meaningful economic benefits.


Evidence

Specific mention of studies showing individual data monetization would yield only a couple hundred euros or dollars per year


Major discussion point

Economic viability of different data monetization models


Topics

Economic | Human rights principles


Agreed with

– Anita Gurumurthy
– Thomas Schneider

Agreed on

Individual data monetization is insufficient; collective approaches are more viable


Disagreed with

– Abhishek Singh

Disagreed on

Individual vs. Collective Data Monetization Approaches


Open source protocols like DSNP can enable user data portability and interoperability across platforms

Explanation

Technical solutions like the Decentralized Social Networking Protocol (DSNP) can be built on existing internet infrastructure to give users control over their social identity and data. This allows people to move their data between platforms and interact across different services without being locked into single platforms.


Evidence

Technical description of DSNP protocol building on TCP/IP and enabling global, open social graph with data transportability


Major discussion point

Technical solutions for user data control


Topics

Infrastructure | Human rights principles
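As an illustration of the portability idea only (not DSNP's actual wire format, which uses public-key signatures and an on-chain identity registry), a content-addressed record lets any receiving platform detect tampering by recomputing a digest, so a user's social-graph data can travel between services without trusting any one of them:

```python
import hashlib
import json

def announce(user_id: str, payload: dict) -> dict:
    """Create a platform-independent record; the digest lets any service
    verify the payload was not altered while moving between platforms."""
    body = json.dumps(payload, sort_keys=True).encode()
    return {
        "user": user_id,
        "payload": payload,
        "digest": hashlib.sha256(body).hexdigest(),
    }

def verify(record: dict) -> bool:
    """Recompute the digest from the payload and compare."""
    body = json.dumps(record["payload"], sort_keys=True).encode()
    return hashlib.sha256(body).hexdigest() == record["digest"]

rec = announce("user:123", {"follows": "user:456"})
assert verify(rec)                       # intact record checks out
rec["payload"]["follows"] = "user:789"   # tampering breaks verification
assert not verify(rec)
```

In a real protocol the digest would be signed with the user's key so the record also proves authorship, not just integrity.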


N

Nandini Chami

Speech speed

137 words per minute

Speech length

1016 words

Speech time

444 seconds

Private value and public value creation goals in AI innovation are not automatically aligned

Explanation

The profit motives driving private AI development don’t naturally align with public interest goals like transparency, fairness, and social inclusion. Current innovation incentives prioritize rapid deployment and scale over social benefits, requiring intentional intervention to redirect these pathways.


Evidence

Quote from UNDP Human Development Report 2025 stating that innovation incentives favor rapid deployment and automation over transparency, fairness, and social inclusion


Major discussion point

Misalignment between private and public interests in AI


Topics

Economic | Human rights principles


Path dependencies mean AI adoption doesn’t automatically enable economic diversification in developing countries

Explanation

The existing economic structures in many developing countries may not be able to absorb and benefit from AI productivity gains. Without complementary development strategies, AI adoption may not lead to the economic transformation that countries hope for.


Evidence

Reference to UNDP report findings on limited local economy capacity to absorb AI productivity spillovers and weaker links to high-value activities


Major discussion point

Structural barriers to AI-driven development


Topics

Development | Economic


Precautionary principle should replace ‘move fast and break things’ approach in AI development

Explanation

Instead of the Silicon Valley mantra of rapid deployment followed by fixing problems later, AI development should adopt the precautionary principle from environmental law. This means carefully assessing potential harms before deployment rather than dealing with consequences afterward.


Evidence

Reference to Rio Declaration’s precautionary principle and critique of ‘move fast and break things’ mentality


Major discussion point

Risk management approaches in AI development


Topics

Legal and regulatory | Human rights principles


Disagreed with

– Abhishek Singh

Disagreed on

Speed vs. Precaution in AI Development


Public participation rights needed in AI decision-making beyond just addressing harms to affected parties

Explanation

Drawing from environmental law principles like the Aarhus Convention, the public should have rights to access information and participate in AI-related decisions that affect society. This goes beyond just protecting people from AI harms to giving them a voice in AI governance.


Evidence

Reference to Aarhus Convention on Environmental Matters and its principles for public participation in decision-making


Major discussion point

Democratic participation in AI governance


Topics

Human rights principles | Legal and regulatory


T

Thomas Schneider

Speech speed

172 words per minute

Speech length

1186 words

Speech time

412 seconds

Cooperative model has hundreds of years of legacy and fits well for AI age challenges

Explanation

Switzerland’s economic success stories include many cooperatives that continue to operate successfully, such as the country’s largest supermarket chain. This model, with its democratic governance and member ownership, provides a proven framework for organizing economic activity that could be applied to AI and data governance.


Evidence

Examples of Swiss cooperatives including the biggest supermarket created 100 years ago that still operates as a cooperative with customer voting rights, and cooperative insurance companies


Major discussion point

Historical precedents for cooperative organization


Topics

Economic | Legal and regulatory


Agreed with

– Sarah Nicole
– Abhishek Singh

Agreed on

Cooperative models are viable and proven solutions for AI governance and data management


Property rights need reform and renegotiation, with society-level approaches for fair benefit sharing

Explanation

Current intellectual property frameworks may not be suitable for the AI age and will need to be reformed. Rather than thinking only at the individual level, societies need to organize collectively to ensure fair sharing of benefits from AI development, similar to how some countries handle healthcare or infrastructure as public goods.


Evidence

Examples of Swiss public services like waste management and hospitals that remain public rather than privatized, and discussion of health data as valuable public resource


Major discussion point

Collective approaches to intellectual property and benefit sharing


Topics

Legal and regulatory | Economic


Agreed with

– Sarah Nicole
– Anita Gurumurthy

Agreed on

Individual data monetization is insufficient; collective approaches are more viable


Switzerland developed a supercomputer network and the ICAIN initiative to share computing power globally with small actors

Explanation

Switzerland has created infrastructure sharing arrangements, including cooperation with Finland's LUMI supercomputer and the ICAIN network, to provide computing access to universities and small actors globally. This demonstrates how smaller countries can collaborate to access AI infrastructure.


Evidence

Mention of cooperation with NVIDIA on chip development, having one of the 10 biggest supercomputers, and the ICAIN initiative for sharing computing power


Major discussion point

International cooperation for AI infrastructure access


Topics

Infrastructure | Development


Agreed with

– Wai Sit Si Thou
– Abhishek Singh

Agreed on

Shared infrastructure and resource pooling are essential for democratizing AI access


Small countries need ecosystem approach similar to 19th century railway development including education and finance

Explanation

Drawing lessons from Switzerland’s 19th-century railway development, small countries need to build complete ecosystems around AI, not just acquire the technology. This includes creating educational institutions, financial systems, and skilled workforce – just as railway development required polytechnical universities and banks like Credit Suisse.


Evidence

Historical example of Swiss railway development in 1840s-50s requiring creation of polytechnical universities, financial institutions, and complete infrastructure ecosystem


Major discussion point

Holistic ecosystem development for emerging technologies


Topics

Development | Infrastructure


V

Valeria Betancourt

Speech speed

121 words per minute

Speech length

929 words

Speech time

457 seconds

Global Digital Compact underscores urgent imperative for digital cooperation to harness AI for humanity’s benefit

Explanation

The Global Digital Compact recognizes the critical need for international cooperation in AI development to ensure it serves human welfare. This cooperation is particularly important for ensuring AI benefits reach the Global South through contextually grounded innovation.


Evidence

Reference to Global Digital Compact and evidence from Global South pointing to importance of contextually grounded AI innovation


Major discussion point

International cooperation for beneficial AI development


Topics

Development | Human rights principles


Disagreed with

– Sarah Nicole

Disagreed on

AI as Disruption vs. Continuity


Local AI must be examined through three dimensions: inclusivity, indigeneity, and intentionality

Explanation

Understanding local AI requires analyzing how it can be inclusive of different communities, respectful of indigenous knowledge systems, and designed with intentional purpose for social good. These three dimensions are essential for AI that contributes to well-being of people and planet.


Evidence

Framework for the panel discussion structured around these three dimensions


Major discussion point

Comprehensive framework for evaluating local AI


Topics

Development | Sociocultural | Human rights principles


Public accountability is essential in how AI is conceptualized, designed, and deployed

Explanation

AI development cannot be left solely to private actors but requires mechanisms for public oversight and accountability throughout the entire lifecycle. This ensures AI serves public interest rather than just private profit.


Evidence

Emphasis on enabling public accountability as a must in AI development processes


Major discussion point

Democratic oversight of AI development


Topics

Legal and regulatory | Human rights principles


S

Sadhana Sanjay

Speech speed

151 words per minute

Speech length

193 words

Speech time

76 seconds

Intellectual property frameworks create challenges for natural persons retaining legal agency in AI systems

Explanation

Current IP frameworks favor corporations and non-natural legal persons in AI development, potentially undermining individual rights and agency. This raises questions about how individuals can maintain control and rights over AI systems that affect them, including in guardian-ward relationships.


Evidence

Question about how natural legal persons can retain agency given existing IP frameworks and ownership structures


Major discussion point

Individual rights versus corporate control in AI systems


Topics

Legal and regulatory | Human rights principles


Agreed with

– Anita Gurumurthy
– Thomas Schneider

Agreed on

Current intellectual property frameworks are inadequate and need reform for the AI era


A

Audience

Speech speed

172 words per minute

Speech length

299 words

Speech time

103 seconds

Blockchain-based platform needed for protecting user content and intellectual property in digital era

Explanation

A platform using QR codes and blockchain verification can help users protect their digital content by providing proof of ownership and creation. This system would work with government authorities to verify and register content, providing legal protection in case of disputes.


Evidence

Description of platform launched at IGF in Riyadh that provides QR codes and blockchain verification for content protection, working with government registration authorities


Major discussion point

Technical solutions for content protection and IP rights


Topics

Legal and regulatory | Infrastructure
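A toy sketch of the proof-of-existence idea behind such a platform (the system described in the session adds QR codes, a blockchain ledger, and government registration authorities; the registry class and names here are purely illustrative): content is hashed, only the digest and the first claimant are stored, and ownership can later be demonstrated by re-presenting the content:

```python
import hashlib
import time

class ContentRegistry:
    """Toy append-only registry: stores only content digests, never the
    content itself, so a creator can prove a claim without disclosure."""

    def __init__(self) -> None:
        self._entries: dict[str, tuple[str, float]] = {}  # digest -> (owner, timestamp)

    def register(self, owner: str, content: bytes) -> str:
        digest = hashlib.sha256(content).hexdigest()
        # First registration wins; later claims cannot overwrite it.
        self._entries.setdefault(digest, (owner, time.time()))
        return digest

    def prove(self, claimant: str, content: bytes) -> bool:
        """True only if this exact content was first registered by claimant."""
        entry = self._entries.get(hashlib.sha256(content).hexdigest())
        return entry is not None and entry[0] == claimant

reg = ContentRegistry()
reg.register("alice", b"my original article")
print(reg.prove("alice", b"my original article"))    # True
print(reg.prove("mallory", b"my original article"))  # False
```

Anchoring the same digest on a public blockchain, as the audience member's platform reportedly does, replaces trust in a central registry with a timestamp anyone can audit.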


WIPO has not yet reached ideal convention for protecting AI intellectual property due to division between AI as data platform and AI-generated content

Explanation

The World Intellectual Property Organization faces challenges in creating comprehensive AI IP protection because of fundamental disagreements about whether to focus on AI systems as data platforms or on the content they generate. This division prevents unified international standards for AI intellectual property.


Evidence

Reference to WIPO’s ongoing struggles and the specific division between treating AI as data platform versus focusing on AI-generated content


Major discussion point

International challenges in AI intellectual property regulation


Topics

Legal and regulatory | Development


Agreements

Agreement points

Cooperative models are viable and proven solutions for AI governance and data management

Speakers

– Sarah Nicole
– Thomas Schneider
– Abhishek Singh

Arguments

Data cooperatives provide collective bargaining power and incentivize high-quality data contribution


Cooperative model has hundreds of years of legacy and fits well for AI age challenges


Marketplace mechanisms could incentivize data contributors through revenue sharing models


Summary

Multiple speakers endorsed cooperative models as effective organizational structures for AI and data governance, drawing on historical precedents and emphasizing collective approaches over individual solutions


Topics

Economic | Legal and regulatory


Shared infrastructure and resource pooling are essential for democratizing AI access

Speakers

– Wai Sit Si Thou
– Abhishek Singh
– Thomas Schneider

Arguments

CERN model could provide shared AI infrastructure through pooled resources from multiple stakeholders


India created shared compute infrastructure with government subsidizing 40% of costs to democratize access


Switzerland developed a supercomputer network and the ICAIN initiative to share computing power globally with small actors


Summary

All speakers agreed that expensive AI infrastructure requires collaborative approaches and resource sharing to ensure equitable access, with concrete examples from different countries and international models


Topics

Infrastructure | Development


Community-led and contextual approaches are necessary for meaningful AI development

Speakers

– Wai Sit Si Thou
– Abhishek Singh
– Anita Gurumurthy

Arguments

AI solutions must work with community-led data and indigenous knowledge for local contexts


Crowd-sourcing campaigns for linguistic datasets across languages and cultures can democratize data access


Need to retain multilingual society structures and decolonize scientific advancement in AI


Summary

Speakers consistently emphasized the importance of involving local communities in AI development and ensuring AI systems reflect diverse cultural and linguistic contexts rather than imposing homogeneous solutions


Topics

Sociocultural | Development


Current intellectual property frameworks are inadequate and need reform for the AI era

Speakers

– Anita Gurumurthy
– Thomas Schneider
– Sadhana Sanjay

Arguments

Trade secrets shouldn’t lock up data needed by public institutions like hospitals and transportation authorities


Property rights need reform and renegotiation, with society-level approaches for fair benefit sharing


Intellectual property frameworks create challenges for natural persons retaining legal agency in AI systems


Summary

Multiple speakers identified fundamental problems with existing IP frameworks in the context of AI, calling for reforms that better balance private rights with public interest and individual agency


Topics

Legal and regulatory | Human rights principles


Individual data monetization is insufficient; collective approaches are more viable

Speakers

– Sarah Nicole
– Anita Gurumurthy
– Thomas Schneider

Arguments

Individual data monetization yields minimal returns; collective approaches through cooperatives more viable


Large language models free-ride on commons like Wikipedia, requiring protection from exploitation


Property rights need reform and renegotiation, with society-level approaches for fair benefit sharing


Summary

Speakers agreed that individual-level solutions for data rights and monetization are inadequate, emphasizing the need for collective organization and protection of digital commons


Topics

Economic | Legal and regulatory


Similar viewpoints

Both speakers from IT4Change emphasized how current AI development serves private interests at the expense of cultural diversity and public good, requiring intentional intervention to redirect AI toward more equitable outcomes

Speakers

– Anita Gurumurthy
– Nandini Chami

Arguments

Western cultural homogenization through AI platforms amplifies epistemic injustices and erases cultural histories


Private value and public value creation goals in AI innovation are not automatically aligned


Topics

Sociocultural | Human rights principles | Economic


Both speakers emphasized the fundamental importance of capacity building and skills development as essential components of inclusive AI development, alongside infrastructure and data access

Speakers

– Wai Sit Si Thou
– Abhishek Singh

Arguments

Three key drivers for inclusive AI: infrastructure, data, and skills with focus on equity


Capacity building initiatives needed for training on model development and GPU management skills


Topics

Development | Infrastructure


Both speakers challenged mainstream narratives about AI being inherently disruptive, instead arguing for more cautious, deliberate approaches that recognize AI’s role in reinforcing existing power structures

Speakers

– Sarah Nicole
– Nandini Chami

Arguments

AI is automation tool that amplifies existing centralized structures rather than disrupting them


Precautionary principle should replace ‘move fast and break things’ approach in AI development


Topics

Economic | Legal and regulatory


Unexpected consensus

Government intervention and public investment in AI infrastructure

Speakers

– Abhishek Singh
– Wai Sit Si Thou
– Thomas Schneider

Arguments

India created shared compute infrastructure with government subsidizing 40% of costs to democratize access


CERN model could provide shared AI infrastructure through pooled resources from multiple stakeholders


Switzerland developed a supercomputer network and the ICAIN initiative to share computing power globally with small actors


Explanation

Despite representing different political and economic contexts, speakers from India, UN agency, and Switzerland all endorsed significant government intervention and public investment in AI infrastructure, challenging typical market-driven approaches to technology development


Topics

Infrastructure | Economic | Development


Rejection of Silicon Valley ‘move fast and break things’ mentality

Speakers

– Nandini Chami
– Sarah Nicole
– Valeria Betancourt

Arguments

Precautionary principle should replace ‘move fast and break things’ approach in AI development


AI is automation tool that amplifies existing centralized structures rather than disrupting them


Public accountability is essential in how AI is conceptualized, designed, and deployed


Explanation

There was unexpected consensus across speakers from different backgrounds in rejecting the dominant Silicon Valley approach to technology development, instead advocating for more cautious, accountable approaches typically associated with environmental and public health regulation


Topics

Legal and regulatory | Human rights principles


Overall assessment

Summary

The speakers demonstrated remarkable consensus on the need for alternative approaches to AI development that prioritize collective organization, public accountability, and cultural diversity over market-driven solutions. Key areas of agreement included the viability of cooperative models, the necessity of shared infrastructure, the importance of community-led development, and the inadequacy of current intellectual property frameworks.


Consensus level

High level of consensus with significant implications for AI governance. The agreement across speakers from different sectors (government, UN agencies, civil society, academia) and countries suggests growing recognition that current AI development paradigms are insufficient for achieving equitable outcomes. This consensus provides a foundation for alternative policy approaches that emphasize public interest, collective action, and democratic participation in AI governance, challenging dominant narratives about inevitable technological disruption and market-led solutions.


Differences

Different viewpoints

Individual vs. Collective Data Monetization Approaches

Speakers

– Abhishek Singh
– Sarah Nicole

Arguments

Marketplace mechanisms could incentivize data contributors through revenue sharing models


Individual data monetization yields minimal returns; collective approaches through cooperatives more viable


Summary

Singh advocates for marketplace mechanisms where individuals can be paid for data contributions, citing examples like Karya company. Nicole argues individual monetization yields minimal returns and could exploit poor people, advocating instead for collective cooperative approaches.


Topics

Economic | Legal and regulatory


AI as Disruption vs. Continuity

Speakers

– Sarah Nicole
– Valeria Betancourt

Arguments

AI is automation tool that amplifies existing centralized structures rather than disrupting them


Global Digital Compact underscores urgent imperative for digital cooperation to harness AI for humanity’s benefit


Summary

Nicole presents AI as fundamentally non-disruptive, arguing it reinforces existing power structures. Betancourt frames AI as requiring urgent cooperative action for humanity’s benefit, implying transformative potential that needs guidance.


Topics

Economic | Sociocultural | Development


Speed vs. Precaution in AI Development

Speakers

– Nandini Chami
– Abhishek Singh

Arguments

Precautionary principle should replace ‘move fast and break things’ approach in AI development


India created shared compute infrastructure with government subsidizing 40% of costs to democratize access


Summary

Chami advocates for precautionary approaches and careful assessment before AI deployment. Singh focuses on rapid infrastructure development and deployment to democratize access, representing a more accelerated approach.


Topics

Legal and regulatory | Human rights principles | Infrastructure


Unexpected differences

Fundamental Nature of AI Technology

Speakers

– Sarah Nicole
– Other speakers

Arguments

AI is automation tool that amplifies existing centralized structures rather than disrupting them


Explanation

Nicole’s characterization of AI as fundamentally non-disruptive contrasts sharply with the general framing by other speakers who treat AI as a transformative technology requiring new approaches. This philosophical disagreement about AI’s nature is unexpected in a discussion focused on local AI solutions.


Topics

Economic | Sociocultural


Intellectual Property Protection vs. Commons Access

Speakers

– Audience (Dr. Nermin Salim)
– Anita Gurumurthy

Arguments

Blockchain-based platform needed for protecting user content and intellectual property in digital era


Large language models free-ride on commons like Wikipedia, requiring protection from exploitation


Explanation

The audience member advocates for stronger IP protection mechanisms while Gurumurthy argues for protecting commons from IP exploitation. This represents an unexpected fundamental disagreement about whether the solution is more or less IP protection.


Topics

Legal and regulatory | Infrastructure


Overall assessment

Summary

The discussion shows moderate disagreement on implementation approaches rather than fundamental goals. Main areas of disagreement include individual vs. collective data monetization, AI’s disruptive nature, development speed vs. precaution, and IP protection vs. commons access.


Disagreement level

Medium-level disagreement with significant implications. While speakers generally agree on the need for inclusive, locally-relevant AI, their different approaches to achieving this goal could lead to incompatible policy recommendations. The disagreements reflect deeper philosophical differences about technology’s role, market mechanisms, and the balance between innovation speed and social protection.



Takeaways

Key takeaways

Local AI development requires addressing three critical dimensions: inclusivity, indigeneity, and intentionality to ensure AI serves the common good rather than perpetuating existing inequalities


AI infrastructure inequality is severe, with massive investment disparities (AI investment 3x climate adaptation spending) and monopolistic control (NVIDIA controls 90% of GPUs)


Current AI models amplify Western cultural homogenization and epistemic injustices, erasing cultural histories and multilingual thinking structures


Cooperative models and shared infrastructure approaches can democratize AI access, as demonstrated by India’s subsidized compute infrastructure and Switzerland’s supercomputer sharing initiatives


Data governance must shift from individual to collective approaches, with data cooperatives providing better bargaining power and quality incentives than individual data monetization


AI is fundamentally an automation tool that amplifies existing centralized structures rather than disrupting them, requiring radical infrastructure changes for true user agency


The tension between necessary pluralism for local contexts and generalized models dominating the market represents a key challenge for inclusive AI development


Intellectual property frameworks need fundamental reform to prevent trade secrets from locking up data needed by public institutions and to protect commons from free-riding by commercial AI models


Resolutions and action items

Establish a CERN-like model for AI infrastructure sharing globally, pooling resources from multiple countries and organizations


Create global repository of AI applications in key sectors (healthcare, agriculture, education) that can be shared across geographies


Develop crowd-sourcing campaigns for linguistic datasets to support AI development in minoritized languages


Implement public procurement policies that steer AI development toward human-centric and worker-complementary solutions


Establish multi-stakeholder working group on data governance to develop framework recommendations


Create capacity building initiatives through UN bodies or global AI partnerships for training on model development and AI skills


Develop marketplace mechanisms for incentivizing data contributors through revenue sharing models


Reform intellectual property laws to include exceptions for public interest use of aggregated data


Unresolved issues

How to make small autonomous AI systems economically viable against dominant large-language models with massive scaling advantages


Finding scalable alternatives to data-scraping advertising business models that currently dominate the digital economy


Developing concrete metrics to define and measure safety, responsibility, and privacy in AI systems beyond ‘do no harm’ principles


Resolving the fundamental tension between open source AI development and preventing free-riding by commercial entities


Addressing the ‘economy of prompt’ where user interactions continue to improve monopolistic AI models


Determining how to fix liability for AI harms across complex transnational value chains with multiple actors


Establishing effective mechanisms for public participation in AI decision-making processes


Creating sustainable funding models for local AI development that don’t rely on exploitative data practices


Suggested compromises

Hybrid approach combining open source development with protections against commercial exploitation through reformed IP frameworks


Government subsidization of compute infrastructure costs (as demonstrated by India’s 40% cost underwriting) to balance private sector efficiency with public access


Society-level collective bargaining for data rights rather than purely individual or purely corporate control models


Balancing innovation incentives with precautionary principles by slowing ‘move fast and break things’ approach while preserving development momentum


Multi-stakeholder governance models that include private sector, government, and civil society in AI development decisions


Regional cooperation approaches (like East African countries pooling Swahili language data) to achieve necessary scale while maintaining local relevance


Public-private partnerships for AI infrastructure that leverage private sector capabilities while ensuring public benefit and access


Thought provoking comments

Between 2022 and 2025, AI-related investment doubled from $100 billion to $200 billion. By comparison, this is about three times the global spending on climate change adaptation… So the efficiencies in compute are really not necessarily going to translate into some kind of respite for the kind of climate change impacts.

Speaker

Anita Gurumurthy


Reason

This comment is deeply insightful because it reframes the AI discussion by introducing a critical tension between AI investment and climate priorities. It challenges the assumption that technological efficiency automatically leads to environmental benefits, revealing the paradox that AI efficiency gains are being used to build larger, more resource-intensive models rather than reducing overall environmental impact.


Impact

This comment established the foundational tension for the entire discussion, setting up the core dilemma that all subsequent speakers had to grapple with: how to democratize AI benefits while addressing planetary boundaries. It shifted the conversation from purely technical considerations to systemic sustainability concerns.


AI is essentially a neural network… So overall, AI is an automation tool. It is a tool that accelerates and amplifies everything that we know… So, if AI is a continuity and an amplification of what we already know, then the radicality needs to come from the response that we’ll bring to it.

Speaker

Sarah Nicole


Reason

This comment is profoundly thought-provoking because it directly challenges the mainstream narrative of AI as revolutionary disruption. By reframing AI as an amplification tool that reinforces existing power structures, it shifts the focus from the technology itself to the systemic responses needed to address its impacts.


Impact

This reframing fundamentally altered the discussion’s direction, moving away from technical solutions toward structural and infrastructural changes. It provided intellectual grounding for why radical responses are necessary and influenced subsequent speakers to focus more on systemic alternatives like cooperatives and commons-based approaches.


We reject the unified global system. But the question is, are these smaller autonomous systems even possible?… So this tension between pluralism that is so necessary and generalized models that seem to be the way, the only way AI models are developing in the market, is this tension is where the sweet spot of investigation actually lies.

Speaker

Anita Gurumurthy


Reason

This comment identifies the central paradox of local AI development – the need for cultural and linguistic diversity versus the economic and technical pressures toward centralized, generalized models. It articulates the core tension that makes this problem so complex and resistant to simple solutions.


Impact

This comment established the intellectual framework that guided much of the subsequent discussion. It helped other speakers understand why technical solutions alone (like shared computing infrastructure) need to be coupled with new governance models and cooperative approaches.


The largest source for the large language models, especially ChatGPT, was Wikipedia. So you actually see free riding happening on top of these commons… But what if my open source meant for my community is actually servicing profiteering?

Speaker

Anita Gurumurthy


Reason

This observation is particularly insightful because it reveals how current AI development exploits commons-based resources while privatizing the benefits. It challenges the assumption that open-source solutions automatically serve community interests and highlights the need for protective mechanisms.


Impact

This comment deepened the discussion about intellectual property and data governance, leading to more nuanced conversations about how to structure commons-based approaches that can’t be easily exploited by commercial interests. It influenced the later discussion about cooperative models and collective bargaining.


The question of having a stake in your data has often been framed on a personal level… the answer will not be on an individual perspective, but it would be on a collective one. Because it’s when the data is aggregated, it’s when the data is in a specific context that then it gains value.

Speaker

Sarah Nicole


Reason

This comment is insightful because it challenges the dominant framing of data rights as individual privacy issues and redirects attention to collective action and cooperative models. It provides a practical pathway forward that moves beyond the limitations of individual data monetization.


Impact

This comment shifted the discussion from individual rights to collective organizing, influencing other speakers to elaborate on cooperative models and community-based approaches. It helped bridge the gap between theoretical critiques and practical alternatives.


We launched a crowd-sourcing campaign to get linguistic data across languages, across cultures, in which people could kind of come to a portal and contribute data sets… If we can take up capacity-building initiatives and training… it can really, really help.

Speaker

Abhishek Singh


Reason

This comment is valuable because it provides concrete, implementable examples of how local AI can work in practice, moving beyond theoretical discussions to actual policy implementations. It demonstrates that alternative approaches are not just idealistic but practically feasible.


Impact

This grounded the discussion in real-world examples and gave other participants concrete models to reference. It helped shift the conversation from problem identification to solution implementation, influencing the final recommendations about cooperative infrastructure and capacity building.


Overall assessment

These key comments fundamentally shaped the discussion by establishing a progression from problem identification to systemic analysis to practical alternatives. Anita Gurumurthy’s opening comments about the climate-AI investment paradox and the tension between pluralism and generalization set up the core dilemmas. Sarah Nicole’s reframing of AI as amplification rather than disruption provided the theoretical foundation for why radical responses are necessary. The subsequent comments built on this foundation, moving from critique to concrete alternatives like cooperative models, shared infrastructure, and community-based data governance. Together, these comments transformed what could have been a technical discussion about AI optimization into a deeper conversation about power structures, commons governance, and alternative economic models. The discussion evolved from identifying problems with current AI development to articulating a coherent vision for community-controlled, environmentally sustainable AI systems.


Follow-up questions

Are smaller autonomous AI systems even possible, and how can fragmented community efforts be brought together to collaborate?

Speaker

Anita Gurumurthy


Explanation

This addresses the fundamental tension between necessary pluralism and the market trend toward generalized models, which is crucial for enabling local AI development


How do we build our own computational grammar and reject unified global systems while maintaining viability?

Speaker

Anita Gurumurthy


Explanation

This is essential for decolonizing scientific advancement and preserving multilingual societies’ diverse ways of thinking


How can we create a global compute infrastructure facility (CERN model for AI) across countries with multilateral bodies joining to make infrastructure available affordably?

Speaker

Abhishek Singh


Explanation

This could democratize access to expensive AI compute infrastructure that is currently controlled by few companies


How can we establish a global repository of AI applications and use cases that can be shared across geographies?

Speaker

Abhishek Singh


Explanation

This would enable knowledge sharing and prevent duplication of efforts in developing AI solutions for common problems


How do we find a scalable alternative business model to the current data scraping and advertising model?

Speaker

Sarah Nicole


Explanation

Current business models undermine user agency and data ownership, so alternatives are needed for a fair data economy


How do we develop qualitative and quantitative metrics to define safety, responsibility, and privacy in AI systems?

Speaker

Sarah Nicole


Explanation

Clear metrics are needed to move beyond vague principles and create accountability mechanisms


How do we fix liability for individual, collective, and societal harms in complex transnational AI value chains?

Speaker

Nandini Chami


Explanation

Current liability regimes are inadequate for the complexity of AI systems and the difficulty of proving causal links to harms


How do we update product fault liability regimes so the burden of proof is not on affected parties to prove causal links between AI defects and harms?

Speaker

Nandini Chami


Explanation

Given the black box nature of AI technology, current liability frameworks place unfair burden on those harmed by AI systems


How can we work out marketplace mechanisms where data contribution is priced and contributors are incentivized?

Speaker

Abhishek Singh


Explanation

This addresses the fundamental question of how to fairly compensate those whose data contributes to AI development


How do we institute exceptions in IP laws for public interest use of aggregate data by public authorities?

Speaker

Anita Gurumurthy


Explanation

Trade secrets are being used to lock up data that should be available to public transportation, hospitals, and other essential services


How do we protect open source and data commons from free riding by profit-making entities?

Speaker

Anita Gurumurthy


Explanation

Current systems allow companies to profit from commons like Wikipedia without fair compensation to the community


How do we curtail the ‘economy of prompt’ where users perfect monopolistic models through their interactions?

Speaker

Anita Gurumurthy


Explanation

User prompts are continuously improving large language models, further entrenching monopolistic advantages


How can we develop good data governance frameworks through multi-stakeholder approaches?

Speaker

Wai Sit Si Thou


Explanation

Data governance is strategic for both AI and digital economy development, requiring collaborative frameworks


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Lightning Talk #137 Ethical Hacking for a Safer Internet


Session at a glance

Summary

This discussion focused on the legal challenges surrounding ethical hacking and the need for improved legal frameworks to support cybersecurity efforts. Tim Philipp Schafers from Mint Secure and lawyer Carolin Kothe presented their analysis of how different jurisdictions treat ethical hacking versus malicious hacking activities. They began by defining ethical hacking as systematic testing to uncover security vulnerabilities, distinguishing between authorized penetration testing and unauthorized but well-intentioned security research conducted for societal benefit.


The speakers emphasized the critical importance of external hackers in strengthening cybersecurity, noting that the NIS2 directive recognizes that most security disclosures come from external testers. They highlighted how crowdsourced defense works effectively, as demonstrated by open source software development and corporate bug bounty programs. However, they identified a significant problem: most legal systems fail to differentiate between ethical and malicious hacking, creating uncertainty and potential legal risks for security researchers.


The presentation examined various jurisdictional approaches across Europe, noting that Poland stands out as a rare example with explicit statutory support for ethical hacking when done solely to secure systems. Most other countries equate ethical hacking with criminal activity, though some like the US and France have prosecutorial discretion policies that provide safe harbor for responsible disclosure. The speakers outlined four key elements needed for an ideal legal framework: legal certainty, explicit immunity for ethical hackers, reframing of hacking terminology, and clear differentiation between ethical and malicious activities.


They concluded by calling for harmonized international regulations and greater public awareness to support collaboration between ethical hackers, private companies, and governments in strengthening cybersecurity defenses.


Keypoints

## Major Discussion Points:


– **Definition and Types of Ethical Hacking**: The speakers distinguish between malicious hacking and ethical hacking, explaining that ethical hacking involves systematic testing to uncover security vulnerabilities with good intent. They identify two subtypes: authorized ethical hacking (contracted penetration testing, bug bounty programs) and unauthorized ethical hacking done for societal benefit without financial gain.


– **Legal Inconsistencies Across Jurisdictions**: The presentation highlights how different countries treat ethical hacking legally, with most jurisdictions failing to distinguish between ethical and malicious hacking. Poland is cited as a rare positive example with explicit statutory support, while countries like Germany, the US, and France rely on prosecutorial discretion rather than clear legal protections.


– **Current Legal Challenges for Ethical Hackers**: Despite following responsible disclosure practices, ethical hackers face legal uncertainty, potential prosecution, and emotional pressure. Even when not prosecuted, they may face investigations, reputational damage, and restrictions on sharing their findings for educational purposes.


– **Proposed Legal Framework Improvements**: The speakers outline four key elements for better regulation: legal certainty, explicit immunity for responsible disclosure, reframing of hacking in public perception, and clear differentiation between ethical and malicious activities. They also advocate for harmonized international regulations.


– **Need for Collaboration and Public Awareness**: The discussion emphasizes the importance of ethical hackers in cybersecurity, citing examples like the Heartbleed bug discovery and DEF CON voting village, while calling for better collaboration between private sector, ethical hacking community, and government.


## Overall Purpose:


The discussion aims to advocate for legal reform that would protect and encourage ethical hacking by establishing clear legal frameworks that distinguish between beneficial security research and malicious cybercrime. The speakers seek to educate the audience about the value of ethical hacking and promote policy changes that would provide legal certainty for security researchers.


## Overall Tone:


The tone is professional, educational, and advocacy-oriented throughout. The speakers maintain an informative approach while expressing clear frustration with current legal ambiguities. The tone remains consistently constructive, focusing on solutions rather than criticism, and becomes more engaging during the Q&A session where practical concerns about surveillance and brain drain are addressed with empathy and understanding.


Speakers

– **Tim Philipp Schafers**: Co-founder of Mint Secure, specializes in ethical hacking and criminal law with regard to computer crime


– **Carolin Kothe**: Trained lawyer, does software development in her law firm, deals with questions of standardization and citizen knowledge as part of her role at the Liquid Legal Institute


– **Audience**: Multiple audience members asking questions during the Q&A session (roles and expertise not specified)


Additional speakers:


None – all speakers were included in the provided speakers names list.


Full session report

# Legal Challenges and Reform Needs for Ethical Hacking: A Comprehensive Discussion Summary


## Introduction and Context


This discussion brought together Tim Philipp Schafers, co-founder of Mint Secure specializing in ethical hacking, and Carolin Kothe from the Liquid Legal Institute, who combines legal expertise with software development experience in standardization and citizen knowledge. Their presentation addressed the critical legal challenges facing ethical hackers and the need for comprehensive legal reform to support cybersecurity efforts while protecting legitimate security researchers.


The speakers presented their analysis through a structured four-step approach: defining ethical hacking and its variants, explaining why ethical hacking is important, examining current legal frameworks across jurisdictions, and proposing solutions for legal reform.


## Defining Ethical Hacking and Its Variants


Carolin Kothe explained that hacking fundamentally involves systematic testing to uncover security vulnerabilities, with the crucial distinction between ethical and malicious hacking lying in three critical factors: intent, authorization, and methods employed. The actual judgment of whether hacking is ethical or malicious depends on these factors rather than the technical actions themselves.


Kothe distinguished between two distinct subtypes of ethical hacking: authorized ethical hacking, which includes contracted penetration testing and corporate bug bounty programs, and unauthorized but benevolent ethical hacking, conducted without individual contracts but motivated by societal benefit rather than financial gain.


Tim Philipp Schafers referenced the established hacker ethic from the 1980s, later extended by groups like the Chaos Computer Club, which established moral principles including breaking systems to enhance security, avoiding data littering, and protecting private information. He provided concrete examples including the discovery of the Heartbleed bug in OpenSSL affecting HTTPS connections, testing conducted at DEF CON voting villages, and responsible information handling. Schafers also mentioned historical examples like the L0pht hacker collective’s testimony and Taiwanese activist groups who handled sensitive information responsibly.
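As a hedged illustration (not part of the session itself), the Heartbleed example can be made concrete: CVE-2014-0160 affected OpenSSL versions 1.0.1 through 1.0.1f and was fixed in 1.0.1g, which is why responders in 2014 triaged servers simply by version string. A minimal sketch of that check, with simplified version parsing:

```python
# Sketch: classify an OpenSSL version string against the Heartbleed-affected
# range (CVE-2014-0160: OpenSSL 1.0.1 through 1.0.1f; fixed in 1.0.1g).
# Simplified parsing; real version strings may carry extra suffixes.

def is_heartbleed_vulnerable(version: str) -> bool:
    """Return True if a version string like '1.0.1e' falls in 1.0.1 .. 1.0.1f."""
    if not version.startswith("1.0.1"):
        return False
    suffix = version[len("1.0.1"):]
    # The bare 1.0.1 release and letter releases a-f are affected; g onward is fixed.
    return suffix == "" or ("a" <= suffix <= "f")

print(is_heartbleed_vulnerable("1.0.1e"))  # True
print(is_heartbleed_vulnerable("1.0.1g"))  # False
print(is_heartbleed_vulnerable("1.0.2"))   # False
```

The point the speakers make is exactly this pattern: an externally discovered flaw, a well-documented affected range, and a coordinated fix before public disclosure.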


## The Critical Importance of Ethical Hacking in Cybersecurity


Both speakers emphasized the indispensable role of ethical hackers in modern cybersecurity. Kothe highlighted that external security researchers provide the majority of security disclosure reports to Computer Emergency Response Teams (CERTs), as recognized by regulations like the NIS2 directive. This external perspective proves essential because internal security teams may miss vulnerabilities due to familiarity with their own systems.


Schafers noted that “crowdsource defense works,” referencing the open source software model where distributed scrutiny by many contributors strengthens overall security. Corporate recognition of ethical hacking’s value has grown, with companies increasingly investing in bug bounty programs, though Schafers cautioned that hackers can be “incautious with their wording” when asking for rewards, potentially creating legal complications.


The speakers emphasized that ethical hacking serves as crucial defense against increasing cybercrime costs, both monetary and in terms of privacy breaches and infrastructure disruption.


## Legal Framework Disparities Across Jurisdictions


The presentation revealed significant inconsistencies in how different countries approach ethical hacking within their legal systems. Kothe’s analysis demonstrated that most jurisdictions fail to distinguish between ethical and malicious hacking, creating uncertainty for security researchers.


Poland emerged as a rare positive example, with explicit statutory support stating that no offense is committed when hacking is conducted “solely for the purpose of securing a system.” Kothe termed this a “unicorn regulation” that represents what comprehensive legal protection could look like, yet remains exceptional.


The complexity varies considerably across jurisdictions. Some countries require bypassing security measures as an objective element of computer crime, while others treat authorization as either an objective element or a justification defense. Countries like Latvia incorporate substantial harm requirements, while Germany and Austria include intent to harm or enrich as subjective elements, which better distinguishes ethical from malicious hacking but still creates uncertainty.


## Current Legal Challenges and Prosecution Approaches


Despite following responsible disclosure practices, ethical hackers face considerable legal uncertainty. Schafers emphasized the emotional pressure security researchers experience when discovering vulnerabilities, lacking clear statutory protection even when acting with beneficial intent.


The speakers identified four approaches jurisdictions currently employ: explicit statutory support (Poland), additional legal requirements favoring ethical hackers, prosecutorial discretion policies creating safe harbors, and reliance on justification defenses.


Countries like the United States and France have implemented prosecutorial discretion policies. Kothe referenced the justice.gov website and French authority safe harbor details, but noted these approaches remain inadequate because security researchers still technically commit crimes and face restrictions on publishing findings for educational purposes.


Even without prosecution, the investigation process creates significant hardship through mental burden, potential reputation damage, and restrictions on sharing research findings that could benefit the broader security community.


## Proposed Solutions for Comprehensive Legal Reform


The speakers outlined their “wish list” of four essential elements for an ideal legal framework. First, legal certainty must be established so security researchers understand how to responsibly report vulnerabilities without fear of prosecution.
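For context on what “responsibly report” means in practice, the timeline logic behind coordinated disclosure policies can be sketched as below. The 90-day window is an assumed parameter here (a figure used by several well-known programs), not something the speakers specified, and `earliest_publication` is a hypothetical helper for illustration:

```python
# Sketch of coordinated-disclosure timing under an assumed policy:
# a finding may be published once a fix has shipped, or once the
# disclosure window has elapsed, whichever comes first.
from datetime import date, timedelta
from typing import Optional

DISCLOSURE_WINDOW_DAYS = 90  # assumed policy parameter; varies by program

def earliest_publication(reported: date, fixed: Optional[date]) -> date:
    """Earliest publication date for a vulnerability report under this sketch."""
    deadline = reported + timedelta(days=DISCLOSURE_WINDOW_DAYS)
    return min(fixed, deadline) if fixed is not None else deadline

print(earliest_publication(date(2024, 1, 1), None))              # 2024-03-31
print(earliest_publication(date(2024, 1, 1), date(2024, 2, 1)))  # 2024-02-01
```

The speakers’ complaint is that even researchers who follow such a timeline have no statutory guarantee; codifying it would convert an informal norm into the legal certainty they ask for.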


Second, explicit immunity should be codified in law rather than relying on prosecutorial discretion. Third, comprehensive reframing of hacking terminology and public perception is necessary to move away from purely negative connotations. Fourth, clear legal differentiation between ethical and malicious actors must be established in statutory frameworks.


The speakers advocated for harmonized international regulation, recognizing that software vulnerabilities affect multiple jurisdictions and fragmented national approaches create unnecessary complexity for companies acting internationally.


## Audience Engagement and Unresolved Implementation Issues


The question-and-answer session revealed additional complexities. One audience member asked about Germany’s progress after a failed referendum, prompting Kothe to explain the wording of the proposed “not unlawful if” exception clause and the burden of proof considerations in German legal reform attempts.


An important concern was raised about whether intent requirements might expose security researchers to intrusive surveillance practices. Another audience member, Janik, questioned potential brain drain effects, suggesting that legal uncertainty might push talented individuals toward black hat activities rather than legitimate white hat security research. Schafers responded by noting that anonymous reporting through onion networks represents one way people navigate these legal uncertainties.


The question of how far ethical hackers can proceed in their testing activities remains unresolved, as hacking involves a series of actions rather than a single act, raising complex questions about which specific actions are covered by legal justifications.


## Areas of Consensus and Approach Differences


Both speakers agreed that ethical hacking provides essential security benefits and should be clearly distinguished from malicious activities. They shared the view that current legal frameworks create harmful uncertainty for security researchers and that comprehensive legal reform including explicit statutory protection is necessary.


Both advocated for harmonized international regulation and recognized that societal perception of hacking needs fundamental change. They agreed that prosecutorial discretion approaches are inadequate solutions.


Differences emerged primarily in emphasis, with Kothe providing detailed technical legal analysis while Schafers focused more on practical implementation needs and public awareness requirements.


## Conclusions and Call to Action


The speakers established that current legal approaches fail to serve either security or justice interests effectively, creating uncertainty for beneficial actors while potentially driving talent toward malicious activities. They called for comprehensive rather than piecemeal reform, addressing statutory protections, public perception, international coordination, and practical implementation challenges.


The speakers concluded with specific action items: collecting and discussing points about better legal frameworks within companies and with lawmakers, sharing ideas about differentiating between malicious and ethical activities, working toward harmonized international regulation, and increasing public awareness through education and discussion.


The discussion highlighted that achieving comprehensive reform will require sustained effort and careful attention to unintended consequences, while recognizing the essential role ethical hackers play in protecting digital infrastructure and systems.


Session transcript

Tim Philipp Schafers: Hello and welcome to our talk Ethical Hacking for a Safer Internet. My name is Tim Philipp Schafers and today we will talk about criminal law in regard to computer crime. I’m the co-founder of Mint Secure. We are also doing ethical hacking and I’m happy to be here today with Carolin Kothe.


Carolin Kothe: My name is Carolin Kothe, I’m a trained lawyer. I’m also doing the software development in my law firm, and I’m dealing with questions of standardization and citizen knowledge as part of my role at the Liquid Legal Institute. So we will examine today the legal patchwork concerning the treatment of ethical hacking in different jurisdictions and want to show you what a harmonized framework could look like that empowers ethical hackers to strengthen our IT landscape. We will proceed in four steps: first, defining what hacking and ethical hacking actually mean, to start with a common ontology for our talk. Then we will continue by emphasizing the importance of external hackers as indispensable, and after that we will show you the main differences between jurisdictions in Europe. Last but not least, we will envision what an ideal legal framework could look like as the start of a little discussion. So what is ethical hacking? Hacking has a negative connotation, a negative narrative to it, but what it actually means is that we just do systematic tests to uncover security vulnerabilities in systems, applications and networks, and to judge the actual act we have to look at the intent, the authorization, and the methods that the hacker actually used. What people usually have in mind when they think of hacking is the malicious act, meaning somebody seeks private gain, sabotage, theft. But there is also ethical hacking, and we can distinguish ethical hacking into two subtypes: the authorized one, meaning companies that actually hire penetration test teams or run bug bounty programs to invite external testers to test their defenses, and then the other, even more highly debated group, which doesn’t have these individual contracts but works without seeking financial benefit, out of societal interest.
And because of that we will actually show you the disclosure policies that all these hackers, no matter which kind of ethical hacking group they belong to, will follow. But first we want to emphasize why we’re actually having this talk. There is an increasing surge in cybercrime, and with that comes a high increase of costs, and we don’t only mean the monetary costs but also the intangible risks. That is why the regulators have already recognized the need to put pressure on companies to invest in their security systems. We have seen this especially in the NIS2 directive, which even states that the majority of disclosure reports actually come from external testers. And the market reinforces this: there are already plenty of companies that invest heavily in bug bounty programs, where they pay those who report responsibly, and we also see this with the increase in open source usage. Because open source relies on so many eyes, it draws on the expertise of different people who know different kinds of security vulnerabilities to build up higher security barriers. So crowdsourced defense works, and open source is living proof of that. This kind of discussion has been going on for quite a while already, and to give an example of that I can hand over to Tim.


Tim Philipp Schafers: Yeah, thank you very much, Carolin. Here you can see a testimony from the L0pht hacker collective. It was kind of the first time that hackers were in direct exchange with politicians, and as you can see this was still a while ago. At that time it was one of the first occasions where it was mentioned that there are certain critical infrastructures and that real harm can exist there. But actually not that much has changed in regard to how the media perceives hackers in general. As Carolin mentioned, this is very often connoted with a negative framing. We kind of want to flip that and also want to emphasize that hacking is also a possibility to enhance security. Very often one can hear that hacking is malicious, but if we look back at the so-called hacker ethic, we see that even within this community there is a huge understanding of how to act morally. Here you can see an excerpt from the so-called hacker ethic, which basically describes how you should work as a real hacker. There you can see, for example, the idea of breaking things to enhance them and make them even more secure, which is a very basic principle that is already there. Furthermore, that you should not litter with other people’s data, and also use public data and protect private data. So this is really a common ground and understanding. In the 1980s this was first proposed and discussed, and later on it was extended by the Chaos Computer Club, for example, where many people thought about what really good hacking is in that regard. To my personal understanding, it’s really important to understand that breaking things always somehow helps with fixing things. We also have a few examples here, which might be familiar to you or not. I just want to briefly mention a few of those things.
Actually, there was the so-called Heartbleed bug, a security vulnerability within OpenSSL, which is used for transport layer security. In 2014 there was a serious vulnerability in that software, which is used by a lot of web servers on the Internet. When you visit a website over HTTPS, this software is often used on the server side to encrypt the connection. And the good thing is that people very often find these bugs and report them so that they can be fixed. This is mostly how open source software, for example, is secured. There is also the principle that you don’t disclose any information about the security vulnerability before it is fixed. This is closely related to the hacker ethic you have seen before. A second example is the so-called DEF CON voting village. DEF CON is a security conference in the US, and the basic idea is that voting machines, for example, are rigorously tested by hackers to see whether they are secure or not. Of course, this also helps to enhance security and to make sure that those components are secured. As Caro mentioned before, the NIS2 directive also aims in the direction of saying, okay, it makes sense to break certain things and fix them afterwards. This is the basic enhancement process, I would say. The third example here is from a Taiwanese activist group. To me, this is also very important, because a lot of people think of hacking only from the technical standpoint. But for a lot of hackers, and also for me personally, hacking is also handling information responsibly. In this case, people were able to make use of public information and APIs and created a more user-friendly way to disclose information. This is very often also something that hackers do. So these are just a few examples of what can be done with hacking, and this is just a short excerpt.
There are many more examples where security of software and products were enhanced in the past also by certain people, hacker collectives, and so on. And now I would hand over to Caroline so that we look at certain legal examples.


Carolin Kothe: So after Tim told you about the disclosure policies, you might think that if you follow those policies, you are not treated as a criminal. Yet statutory certainty is quite rare for ethical hackers. Most countries still equate ethical hackers with criminals. We had a referendum in Germany, which ultimately did not pass. Due to that, and due to the fact that companies usually act internationally, meaning their software is used internationally and different jurisdictions are always affected, we actually had a look into other countries. And we did find one good, rare example in the Polish penal code, which explicitly supports ethical hacking in the sense that it says no offense is committed if you do it solely for the purpose of securing a system. However, this is kind of a unicorn regulation, because other states don’t make this differentiation. They equate ethical hacking with malicious hacking in the first place. So I can hand over to Tim to explain what it actually means in practice if you equate malicious hacking with ethical hacking.


Tim Philipp Schafers: Yeah, so in general, one can imagine that it is combined with a lot of emotional pressure when you find a certain vulnerability but are unsure whether this is fully covered by the law and how to potentially report it. What we see is that ethical hackers are often threatened by the classical legal system or by how the laws are working. From my perspective, the core question is whether we want this, so that ethical hackers are put under pressure or don’t know how to report certain vulnerabilities, or whether it doesn’t make more sense to say: please hack public systems to secure them and responsibly report this. There are some computer emergency response teams around the world that also receive reports and handle them. In a few cases, of course, this helps to make systems even more secure. In other cases, certain hackers got a little bit of legal pressure and were not able to disclose or talk a lot about these topics.


Carolin Kothe: So to understand the main differences between the jurisdictions and how they treat ethical hacking, we need to clarify, at least briefly, what actually makes an act a crime and what will be punished and prosecuted. A crime usually has two conditions to it. The first one is: did you fulfill all the elements of the offense that is stated by the law? And the second one is: is this act deemed lawful or unlawful? It is unlawful if you lack any kind of legal justification for it, such as the authorization we mentioned at the start. So let’s have a look at the main differences in the jurisdictions, starting from the act itself. In every jurisdiction we have some variance of accessing or altering a system, or interfering with a system or with data. But some countries, not all of them, have an additional element of bypassing security measures in their statutes. We also have the element of authorization, sometimes as an objective element of the act and sometimes as a justification. And as stated, that makes a huge difference, because one means that even commissioned ethical hackers committed a crime but are justified, while the other is the I-didn’t-commit-a-crime-at-all variation. There is another issue with authorization, especially when it comes to third-party systems, because there is a dispute about whose authorization I actually need to be completely covered. It could be that I’m commissioned by one company, but if I’m accidentally or intentionally accessing a third-party system, I might need that system owner’s authorization too. So even commissioned hackers are always in that kind of gray area, which is obviously not what is wanted. There are also countries that have put up additional requirements that create a higher threshold, which is to the benefit of ethical hackers. One example would be Latvia, which requires extra substantial harm.
And this substantial harm requirement, though it is a vague, ambiguous term, because what does substantial actually mean, does help ethical hackers: especially if you read it as financial harm, it is usually not fulfilled by ethical hackers, and by that you get this distinction. But when we look at the subjective elements of an offense, we see that some countries set an even better threshold that distinguishes more clearly between ethical hacking and malicious attacks. The subjective element usually says you intentionally and knowingly do what is stated in the objective offense, but if you also add the intent to harm someone, or the intent to enrich yourself or a third party, to the law, which is quite easily done, and which was also done in the German referendum and is done, for example, in Austria, this intent is actually what differentiates the ethical hacker from the malicious attacker. By that you achieve this distinction, so it is an ideal way of doing it. As stated, even if you meet all these technical requirements, the act itself could still be rendered lawful if you have a justification reason. Most hackers argue either that there is a state of emergency for personal data, or that there is a state of emergency because critical infrastructure is affected and we all depend on it. This is highly debatable, because what does immediate mean? The state of emergency has often existed for quite a while already. And there is an even more severe question regarding the justification argument, because hacking is not just one act, it’s a series of actions, and the question is which of these actions are actually covered by the justification reason. So how far can I as a hacker actually go, and how far is too far? What is actually required?
But after all these issues, we want to mention at least one good thing, which is that most countries that still equate ethical hacking and malicious attacks actually do not convict or prosecute. We see, for example in the US and in France, that there are public enforcement directives. In the USA, for example, you can see on the justice.gov website that they state: as long as you follow the responsible disclosure guidelines, we won’t prosecute. Or in the case of France: if you report to the authority that is responsible for security, then you have a safe harbor, and we won’t disclose your name even if a complaint is filed. As said, you have still committed a crime, it is just not prosecuted. And this comes with a catch, because what hackers, especially ethical hackers, like to do is use what they have done for educational purposes and publish it, and they are not allowed to do that. As soon as they do, all this on-hold treatment is gone. That is also not helpful, because we want people to publish what could be a security vulnerability and exchange views on it. So to sum it up, we have basically four different legal approaches. We have explicit statutory support, like in Poland, where the law already frames ethical hackers as not being criminals, the optimal version. Then we have the second, favorable version of putting additional requirements in place that are rarely fulfilled by ethical hackers. Also good, but not optimal, because we would like the reframing of the first version. And then we have the prosecution directives, meaning, as stated for France, creating this kind of safe harbor.
The last one, which is still happening in most countries, is the least favorable one, because it leaves the hacker relying on justification reasons, that is, basically on the interpretation of different judges, and they never know what is going to happen. And then there is the fact that a prosecution investigation may still be opened, meaning they might face hard procedures, the mental load of legal battles, and even reputation loss, which especially affects those who also have another business besides their IT research. And leaving it at that, I can hand over to Tim and ask him what his wish list for an ideal legal framework would be.


Tim Philipp Schafers: Yeah, actually we thought about what might be helpful, and for a better legal framework we have outlined at least four things that are important. On the one hand, legal certainty needs to be established. As Caro mentioned, in a lot of cases when a hacker reports something, maybe a case is opened or not, but it would be great if it were very clear, so that you really know where it is possible to responsibly report certain security vulnerabilities and how to act within the legal framework. Then there is another point, explicit immunity. Like the safe harbor regulations we heard about, it should really be stated in the law that you are allowed to report certain security vulnerabilities. As mentioned before, a lot of computer emergency response teams around the world say, hey, please report security vulnerabilities to us, but in the law this case does not exist at all. So it is very important that the lawmaker also understands that ethical hacking makes sense and helps to secure systems and enhance security for companies and for our society in general. Then the reframing of hacking, so that it is not just seen as a negative thing that harms certain people or systems, but also as something very positive. In the media, as mentioned before, the term hacker is very often connoted negatively, but from our perspective this need not be the case. It is more a question of how we perceive this and how those people really act. And there is a way of acting responsibly. And then the differentiation, as mentioned before, between ethical hackers and malicious actors. This is really important and in a lot of cases not present in the law itself. The law just describes hacking as a bad thing, which might be something from the past and which we need to reframe.
Then some general actions, or something we wish from your side: on the one hand, that you collect these points about a better legal framework and raise them in discussions within your company, maybe also with lawmakers, sharing the idea and describing why it makes sense to differentiate between malicious activities and ethical activities. Then, a harmonized regulation would make sense, because even if some countries adopt the change, the problem remains that if you find a certain security vulnerability in a piece of software, it might be used in a lot of different countries and jurisdictions. If you, as an ethical hacker, report a certain vulnerability in one country and then report it in another country, and one country has a stricter hacking law, so to say, then you would face legal problems. So it would make a lot of sense to harmonize the regulation and the reporting channels in that regard. And in general, that is also why we are giving this talk: to build greater public awareness and empathy about these topics, so that they can be discussed. Because the ultimate goal from our perspective is that we really tackle security vulnerabilities, make it even harder for attackers to break systems, and for that a stronger collaboration between the private sector, the ethical hacking community and the government is needed to enhance the security level.
Because from our perspective, nowadays they are sometimes still in their own corners: the government is saying, hey, we need to prosecute hackers, because as we have seen, cybercrime is a big topic; the hacking community tries to improve software with open source projects, as we have heard; and of course the private companies have an interest in prosecuting malicious actors, but maybe also, as Caro mentioned, in rewarding ethical hacking with bug bounty programs and really using it as a driving force, which can help us to secure systems. Yeah, that maybe as an overview. So thank you very much. We would have the possibility for one or two questions, if there are any from the public, so to say. So are there any questions or examples? We have one here at the front.


Audience: Thank you. Not really an example, but just a question. See, I gather you are German. Do you have any idea where this is going in Germany? After that referendum, which didn’t fly, I understand. Any other progress in sight?


Tim Philipp Schafers: Actually, we have a new government and they have put this in the plan for the next year, so to say. So my hope is that over the next couple of years we will see some progress there. But the last referendum attempt is now gone, so it needs to be built up completely new, which is really important from our point of view, because the German law explicitly does not differentiate between ethical hacking and malicious attempts.


Carolin Kothe: The referendum, I think, was the one I talked about. The referendum that was there before the election actually included an exception for people who do it solely for the purpose of securing a system, and it has this additional intent as a requirement. But it is still a little bit up for debate whether that is an acceptable or even an ideal solution, because what they did is they just added a paragraph saying the act is “not unlawful if”, and that might seem simple, like, why does it matter? But some argue that this actually raises the question of who needs to prove what. Some ethical hackers read that as: do I now need to prove that I didn’t have malicious intent? In my view, that is not the case, because in Germany you have the principle that the prosecutor needs to prove these things. And usually, when it comes to prosecution and they need to prove a certain intent, prosecutors will have a hard time proving that you had this intent of enrichment or of harming someone. There is one little exception to that, because sometimes ethical hackers are a little incautious with their wording in their reports and ask, well, I would be happy if you would give me a reward for finding your vulnerability, and that could cause some suspicion. But except for that, I think it’s fine.


Audience: Hi, yeah, thanks for the excellent presentation. I already raised my hand a few minutes ago, and you started answering my question already. But I was wondering about this intent requirement you were just talking about, because I was wondering if it doesn’t expose security researchers to intrusive surveillance practices aimed at figuring out whether there was malicious intent. I was just wondering if you have any knowledge of something like this going on, or whether this is not possible under the current laws?


Tim Philipp Schafers: Actually, as Caro described, very often cases are opened, and when a case is opened, there is uncertainty for the people affected by it. That could also mean that security researchers might be under surveillance, so to say, because somebody might need to find out: what are they doing, why are they doing this, are they acting on their own, and so on. That’s why we need clearer regulation, to make sure that people are not threatened, and that people can responsibly report and have peace of mind in what they are doing, because they are securing systems which are very important to us.


Carolin Kothe: That’s why we graded the prosecution approach a little bit lower, because it means there is already an investigation into whether you had this intent, whether you were acting in good faith, whether you followed all the responsible disclosure guidelines. In practice, we actually know that this basically means you get called: what did you do, what was your intention? If they are then fine with you, you are good to go, but that already creates hardship for the ethical hacker, because he knows he is part of this prosecution investigation.


Audience: Hi, I’m Janik. I used to work in the industry, and what I saw at that time was that it’s also a matter of brain drain, because people would rather go in the black hat direction than the white hat direction, working exclusively over the onion network or something. Would you say that is the case today as well, or is it in a better state?


Tim Philipp Schafers: I mean, in some cases it makes sense to report security vulnerabilities anonymously, because you don’t want to have your name attached to it. I know certain cases where this happened, but from my perspective it’s very sad that things like that are needed, or that security researchers might hide their activity behind the onion network, because it should be legal; it really helps us to secure systems. From my perspective, it’s really something from the past to say, okay, this is just illegal activity and needs to be prosecuted, because we have learned a lot through hacking about how the world and its systems work and how to improve them. Every human makes mistakes, every program or computer can make mistakes, so it makes sense to recognize this and change things for the better, with regard to hacking in general and maybe also to the law in this case. Okay, I think then we are done. Thank you very much for having us, and have a nice day. Thank you.


C

Carolin Kothe

Speech speed

143 words per minute

Speech length

2100 words

Speech time

877 seconds

Hacking involves systematic testing to uncover security vulnerabilities, with the actual judgment depending on intent, authorization, and methods used

Explanation

Kothe argues that hacking itself is simply the systematic testing of systems to find vulnerabilities, and whether it’s considered ethical or malicious depends on three key factors: the hacker’s intent, whether they have authorization, and what methods they employ.


Evidence

Distinguished between malicious acts (seeking private gain, sabotage, theft) and ethical hacking done for society’s benefit


Major discussion point

Definition and Types of Ethical Hacking


Topics

Cybersecurity | Legal and regulatory


Agreed with

– Tim Philipp Schafers

Agreed on

Ethical hacking provides essential security benefits and should be distinguished from malicious hacking


Ethical hacking can be divided into two subtypes: authorized (contracted penetration testing/bug bounties) and unauthorized but benevolent (done for society’s interest without financial gain)

Explanation

Kothe categorizes ethical hacking into two distinct groups: those who have explicit contracts and authorization from companies through penetration testing or bug bounty programs, and those who work without individual contracts but act in society’s interest without seeking financial benefit.


Evidence

Examples of companies hiring penetration test teams and running bug bounty programs to invite external testers


Major discussion point

Definition and Types of Ethical Hacking


Topics

Cybersecurity | Legal and regulatory


External hackers are indispensable as the majority of disclosure reports come from external testers, as recognized by the NIS2 directive

Explanation

Kothe emphasizes that external hackers play a crucial role in cybersecurity, with most vulnerability disclosures coming from outside testers rather than internal security teams. This importance has been formally recognized by regulatory frameworks.


Evidence

The NIS2 directive explicitly states that the majority of disclosure reports come from external testers


Major discussion point

Importance and Benefits of Ethical Hacking


Topics

Cybersecurity | Legal and regulatory


Agreed with

– Tim Philipp Schafers

Agreed on

Ethical hacking provides essential security benefits and should be distinguished from malicious hacking


Crowdsourced defense works effectively, with open source software serving as proof that many eyes make security stronger

Explanation

Kothe argues that distributed security testing through multiple contributors is highly effective, using the open source software model as evidence that having many different experts examine code leads to better security outcomes.


Evidence

Open source software relies on many eyes and different expertise to build higher security barriers, with increased open source usage demonstrating this principle


Major discussion point

Importance and Benefits of Ethical Hacking


Topics

Cybersecurity | Infrastructure


Companies are increasingly investing in bug bounty programs and recognizing the value of responsible vulnerability reporting

Explanation

Kothe points out that the market is already demonstrating the value of ethical hacking through increased corporate investment in bug bounty programs that reward responsible disclosure of vulnerabilities.


Evidence

Market reinforcement through companies investing heavily in bug bounty programs that pay those who report responsibly


Major discussion point

Importance and Benefits of Ethical Hacking


Topics

Cybersecurity | Economic


Ethical hacking helps tackle the ongoing surge in cybercrime and its associated costs, both monetary and intangible

Explanation

Kothe argues that ethical hacking is essential for addressing the growing cybercrime problem, which brings not only direct financial costs but also intangible risks that affect society broadly.


Evidence

A surge in cybercrime with sharply rising costs, leading regulators to recognize the need for companies to invest in security systems


Major discussion point

Importance and Benefits of Ethical Hacking


Topics

Cybersecurity | Economic


Most countries equate ethical hacking with criminal hacking, creating statutory uncertainty for ethical hackers

Explanation

Kothe explains that the majority of legal systems fail to distinguish between ethical and malicious hacking, treating all hacking activities as criminal regardless of intent or purpose. This creates legal uncertainty for those trying to improve security.


Evidence

Statutory certainty is quite rare for ethical hackers, with most countries still equating ethical hacking with criminal hacking


Major discussion point

Legal Framework Disparities Across Jurisdictions


Topics

Legal and regulatory | Cybersecurity


Agreed with

– Tim Philipp Schafers

Agreed on

Current legal frameworks are inadequate and create uncertainty for ethical hackers


Poland provides a rare positive example with explicit statutory support, stating no offense is committed when done solely for system security purposes

Explanation

Kothe highlights Poland as an exceptional case where the legal system explicitly supports ethical hacking by providing clear statutory language that exempts security-focused hacking from criminal prosecution.


Evidence

The Polish penal code explicitly supports ethical hacking by stating that no offense is committed if the act is done solely to secure a system, described as a ‘unicorn regulation’


Major discussion point

Legal Framework Disparities Across Jurisdictions


Topics

Legal and regulatory | Cybersecurity


Agreed with

– Tim Philipp Schafers

Agreed on

Legal reform should include explicit statutory protection and clear differentiation


Legal frameworks differ in their elements: some require bypassing security measures, others have authorization as objective elements vs. justifications, creating confusion about whose authorization is needed for third-party systems

Explanation

Kothe explains that different jurisdictions structure their computer crime laws differently, with some including security bypassing as an element and others treating authorization differently. This creates particular confusion when ethical hackers might access third-party systems while working on commissioned projects.


Evidence

Some countries have additional bypassing of security measures requirements, and authorization sometimes appears as objective element vs. justification, with disputes over whose authorization is needed for third-party systems


Major discussion point

Legal Framework Disparities Across Jurisdictions


Topics

Legal and regulatory | Jurisdiction


Some countries like Latvia add substantial harm requirements, while others like Germany and Austria include intent to harm as subjective elements, better distinguishing ethical from malicious hacking

Explanation

Kothe describes how some jurisdictions have developed better legal frameworks by adding requirements that help distinguish ethical hackers from malicious actors, either through harm thresholds or intent requirements that ethical hackers typically don’t meet.


Evidence

Latvia requires additional substantial harm; Germany and Austria include intent to harm or enrich as subjective elements, which differentiates ethical hackers from malicious attackers


Major discussion point

Legal Framework Disparities Across Jurisdictions


Topics

Legal and regulatory | Cybersecurity


Even when following responsible disclosure policies, ethical hackers lack statutory certainty and may still be treated as criminals

Explanation

Kothe emphasizes that even ethical hackers who follow all best practices for responsible disclosure still face legal uncertainty and potential criminal treatment because the laws themselves don’t provide clear protection.


Evidence

Following disclosure policies doesn’t guarantee protection from being treated as criminals, with statutory certainty being quite rare


Major discussion point

Current Legal Challenges and Prosecution Approaches


Topics

Legal and regulatory | Cybersecurity


Agreed with

– Tim Philipp Schafers

Agreed on

Current legal frameworks are inadequate and create uncertainty for ethical hackers


Some countries like the US and France have prosecution discretion policies creating safe harbors, but hackers still technically commit crimes and cannot publish their findings for educational purposes

Explanation

Kothe explains that while some countries have created practical protections through prosecutorial discretion, these approaches still treat ethical hacking as criminal activity and restrict hackers’ ability to share their knowledge publicly for educational purposes.


Evidence

The US Justice Department website states they won’t prosecute if responsible disclosure guidelines are followed; France provides a safe harbor through its national security authority, but hackers have still technically committed crimes and cannot publish their findings


Major discussion point

Current Legal Challenges and Prosecution Approaches


Topics

Legal and regulatory | Cybersecurity


Disagreed with

– Tim Philipp Schafers

Disagreed on

Adequacy of prosecution discretion approaches vs. statutory reform


Legal investigations can cause hardship for ethical hackers even when they ultimately face no prosecution

Explanation

Kothe points out that even when ethical hackers are not ultimately prosecuted, the investigation process itself creates significant burden and stress for individuals who are trying to help improve security.


Evidence

Prosecution investigation procedures can impose the mental load of legal battles and reputation loss, especially affecting IT researchers


Major discussion point

Concerns About Implementation and Surveillance


Topics

Legal and regulatory | Human rights


Current prosecution approaches still involve investigation procedures that create mental burden and potential reputation loss for ethical hackers

Explanation

Kothe argues that even the more favorable prosecution discretion approaches still subject ethical hackers to investigation procedures that can cause significant personal and professional harm through mental stress and damage to their reputation.


Evidence

Investigation procedures can involve hardship, the mental load of legal battles, and reputation loss, especially affecting those who run IT research businesses


Major discussion point

Concerns About Implementation and Surveillance


Topics

Legal and regulatory | Human rights


Agreed with

– Tim Philipp Schafers

Agreed on

Current legal frameworks are inadequate and create uncertainty for ethical hackers


T

Tim Philipp Schafers

Speech speed

141 words per minute

Speech length

2060 words

Speech time

872 seconds

The hacker ethic from the 1980s establishes moral principles including breaking things to enhance security, not littering with others’ data, and protecting private information

Explanation

Schafers argues that the hacking community has long-established ethical principles that guide responsible behavior, emphasizing that true hackers follow moral guidelines about how to conduct their activities responsibly.


Evidence

Hacker ethic from 1980s describes breaking things to enhance and secure them, not littering with other people’s data, using public data and protecting private data, later extended by Chaos Computer Club


Major discussion point

Definition and Types of Ethical Hacking


Topics

Cybersecurity | Sociocultural


Agreed with

– Carolin Kothe

Agreed on

Ethical hacking provides essential security benefits and should be distinguished from malicious hacking


Breaking systems helps fix them, as demonstrated by examples like Heartbleed bug discovery, DEF CON voting village testing, and responsible information handling by activist groups

Explanation

Schafers provides concrete examples to illustrate how the process of finding and responsibly disclosing vulnerabilities leads to improved security across various domains, from web encryption to voting systems to public information access.


Evidence

Heartbleed bug in OpenSSL (2014) found and fixed through responsible disclosure; DEF CON voting village tests voting machine security; Taiwanese activist group made user-friendly disclosure of public information through APIs


Major discussion point

Definition and Types of Ethical Hacking


Topics

Cybersecurity | Infrastructure


Agreed with

– Carolin Kothe

Agreed on

Ethical hacking provides essential security benefits and should be distinguished from malicious hacking


Ethical hackers face emotional pressure and uncertainty when finding vulnerabilities due to unclear legal coverage

Explanation

Schafers explains that the current legal uncertainty creates significant psychological stress for ethical hackers who discover vulnerabilities but are unsure whether reporting them might lead to legal consequences.


Evidence

Ethical hackers are threatened by the classical legal system and face uncertainty about whether vulnerability reporting is fully covered by law


Major discussion point

Current Legal Challenges and Prosecution Approaches


Topics

Legal and regulatory | Human rights


Agreed with

– Carolin Kothe

Agreed on

Current legal frameworks are inadequate and create uncertainty for ethical hackers


Legal certainty must be established so hackers know where and how to responsibly report vulnerabilities

Explanation

Schafers argues that clear legal frameworks are essential so that ethical hackers can understand exactly what is permitted and have confidence in their ability to report security issues without legal risk.


Evidence

Computer emergency response teams around the world receive reports and handle them, but the legal framework doesn’t explicitly support this


Major discussion point

Proposed Solutions for Legal Framework Reform


Topics

Legal and regulatory | Cybersecurity


Agreed with

– Carolin Kothe

Agreed on

Legal reform should include explicit statutory protection and clear differentiation


Explicit immunity should be codified in law, not just stated by computer emergency response teams

Explanation

Schafers emphasizes that legal protection for ethical hackers needs to be formally written into law rather than just being policy statements from technical organizations, ensuring that lawmakers understand the value of ethical hacking.


Evidence

Computer emergency response teams tell people to report vulnerabilities, but this case doesn’t exist in law at all


Major discussion point

Proposed Solutions for Legal Framework Reform


Topics

Legal and regulatory | Cybersecurity


Agreed with

– Carolin Kothe

Agreed on

Legal reform should include explicit statutory protection and clear differentiation


Disagreed with

– Carolin Kothe

Disagreed on

Adequacy of prosecution discretion approaches vs. statutory reform


Reframing of hacking is needed to move away from purely negative connotations in media and public perception

Explanation

Schafers argues that society needs to change how it perceives hacking, moving beyond the purely negative framing to recognize the positive contributions that ethical hackers make to security and society.


Evidence

Media very often give the term hacker a negative connotation, but this perception needs to change based on how people actually act


Major discussion point

Proposed Solutions for Legal Framework Reform


Topics

Sociocultural | Cybersecurity


Clear differentiation between ethical hacking and malicious actors should be established in legal frameworks

Explanation

Schafers advocates for legal systems that can distinguish between hackers who help improve security and those who cause harm, rather than treating all hacking activities as inherently criminal.


Evidence

Current laws often just describe hacking as bad without differentiation, which is something from the past that needs reframing


Major discussion point

Proposed Solutions for Legal Framework Reform


Topics

Legal and regulatory | Cybersecurity


Agreed with

– Carolin Kothe

Agreed on

Legal reform should include explicit statutory protection and clear differentiation


Harmonized international regulation is necessary since software vulnerabilities affect multiple jurisdictions

Explanation

Schafers explains that because software is used globally, ethical hackers need consistent legal protection across countries to avoid facing different legal risks when reporting the same vulnerability that affects multiple jurisdictions.


Evidence

Software vulnerabilities might be used in different countries and jurisdictions, creating problems when one country has stricter hacking laws than another


Major discussion point

Proposed Solutions for Legal Framework Reform


Topics

Legal and regulatory | Jurisdiction


Greater public awareness and collaboration between private sector, ethical hacking community, and government is needed to enhance overall security

Explanation

Schafers calls for breaking down silos between different stakeholders and fostering collaboration to improve cybersecurity, arguing that currently these groups often work in isolation when they should be working together.


Evidence

Currently stakeholders are sometimes in their corners – government prosecuting hackers, hacking community improving open source, private companies using bug bounties – but stronger collaboration is needed


Major discussion point

Proposed Solutions for Legal Framework Reform


Topics

Cybersecurity | Legal and regulatory


A

Audience

Speech speed

170 words per minute

Speech length

298 words

Speech time

105 seconds

Intent requirements may expose security researchers to intrusive surveillance practices to determine malicious intent

Explanation

An audience member raises concern that legal frameworks requiring proof of intent could lead to invasive surveillance of security researchers to determine whether their motivations were malicious or benevolent.


Major discussion point

Concerns About Implementation and Surveillance


Topics

Human rights | Privacy and data protection


Current legal uncertainty may cause brain drain, with researchers potentially moving toward black hat activities rather than white hat ethical hacking

Explanation

An audience member suggests that the legal risks and uncertainties facing ethical hackers might drive talented security researchers away from legitimate white hat activities toward illegal black hat hacking where they can work anonymously.


Evidence

People would rather work in black hat direction exclusively over onion networks rather than white hat direction


Major discussion point

Current Legal Challenges and Prosecution Approaches


Topics

Cybersecurity | Legal and regulatory


Agreements

Agreement points

Ethical hacking provides essential security benefits and should be distinguished from malicious hacking

Speakers

– Carolin Kothe
– Tim Philipp Schafers

Arguments

Hacking involves systematic testing to uncover security vulnerabilities, with the actual judgment depending on intent, authorization, and methods used


External hackers are indispensable as the majority of disclosure reports come from external testers, as recognized by the NIS2 directive


The hacker ethic from the 1980s establishes moral principles including breaking things to enhance security, not littering with others’ data, and protecting private information


Breaking systems helps fix them, as demonstrated by examples like Heartbleed bug discovery, DEF CON voting village testing, and responsible information handling by activist groups


Summary

Both speakers agree that ethical hacking serves a vital security function and should be clearly differentiated from malicious activities based on intent, methods, and outcomes. They provide evidence of its effectiveness and established ethical principles.


Topics

Cybersecurity | Legal and regulatory


Current legal frameworks are inadequate and create uncertainty for ethical hackers

Speakers

– Carolin Kothe
– Tim Philipp Schafers

Arguments

Most countries equate ethical hacking with criminal hacking, creating statutory uncertainty for ethical hackers


Even when following responsible disclosure policies, ethical hackers lack statutory certainty and may still be treated as criminals


Ethical hackers face emotional pressure and uncertainty when finding vulnerabilities due to unclear legal coverage


Current prosecution approaches still involve investigation procedures that create mental burden and potential reputation loss for ethical hackers


Summary

Both speakers agree that existing legal systems fail to provide adequate protection for ethical hackers, creating uncertainty and stress even for those following best practices.


Topics

Legal and regulatory | Cybersecurity


Legal reform should include explicit statutory protection and clear differentiation

Speakers

– Carolin Kothe
– Tim Philipp Schafers

Arguments

Poland provides a rare positive example with explicit statutory support, stating no offense is committed when done solely for system security purposes


Legal certainty must be established so hackers know where and how to responsibly report vulnerabilities


Explicit immunity should be codified in law, not just stated by computer emergency response teams


Clear differentiation between ethical hacking and malicious actors should be established in legal frameworks


Summary

Both speakers advocate for comprehensive legal reform that provides explicit statutory protection for ethical hackers and establishes clear legal distinctions between ethical and malicious activities.


Topics

Legal and regulatory | Cybersecurity


Similar viewpoints

Both speakers believe in the effectiveness of collaborative, distributed approaches to cybersecurity and see market validation through increased corporate investment in ethical hacking programs.

Speakers

– Carolin Kothe
– Tim Philipp Schafers

Arguments

Crowdsourced defense works effectively, with open source software serving as proof that many eyes make security stronger


Companies are increasingly investing in bug bounty programs and recognizing the value of responsible vulnerability reporting


Greater public awareness and collaboration between private sector, ethical hacking community, and government is needed to enhance overall security


Topics

Cybersecurity | Economic


Both speakers recognize that the global nature of software and cybersecurity requires harmonized international legal approaches rather than fragmented national regulations.

Speakers

– Carolin Kothe
– Tim Philipp Schafers

Arguments

Harmonized international regulation is necessary since software vulnerabilities affect multiple jurisdictions


Legal frameworks differ in their elements: some require bypassing security measures, others have authorization as objective elements vs. justifications, creating confusion about whose authorization is needed for third-party systems


Topics

Legal and regulatory | Jurisdiction


Both speakers believe that societal perception of hacking needs to change and that current prosecutorial discretion approaches are insufficient because they still treat ethical hacking as criminal and restrict educational sharing.

Speakers

– Carolin Kothe
– Tim Philipp Schafers

Arguments

Reframing of hacking is needed to move away from purely negative connotations in media and public perception


Some countries like the US and France have prosecution discretion policies creating safe harbors, but hackers still technically commit crimes and cannot publish their findings for educational purposes


Topics

Sociocultural | Legal and regulatory


Unexpected consensus

Prosecution discretion approaches are inadequate despite being more favorable than criminalization

Speakers

– Carolin Kothe
– Tim Philipp Schafers

Arguments

Some countries like the US and France have prosecution discretion policies creating safe harbors, but hackers still technically commit crimes and cannot publish their findings for educational purposes


Legal investigations can cause hardship for ethical hackers even when they ultimately face no prosecution


Explanation

It’s somewhat unexpected that both speakers would criticize what might seem like progressive approaches (prosecutorial discretion) as still inadequate. This shows their commitment to fundamental legal reform rather than accepting partial solutions.


Topics

Legal and regulatory | Cybersecurity


The importance of educational sharing and publication of security findings

Speakers

– Carolin Kothe
– Tim Philipp Schafers

Arguments

Some countries like the US and France have prosecution discretion policies creating safe harbors, but hackers still technically commit crimes and cannot publish their findings for educational purposes


Greater public awareness and collaboration between private sector, ethical hacking community, and government is needed to enhance overall security


Explanation

The emphasis on the right to publish and share security research findings for educational purposes represents an unexpected consensus on the importance of knowledge dissemination beyond just vulnerability reporting.


Topics

Cybersecurity | Human rights


Overall assessment

Summary

There is strong consensus between the two main speakers on the fundamental issues: ethical hacking provides essential security benefits, current legal frameworks are inadequate and harmful, and comprehensive legal reform with explicit statutory protection is needed. They also agree on the need for international harmonization and societal reframing of hacking.


Consensus level

Very high consensus between the main speakers, with audience questions reinforcing concerns about current legal approaches. This strong agreement suggests a well-developed shared understanding of the problems and solutions in this field, which could facilitate coordinated advocacy for legal reform.


Differences

Different viewpoints

Adequacy of prosecution discretion approaches vs. statutory reform

Speakers

– Carolin Kothe
– Tim Philipp Schafers

Arguments

Some countries like the US and France have prosecution discretion policies creating safe harbors, but hackers still technically commit crimes and cannot publish their findings for educational purposes


Explicit immunity should be codified in law, not just stated by computer emergency response teams


Summary

While Kothe acknowledges prosecution discretion as a partial solution, Schafers emphasizes the inadequacy of this approach and the need for explicit legal immunity. Kothe presents it as one of four approaches while Schafers argues it’s insufficient because it still treats ethical hacking as criminal.


Topics

Legal and regulatory | Cybersecurity


Unexpected differences

Scope of surveillance concerns in intent-based legal frameworks

Speakers

– Audience
– Tim Philipp Schafers
– Carolin Kothe

Arguments

Intent requirements may expose security researchers to intrusive surveillance practices to determine malicious intent


Legal investigations can cause hardship for ethical hackers even when they ultimately face no prosecution


Explanation

An audience member raised concerns about surveillance implications of intent-based frameworks, which the speakers had not fully addressed despite advocating for intent-based legal distinctions. This revealed a potential tension between their proposed solutions and privacy concerns.


Topics

Human rights | Privacy and data protection | Legal and regulatory


Overall assessment

Summary

The discussion showed minimal direct disagreement between the main speakers, who were largely aligned in their goals. The primary tension was between different approaches to legal reform rather than fundamental disagreements about objectives.


Disagreement level

Low disagreement level among main speakers, with most differences being matters of emphasis rather than substance. The audience questions revealed some unaddressed concerns about implementation details, but overall there was strong consensus on the need for legal reform to protect ethical hackers. This high level of agreement suggests the speakers were presenting a unified advocacy position rather than debating competing approaches.


Partial agreements

Partial agreements

Similar viewpoints

Both speakers believe in the effectiveness of collaborative, distributed approaches to cybersecurity and see market validation through increased corporate investment in ethical hacking programs.

Speakers

– Carolin Kothe
– Tim Philipp Schafers

Arguments

Crowdsourced defense works effectively, with open source software serving as proof that many eyes make security stronger


Companies are increasingly investing in bug bounty programs and recognizing the value of responsible vulnerability reporting


Greater public awareness and collaboration between private sector, ethical hacking community, and government is needed to enhance overall security


Topics

Cybersecurity | Economic


Both speakers recognize that the global nature of software and cybersecurity requires harmonized international legal approaches rather than fragmented national regulations.

Speakers

– Carolin Kothe
– Tim Philipp Schafers

Arguments

Harmonized international regulation is necessary since software vulnerabilities affect multiple jurisdictions


Legal frameworks differ in their elements: some require the bypassing of security measures, while others treat authorization as an objective element rather than as a justification, creating confusion about whose authorization is needed for third-party systems


Topics

Legal and regulatory | Jurisdiction


Both speakers believe that societal perception of hacking needs to change and that current prosecutorial discretion approaches are insufficient because they still treat ethical hacking as criminal and restrict educational sharing.

Speakers

– Carolin Kothe
– Tim Philipp Schafers

Arguments

Reframing of hacking is needed to move away from purely negative connotations in media and public perception


Some countries like the US and France have prosecution discretion policies creating safe harbors, but hackers still technically commit crimes and cannot publish their findings for educational purposes


Topics

Sociocultural | Legal and regulatory


Takeaways

Key takeaways

Ethical hacking should be legally distinguished from malicious hacking based on intent, authorization, and methods used


Current legal frameworks in most countries treat ethical and malicious hacking equally, creating uncertainty and potential criminalization of beneficial security research


External ethical hackers are essential for cybersecurity, with the majority of vulnerability disclosures coming from external testers


Poland provides the best legal model with explicit statutory support for ethical hacking when done solely for system security purposes


Four legal approaches exist: explicit statutory support (optimal), additional requirements favoring ethical hackers, prosecution discretion policies, and reliance on justification defenses (least favorable)


Legal uncertainty may cause brain drain from white hat to black hat activities and discourage beneficial security research


Harmonized international regulation is necessary since software vulnerabilities affect multiple jurisdictions


Resolutions and action items

Collect and discuss points about better legal frameworks within companies and with lawmakers


Share ideas about differentiating between malicious and ethical activities to promote understanding


Work toward harmonized international regulation for vulnerability reporting


Increase public awareness and empathy about ethical hacking through education and discussion


Foster stronger collaboration between private sector, ethical hacking community, and government to enhance security


Unresolved issues

Germany’s new government plans to address ethical hacking legislation but timeline and specific approach remain uncertain


Debate continues over whether intent requirements place burden of proof on ethical hackers


Concerns about potential intrusive surveillance of security researchers to determine intent remain unaddressed


Question of how far ethical hackers can go in their testing activities and what actions are covered by legal justifications


Uncertainty about whose authorization is needed when accessing third-party systems during security research


Issue of ethical hackers being unable to publish findings for educational purposes under current prosecution discretion policies


Suggested compromises

Prosecution discretion policies that create safe harbors for ethical hackers who follow responsible disclosure guidelines (as implemented in US and France)


Adding substantial harm requirements to legal frameworks to create higher thresholds that favor ethical hackers


Including intent to harm or enrich as subjective elements in laws to better distinguish ethical from malicious hacking


Creating explicit exceptions in law for those acting solely to secure systems while maintaining overall computer crime protections


Thought provoking comments

We can even distinguish ethical hacking in two subtypes. The one that is authorized, meaning companies that actually hire penetration test teams or do bug bounty programs… and then we have the other even more highly debatable group which doesn’t have these individual contracts but actually is just working without seeking financial benefit but doing it out of society’s reason, society’s interest.

Speaker

Carolin Kothe


Reason

This distinction is crucial because it identifies the core legal challenge – while authorized ethical hacking has some legal protection through contracts, unauthorized ethical hacking done for societal benefit exists in a legal gray area. This nuanced categorization moves beyond the simple ‘good hacker vs bad hacker’ narrative to reveal the complexity of motivations and legal standings.


Impact

This comment established the fundamental framework for the entire discussion. It shifted the conversation from a binary view of hacking to a more sophisticated understanding that would inform all subsequent legal analysis. The presenters repeatedly returned to this distinction when discussing different jurisdictions and legal approaches.


Most countries still equate ethical hacking with criminals… And we did found one good example, one rare example in the Polish penal code, which actually explicitly supports ethical hacking in the sense that it says no offense is committed if you do it solely on the purpose of securing a system. And however, this is kind of a unicorn regulation, because other states don’t do this differentiation.

Speaker

Carolin Kothe


Reason

This observation is particularly insightful because it demonstrates that legal frameworks CAN distinguish between ethical and malicious hacking, but most choose not to. The term ‘unicorn regulation’ effectively captures how rare progressive legal thinking is in this area, highlighting the gap between what’s possible and what’s implemented.


Impact

This comment served as a pivotal moment that transitioned the discussion from theoretical concepts to concrete legal realities. It provided hope (Poland’s example) while emphasizing the widespread problem, setting up the subsequent detailed analysis of different jurisdictional approaches.


There’s another even severe question to the justification reason argument, because hacking is not just one act, it’s a series of actions, and the question is what of these actions are actually covered by the justification reason? So how far can I as a hacker actually go and how far is too far?

Speaker

Carolin Kothe


Reason

This comment reveals a sophisticated understanding of the practical complexities that legal frameworks fail to address. It moves beyond theoretical discussions to the granular reality of how ethical hacking actually works – as a process involving multiple steps, each potentially requiring separate legal justification.


Impact

This observation deepened the technical legal analysis and highlighted why simple legal fixes are insufficient. It demonstrated that even well-intentioned legal protections may be inadequate because they don’t account for the multi-step nature of security research, adding complexity to the discussion of ideal legal frameworks.


I was wondering about this intent requirement… because I was wondering if it doesn’t maybe expose security researchers maybe to intrusive surveillance practices to like figure out if there was malicious intent.

Speaker

Audience member


Reason

This question introduced an unexpected dimension – the potential for legal protections themselves to create new problems. It showed sophisticated thinking about unintended consequences and how attempts to protect ethical hackers might paradoxically harm them through surveillance.


Impact

This question elevated the discussion by introducing the concept that legal solutions might create new problems. It prompted the speakers to acknowledge that even ‘better’ legal approaches (like prosecution discretion) still involve investigations that can harm ethical hackers, reinforcing their argument for clearer statutory protections.


What I saw at that time when I worked there, that it’s also a matter of brain drain, because people would go rather in the black hat direction and not in the white hat direction, just exclusively working over the onion net or something.

Speaker

Audience member (Janik)


Reason

This comment introduced a critical societal consequence that hadn’t been explicitly discussed – that unclear legal frameworks may actually push talented individuals toward malicious activities. It connected legal policy to broader cybersecurity outcomes in a concrete way.


Impact

This observation added urgency to the discussion by suggesting that poor legal frameworks don’t just harm individual ethical hackers, but may actively contribute to cybercrime by driving talent toward illegal activities. It reinforced the speakers’ arguments about the societal benefits of clear legal protections.


Overall assessment

These key comments transformed what could have been a straightforward legal presentation into a nuanced exploration of complex policy challenges. The speakers’ sophisticated categorization of ethical hacking types and jurisdictional approaches provided a solid analytical framework, while the audience questions introduced unexpected dimensions like surveillance concerns and brain drain effects. Together, these comments revealed that the issue extends far beyond simple legal reform – it involves balancing security needs, individual rights, societal benefits, and unintended consequences. The discussion evolved from describing the problem to exploring why solutions are complex and why the stakes are higher than initially apparent, ultimately making a compelling case for urgent, thoughtful legal reform.


Follow-up questions

What is the current status and future progress of ethical hacking legislation in Germany following the failed referendum?

Speaker

Audience member


Explanation

The audience member specifically asked about progress in Germany after the referendum didn’t pass, and while Tim mentioned the new government has plans, the specific timeline and approach remain unclear


Do intent requirements in ethical hacking laws expose security researchers to intrusive surveillance practices to determine malicious intent?

Speaker

Audience member


Explanation

This question addresses a potential unintended consequence of legal frameworks that require proving intent, which could lead to privacy violations for legitimate security researchers


Is there currently a brain drain problem where potential ethical hackers choose black hat activities over white hat due to legal uncertainties?

Speaker

Janik (audience member)


Explanation

This question explores whether unclear legal frameworks are pushing talented individuals toward illegal hacking activities rather than legitimate security research, which would be counterproductive to cybersecurity goals


How can harmonized international regulation be achieved given the complexity of different legal systems and jurisdictions?

Speaker

Tim Philipp Schafers and Carolin Kothe


Explanation

While they identified the need for harmonized regulation, the practical steps and mechanisms for achieving international coordination on ethical hacking laws were not detailed


What constitutes ‘substantial harm’ in jurisdictions like Latvia that use this threshold, and how can this vague term be better defined?

Speaker

Carolin Kothe


Explanation

Carolin noted that ‘substantial harm’ is an ambiguous term that helps ethical hackers but lacks clear definition, which could lead to inconsistent application


How far can ethical hackers go in their testing activities when relying on justification reasons, and what specific actions cross the line?

Speaker

Carolin Kothe


Explanation

This addresses the practical boundaries of ethical hacking activities and what constitutes acceptable versus excessive testing when operating under legal justifications


Whose authorization is actually required when ethical hackers access third-party systems during commissioned testing?

Speaker

Carolin Kothe


Explanation

This legal gray area affects even commissioned ethical hackers and needs clarification to provide proper legal protection


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

WS #123 Responsible AI in Security Governance Risks and Innovation


Session at a glance

Summary

This discussion, moderated by Yasmin Afina from the United Nations Institute for Disarmament Research (UNIDIR), focused on responsible AI governance in security contexts and the critical role of multi-stakeholder engagement. The session was part of UNIDIR’s roundtable on AI security and ethics (RAISE), established in partnership with Microsoft to bridge global divides and foster cooperation on AI governance issues. Three expert panelists provided opening remarks: Dr. Jingjie He from the Chinese Academy of Social Sciences emphasized the importance of inclusive, multi-stakeholder approaches and highlighted AI’s positive applications in satellite remote sensing for conflict monitoring, while noting challenges like adversarial attacks. Michael Karimian from Microsoft outlined industry’s crucial role in establishing norms and safeguards, emphasizing transparency, accountability, due diligence throughout the AI lifecycle, and proactive collaboration to reduce global capacity disparities. Dr. Alexi Drew from the International Committee of the Red Cross advocated for comprehensive lifecycle management approaches to AI governance, arguing that ethical and legal considerations must be integrated at every stage rather than treated as afterthoughts.


The discussion addressed several critical concerns raised by participants, including the need for AI content authentication to prevent misinformation and violence, the risks of AI misalignment in military contexts where commanders may rely on AI systems under pressure, and questions about responsibility for mitigating AI risks in developing countries with limited technological control. All panelists agreed that responsibility for AI governance is shared among all stakeholders—governments, industry, civil society, and individuals—though each has distinct roles and capabilities. The conversation concluded with optimism that innovation and security can coexist when guided by proper values and governance frameworks, emphasizing that responsible AI development requires collective global effort rather than competitive approaches.


Keypoints

## Major Discussion Points:


– **Multi-stakeholder governance of AI in security contexts**: The discussion emphasized the critical need for inclusive engagement across various stakeholders (governments, industry, civil society, academia) to effectively govern AI applications in international peace and security, with particular focus on platforms like UNIDIR’s RAISE initiative.


– **Industry responsibility and proactive engagement**: Extensive discussion on how technology companies must take active roles in establishing norms, implementing due diligence processes, ensuring transparency and accountability, and contributing technical expertise throughout the AI lifecycle rather than treating governance as an afterthought.


– **Lifecycle management approach to AI governance**: A central theme focusing on the necessity of integrating ethical, legal, and technical governance considerations at every stage of AI development – from initial design and data selection through validation, deployment, and eventual decommissioning – rather than applying governance as a final checkpoint.


– **AI authenticity and content verification challenges**: Participants raised concerns about the security implications of AI-generated content that cannot be easily distinguished from human-created content, discussing the need for technical solutions like digital signatures to identify AI-generated materials and prevent misuse for disinformation or conflict instigation.


– **Military applications and human-machine interaction risks**: Discussion of specific challenges in military contexts, including the risk of AI systems becoming misaligned during battlefield use, commanders’ over-reliance on AI decision-support systems under pressure, and the importance of maintaining compliance with international humanitarian law in AI-enabled military operations.


## Overall Purpose:


The discussion aimed to explore responsible AI governance frameworks for international peace and security through a multi-stakeholder lens, examining how different actors (UN institutions, industry, civil society, military) can collaborate to ensure AI technologies enhance rather than undermine global stability and security.


## Overall Tone:


The discussion maintained a professional, collaborative, and constructive tone throughout. It began with an informative and academic approach during the introductory presentations, then became more interactive and practically-focused during the Q&A session. Despite addressing serious security concerns and potential risks, the conversation remained optimistic about the possibility of achieving responsible AI governance through collective action. The tone was notably inclusive, with moderators actively encouraging participation from diverse geographic and sectoral perspectives, and speakers consistently emphasizing shared responsibility rather than assigning blame.


Speakers

– **Yasmin Afina** – Researcher from the United Nations Institute for Disarmament Research (UNIDIR), moderator of the session on responsible AI in security, governance, and innovation


– **Jingjie He** – Dr. from the Chinese Academy of Social Sciences, researcher working on AI and satellite remote sensing projects


– **Michael Karimian** – Representative from Microsoft, involved in the roundtable for AI security and ethics (RAISE)


– **Alexi Drew** – Dr. from the International Committee of the Red Cross (ICRC), expert on lifecycle management approach to AI governance in security


– **Bagus Jatmiko** – Commander in the Indonesian Navy, researcher in AI and information warfare within the military domain and defense sector


– **Audience** – Multiple audience members who asked questions and made comments during the session


**Additional speakers:**


– **Francis Alaneme** – Representative from the .ng domain name registry


– **George Aden Maggett** – Judge at the Supreme Court of Egypt and honorary professor of law at Durham University, UK


– **Rowan Wilkinson** – From Chatham House (mentioned in chat/questions but did not speak directly in the transcript)


Full session report

# Comprehensive Report: Responsible AI Governance in Security Contexts – Multi-Stakeholder Perspectives and Collaborative Frameworks


## Executive Summary


This discussion, moderated by Yasmin Afina from the United Nations Institute for Disarmament Research (UNIDIR), examined responsible artificial intelligence governance in international peace and security contexts. The session formed part of UNIDIR’s Roundtable on AI Security and Ethics (RAISE), a collaborative initiative established in partnership with Microsoft to foster international cooperation on AI governance issues.


The discussion brought together perspectives from academia, industry, humanitarian organisations, military institutions, and the judiciary to explore multi-stakeholder approaches to AI governance challenges. Through interactive polling, structured presentations, and Q&A dialogue, participants examined questions about responsibility, accountability, and practical implementation of AI governance frameworks whilst addressing concerns about technical limitations, power imbalances, and real-world consequences of AI deployment in security contexts.


## Session Context and UNIDIR/RAISE Introduction


Yasmin Afina opened by explaining UNIDIR’s role as the UN’s dedicated research institute on disarmament, established during the Cold War to provide neutral space for dialogue on security issues. She positioned the RAISE initiative as continuing this tradition by creating depoliticised forums for AI governance discussions that can overcome competitive dynamics and distrust hindering international cooperation.


The moderator emphasised that whilst AI presents opportunities for enhancing international peace and security, it also introduces complex challenges requiring collaborative approaches across traditional boundaries. She noted the session’s connection to broader international efforts, including ongoing discussions around the Global Digital Compact and other UN-sponsored platforms addressing AI governance.


## Interactive Opening: Stakeholder Perspectives


Using Slido polling (code 179812), Afina engaged participants on two key questions about AI’s role in international peace and security and the multi-stakeholder community’s effectiveness in addressing governance challenges.


Participant responses highlighted diverse concerns including:


– Censorship and surveillance capabilities


– Fake news and misinformation


– Data privacy violations


– Facial recognition at borders


– Autonomous weapons systems


– Cybersecurity threats


These responses established the broad scope of AI governance challenges that would be addressed throughout the session.


## Expert Panel Presentations


### Academic Perspective: Dr. Jingjie He, Chinese Academy of Social Sciences


Dr. He emphasised the critical importance of multi-stakeholder approaches, arguing that technological challenges inherently require interdisciplinary solutions. She highlighted positive applications of AI in peace and security contexts, specifically referencing Amnesty International’s Darfur project as a successful example. This initiative used Element AI technology with 29,000 volunteers to analyse satellite imagery for conflict monitoring, demonstrating AI’s potential as a tool for humanitarian purposes.


However, Dr. He acknowledged significant technical challenges, particularly adversarial attacks that make AI systems fragile and governance discussions complex. She introduced the concept of AI as both a “force multiplier” and “threat multiplier,” noting that poorly designed systems create risks for both civilian populations and military forces themselves.


Regarding transparency, Dr. He expressed scepticism about algorithm openness due to intellectual property concerns and industry practices of protecting core technologies. She concluded by emphasising shared responsibility for AI governance whilst acknowledging the need for better knowledge sharing between technology developers and decision-makers, particularly in military contexts.


### Industry Perspective: Michael Karimian, Microsoft


Karimian outlined industry’s role in establishing norms and safeguards for responsible AI deployment in security contexts. He emphasised that companies are uniquely positioned to identify risks early in development processes and have obligations under UN guiding principles to ensure their products are not used for human rights abuses.


He stressed industry responsibility extends beyond compliance to proactive engagement in norm-setting and standard development. Karimian advocated for clear standards ensuring AI systems used in security applications are transparent about their capabilities and limitations, with robust accountability mechanisms including documentation, monitoring, and auditing capabilities.


Addressing global capacity disparities, Karimian noted the importance of proactive collaboration to reduce inequalities in AI governance capabilities between developed and developing nations. He suggested industry has a role in supporting capacity-building initiatives, particularly where regulatory frameworks are still emerging.


### Humanitarian Perspective: Dr. Alexi Drew, International Committee of the Red Cross


Dr. Drew presented a comprehensive lifecycle management framework for AI governance, arguing that governance must be integrated at every stage from initial design through decommissioning. She identified three critical stages:


1. **Development stage**: Ensuring compliance requirements like International Humanitarian Law are built in from the outset


2. **Validation stage**: Addressing risks of localization where systems may not work as intended in different contexts


3. **Deployment stage**: Managing inscrutability risks where users may not understand system limitations


Dr. Drew emphasised that systems should be designed, trained, and tested with compliance requirements integrated rather than retrofitted, preventing governance from becoming a “checkbox exercise.” She highlighted that all stakeholders possess various levers of influence, including participation in standard-setting organisations and procurement strategies to enforce governance requirements.


Addressing innovation concerns, Dr. Drew rejected the notion that responsible AI governance requires trade-offs between innovation and security, characterising this as a design challenge rather than a zero-sum game. She also stressed the importance of training military users to understand AI system capabilities, limitations, and failure modes.


## Audience Q&A and Discussion


### Content Authenticity Challenges


Francis Alaneme from the .ng domain name registry raised concerns about AI-generated content that cannot be distinguished from human-created materials, highlighting security implications of AI-generated video content being used to spread false information and potentially instigate violence.


Dr. Drew responded by mentioning the Content Authenticity Initiative (CAI) as an example of industry efforts to develop technical solutions for content authentication, whilst acknowledging implementation challenges in balancing comprehensive coverage with practical feasibility.


### Military Applications and Human-Machine Interaction


Commander Bagus Jatmiko, an Indonesian Navy officer and researcher in AI and information warfare, raised concerns about AI systems becoming misaligned during battlefield use. He introduced the concept of AI systems becoming “psychopathic” in their tendency to provide answers users want to hear rather than accurate assessments, warning this could be dangerous when commanders are under pressure and may accept AI-generated answers that confirm existing beliefs rather than challenge assumptions.


This highlighted the critical importance of training and education for AI system users in high-stakes environments where consequences of poor decision-making can be severe.


### Global Power Imbalances and Accountability


Judge George Aden Maggett from the Supreme Court of Egypt raised fundamental questions about responsibility and accountability, particularly regarding power imbalances between technology companies in developed countries and affected populations in developing nations. His intervention connected abstract policy considerations to real-world consequences, including civilian casualties in current conflicts involving AI-enabled weapons systems.


### Algorithm Transparency and Openness


Rowan Wilkinson from Chatham House asked about recent policy shifts regarding AI openness, prompting discussion about balancing transparency requirements with security and commercial considerations. This highlighted ongoing tensions between demands for accountability and practical constraints on algorithm disclosure.


## Key Themes and Takeaways


### Shared Responsibility Framework


All speakers agreed that responsibility for AI governance is distributed across stakeholders rather than concentrated in any single entity. This encompasses governments, industry, civil society, international organisations, and individuals, though speakers emphasised different implementation mechanisms.


### Multi-Stakeholder Engagement as Foundation


Strong consensus emerged that effective AI governance requires inclusive participation from diverse stakeholders, bringing together different perspectives and expertise to address complex technological challenges. Platforms like UNIDIR’s RAISE initiative provide valuable neutral spaces for knowledge-sharing that can transcend geopolitical constraints.


### Lifecycle Management Approach


Both industry and humanitarian perspectives converged on integrating governance considerations throughout the entire AI system lifecycle. This approach prevents governance from becoming mere compliance whilst ensuring ethical and legal considerations are substantively integrated into system design and operation.


### Technical Implementation Challenges


Several technical challenges remain unresolved, including practical implementation of content authentication systems, addressing adversarial attacks on AI systems used for peace and security monitoring, and developing effective mechanisms for preventing AI misalignment in operational contexts.


### Sustainability and Resource Concerns


Dr. He specifically noted funding challenges facing platforms like RAISE, emphasising that effective governance requires sustained commitment and resources from all stakeholders. This challenge could significantly impact long-term effectiveness of collaborative governance efforts.


## Conclusion


This discussion demonstrated both the complexity of AI governance challenges in security contexts and the potential for collaborative solutions. The consensus on fundamental principles of shared responsibility, multi-stakeholder engagement, and lifecycle management provides a foundation for developing governance frameworks that can enhance international peace and security whilst ensuring responsible AI development and deployment.


However, unresolved questions about sustainability, implementation, and accountability highlight significant work remaining to translate these principles into effective practice. The session’s combination of technical expertise, practical experience, and moral urgency suggests that effective AI governance will require continued collaboration across diverse stakeholder groups, sustained commitment to addressing global inequalities, and ongoing adaptation to evolving technological capabilities.


The optimistic perspective that innovation and security can coexist when guided by proper governance frameworks provides hope that these challenges can be addressed through collective effort rather than competitive approaches, though significant technical and institutional challenges remain to be resolved.


Session transcript

Yasmin Afina: Good afternoon from Oslo or good morning, wherever you are tuning in from. My name is Yasmin Afina, researcher from the United Nations Institute for Disarmament Research. And I have the pleasure of moderating today’s session on responsible AI in security, governance, and innovation. For those who are joining us in person, may I please highly encourage you to come to the front to this beautiful, almost roundtable to allow us to have a free-flowing roundtable discussion, because this session forms part of our project related to the roundtable on AI security and ethics, and in the spirit of having a roundtable, I do highly encourage everyone in the room who has just joined us today to join us in the front because I would like this to be very interactive and highly engaging. And for those who are joining online, thank you very much for joining us online, wherever you are. And as we are using Zoom, I do encourage you to use the raise hand function if you would like to take the floor, as again, this is a very highly interactive discussion. So again, my name is Yasmin Afina and I am very pleased to be joined today by three excellent speakers who, for those who are in the room, we do not see them yet, but for those online, you will see them. Dr. Jingjie He from the Chinese Academy of Social Sciences, Michael Karimian from Microsoft, and Dr. Alexi Drew from the ICRC. And before we get into the kick-off remarks from our excellent panelists, I would like to spend five minutes to introduce you a little bit to the roundtable for AI security and ethics, and my institute, the United Nations Institute for Disarmament Research. So, at a glance, UNIDIR is an autonomous institution within the United Nations. You can think of us like a think tank within the UN ecosystem. 
We’re independent from the Secretariat, and we were established in 1980, at the height of the Cold War, to ensure that the deliberations of states are well-informed and evidence-based in the area of disarmament. Of course, the landscape of disarmament today is much different from what it was in 1980, and so we are conducting evidence-based policy research, we are conducting multi-stakeholder engagement, and we want to make sure that we facilitate dialogue where there is none, including on sensitive issues such as AI in security. So, one of the priority areas of UNIDIR’s work relates to AI and autonomy in security and defense, including the military domain, and what we’ve noticed is that, in the light of this technology’s highly unique nature, we understood very quickly the importance of multi-stakeholder engagement and perspectives to obtain input on the implications of AI for international peace and security. So, we saw the need to provide a platform for open, inclusive, and meaningful dialogue. We saw this need as well to warrant public trust and legitimacy, and to ensure that these discussions are not just a one-way discussion, but are actually coming both from the bottom up and from the top down. We also want to make sure that we improve cross-disciplinary literacy, and so on and so forth. You may see on the slides a number of very different incentives as to why the multi-stakeholder perspective is indeed important on this issue. That is why, in March 2024, UNIDIR joined forces with Microsoft, in partnership with a series of other stakeholders, to establish the Roundtable for AI, Security and Ethics, RAISE. Our idea is to bring together experts and thought leaders from all around the world. So we have, for example, experts from China, from Russia, from the United States and United Kingdom, but also from Namibia, Ecuador, Kenya, India. 
We really want to make sure that we bridge divides and bridge the conversation where there is none on these issues of AI and security. We aspire to lay the foundation for robust global AI governance grounded in cooperation, transparency, and mutual learning, with the idea that we should overcome any sense of competitiveness or distrust, and that where there is any need for building trust, this is where it would be built. We also would like to use RAISE to foster and facilitate compliance with international law and ethical norms, in the light of their importance in this age of innovation, warfare, security, and destabilization. Finally, we would like to complement and reinforce responsible and ethical AI practices in the security and defense domains, again, in an area where we are hoping to disrupt monopolies in the hands of the few, and to ensure that all voices are heard from all layers of society. Before we hear from our excellent panelists, I would like to provide an opportunity for participants who are joining online, but also in person, to share their thoughts via Slido. For those who are unfamiliar with Slido, may I please ask our technicians to share the Slido presentation on screen. Thank you very much. First, before we start, I wanted to get your sense of what you think AI and international peace and security means for you. There is no right or wrong answer. And for that, what I would encourage you to do is to go on slido.com and put in the code 179812. For those who see the screen, use the QR code to join the conversation. And you will see a text box where you’ll be able to provide your input on what you think AI and international peace and security means for you. And again, there is no right or wrong answer. It really is for us to understand your thoughts and perspectives, to set the scene, and to see where things are at. Because, of course, it is important for us to share with you the work that we’re doing. 
But it’s also important for us to engage with the incredibly diverse IGF community to see what you think about this issue. So I will leave this poll open for a few minutes while you put in your contributions on what you think AI and international peace and security means for you. And the results should be showing in. Perhaps I can ask our technicians to see if there’s any input that has been added. So it is not showing on the screen, but, oh, sorry, if you can please come back to Slido. Sorry, I’m bugging the technicians. I can see, for those who have joined online, that there are quite a few responses already. We see, for example, censorship, and fake news that has been generated using AI. I see that AI could be used for good or for bad, and I really appreciate this balanced approach to looking at AI for international peace and security. I see issues related to data privacy and threats to human intelligence. I also see the use of AI in the military and law enforcement, and how they are used responsibly in their respective fields; facial recognition at borders; countering the proliferation of AI-enabled weapons systems; and I also see automated target selection. So a very wide range of responses, and please keep adding your responses to this question. May I please ask our colleagues from IT to share Slido again, and this time for the next question. Oh, perfect, now we can see them. So I think that this is great. Thank you very much to our IT team. I’ve heard that there was a connectivity issue, so please bear with us as we navigate the hybrid space of discussions for this session. So now I’m going to get us to the second question: what should be the role of the multi-stakeholder community in the governance of AI and international peace and security? For those who are on Slido, it’s the same link. If you just refresh your page, or it should be refreshing on its own, please add your responses, and they should start appearing. 
And for those who have just joined us, may I encourage you to open slido.com using your laptop, or your phone by scanning the QR code, to provide your input on what you think should be the role of the multi-stakeholder community in the governance of AI and international peace and security. I see already one response on agreeing on and implementing norms. I do encourage everyone to keep sharing their reflections on what they think should be the role of the multi-stakeholder community, because that will also help us at UNIDIR to inform our work on this and on how to better engage with the multi-stakeholder community. I see big commitments, and I would love to hear your thoughts, when we open the floor, on what sort of commitments you think the multi-stakeholder community could have a role in. I also see industry and AI, and perhaps, again, when we open the floor for discussions, I would love to hear your thoughts on what they mean. And once again, for those who are joining us in the room physically, may I highly encourage you to come onto the stage, to join us in the middle, to enable us to have an interactive roundtable discussion. I see a lot of input in the Slido, and I see, for example, trust-building, standards, proposed solutions, technical standards again, actionable legislation, and responsibility and peace. I do appreciate you putting a lot of input into these discussions, and now that we’ve had this little warm-up exercise, may I please ask our IT colleagues to get us back to the PowerPoint for me to introduce once again our speakers for today’s discussions. So, the way it will work for the rest of the session, as we have 45 minutes: I will be providing the floor to the three speakers who are joining us online for kick-off remarks, which are supposed to be introductory and to generate questions and answers on select issues related to AI and international peace and security. 
And then I will open the floor, both for those who are joining us online and for those in person, for a discussion, perhaps with reactions to what you’ve heard, perhaps to elaborate a bit on the questions and answers that you have shared with us, and also perhaps any questions you have for our panellists and speakers who are joining us today. So, again, for those who have just joined, we are joined virtually by three excellent speakers: Dr Jingjie He from the Chinese Academy of Social Sciences, Dr Alexi Drew from the International Committee of the Red Cross, and Michael Karimian from Microsoft. For those who are joining in person, I assure you they are online, and they should be appearing on the screen when it is their turn to speak. So now may I please turn online and ask Jingjie to provide us with her opening remarks. May I please ask the IT colleagues to show Jingjie on the screen. Jingjie, you have the floor. Thank you very much. Thank you, Yasmin. Very nice to be meeting you all, and thank you for the invitation. Always a pleasure to join the conversation and to see your faces on screen.


Jingjie He: So I think that inclusive engagement across stakeholders is essential for the effective global governance of artificial intelligence, and the main reason is that technological challenges, I believe, can often be addressed through technological solutions; however, the identification of the true nature of artificial intelligence challenges requires an interdisciplinary and multi-stakeholder approach. Such an inclusive approach ensures that a wide range of knowledge, expertise, and perspectives, often complementary in nature, are taken into account in shaping responsible and equitable understandings, norms, and policies for AI development and deployment. So here I want to take the opportunity to really underscore the importance of UN-sponsored platforms, such as UNIDIR’s RAISE that Yasmin just introduced, the IGF, the Global Digital Compact, etc. These platforms play a critical role in enabling multi-stakeholder engagement. What sets them apart from more state-centric mechanisms is their unique ability to provide neutral, depoliticized, and inclusive spaces. Within those platforms, knowledge-sharing and confidence-building can take place beyond the constraints of geopolitical tensions and national interests, allowing for more constructive, balanced, and therefore more promising outcomes. But of course, one dilemma that I want to point out is that such platforms, especially RAISE, do face funding issues and questions about how to make the project more sustainable. I remember that the first time I attended RAISE, Yasmin was sharing the concern that this project should be made more sustainable. I do believe that Yasmin and Michael have done a great job supporting this program, but I also believe that this should be a more collective effort for all of us, to bring resources and contribute to this project and these communities. 
So Yasmin also asked me to provide some concrete examples of how AI fosters international peace and security. One of my recent projects is about AI and satellite remote sensing. Satellite remote sensing has been increasingly recognized as a critical tool for international peace and security, and in recent years there has been growing interest in applying AI and machine learning to enhance the analytical efficiency of satellite imagery. One example is Amnesty International, which, in collaboration with a company called Element AI as well as almost 29,000 volunteers, developed tools to automatically analyze satellite imagery for monitoring the conflict in Darfur. This is just one of many examples of how AI can empower satellite imagery analysis and benefit international peace, security, and non-proliferation missions. Of course, I always care about the challenges. One potential challenge, as my previous research shows, is the threat of adversarial attacks on such systems, which makes the systems more vulnerable and our discussion more interesting and challenging. So I will stop for now, and I will be happy to answer questions. Yasmin?
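[Editor's note: the adversarial-attack vulnerability Jingjie mentions can be illustrated with a toy sketch. The code below is not from the session; it uses a random linear classifier as a stand-in for a real satellite-imagery model (all names and numbers are illustrative) and shows the fast gradient sign method, where a small structured perturbation to every pixel reliably pushes the model's score in the attacker's chosen direction.]

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "satellite image" classifier: logistic regression on flattened pixels.
# The weights are random stand-ins; a real imagery model would be a trained CNN.
n_pixels = 64
w = rng.normal(size=n_pixels)
b = 0.0

def predict(x):
    """Probability that the image contains the feature of interest."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

# A clean input the model classifies as positive.
x = rng.normal(size=n_pixels)
if predict(x) < 0.5:   # flip sign so the clean example starts positive
    x = -x

# Fast Gradient Sign Method (FGSM): nudge each pixel by +/- epsilon in the
# direction that decreases the positive-class score. For a linear model the
# gradient of the score with respect to the input is simply w.
epsilon = 0.25
x_adv = x - epsilon * np.sign(w)

print(f"clean score: {predict(x):.3f}")
print(f"adversarial score: {predict(x_adv):.3f}")  # pushed toward 0
```

The perturbation is bounded per pixel by epsilon, so the adversarial image can look almost identical to the clean one while the score drops substantially, which is why validation of such monitoring systems has to consider deliberately crafted inputs, not just natural noise.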


Yasmin Afina: Thank you very much, Jingjie, for this very short and crisp but also very thought-provoking set of introductory remarks. I do appreciate you noting the difficulty that the UN is currently facing in fundraising; as a voluntarily funded institute, UNIDIR relies on voluntary contributions, so I appreciate you noting the dire situation we face today in enabling such dialogue to happen. I also appreciate you sharing the importance of AI in enhancing the ability to analyze and monitor conflicts, including by civil society organizations. It shows the potential of AI to enhance international peace and security, balanced, of course, by the risks that may surface, including adversarial attacks on these AI technologies. One key aspect you also shared with us is the importance of engaging all kinds of stakeholders, and we are very fortunate today to be joined by Michael Karimian from Microsoft. Michael, may I now ask you to provide us with your kick-off remarks, particularly on what you think is the role of industry in supporting responsible AI practices for international peace and security. Michael, over to you.


Michael Karimian: Thank you, Yasmin. It’s a pleasure to join you all, and thank you, Yasmin, not just for facilitating today’s discussion, but of course for being an essential partner in the work of the Roundtable for AI, Security and Ethics. As we’ve heard, and as I think we already know, AI is rapidly reshaping and will continue to reshape international security dynamics, and the governance frameworks needed to ensure its responsible use urgently require quite robust multi-stakeholder engagement, just as Jingjie outlined. And industry in particular has a critical role to play, obviously as developers and deployers of AI technology, but also, I think, as proactive stakeholders in establishing norms and standards and safeguards to mitigate risks associated with AI in security contexts. And the Roundtable for AI, Security and Ethics has already quite clearly highlighted that, while states and international organizations are vital in setting norms and regulations, industry in particular can make quite practical contributions to governance, which I think can’t be overstated. So, for example, industry actors are often the first to encounter and understand AI risks and vulnerabilities, in part due to their direct involvement in developing and deploying these technologies. That can put industry players in a unique position to provide expertise on technical feasibility, operational impacts, and risk mitigation strategies, which are of course essential for effective governance. And through RAISE, industry stakeholders, including Microsoft, have already identified several key contributions that can be made. Firstly, transparency and accountability. Industry must develop and adhere to clear standards that ensure AI systems used in security applications are transparent in their capabilities and limitations, with accountability mechanisms clearly articulated. 
And that involves quite robust documentation practices, as well as continuous monitoring and the capability to audit AI systems, which together, I think, provide greater predictability and trust. Second, and relatedly, is the topic of due diligence. The Secretary-General’s upcoming report and ongoing UN General Assembly discussions will likely continue to underscore the importance of due diligence, because industry actors have a responsibility to implement robust due diligence processes across the AI lifecycle, from design and development through to deployment and eventual decommissioning. And this aligns closely with lifecycle management approaches already being emphasized by UNIDIR, by the ICRC in its submission to the Secretary-General, and by others. Third is the topic of proactive collaboration. Industry should actively contribute technical expertise and capacity-building initiatives, particularly in regions where regulatory frameworks are still emerging. Effective governance, of course, requires global equity in knowledge and resources, and so initiatives such as RAISE, but also REAIM, the Responsible AI in the Military Domain process, promote practical and inclusive governance strategies which serve as a strong foundation. And industry collaboration through those platforms can, of course, further amplify these efforts. Fourth, on the topic of reducing disparities through capacity-building and knowledge transfer, industry really does have the significant technical and expert resources needed to support governments, civil society, and international organizations, particularly those from the global south, in understanding, assessing, and mitigating AI risks. So strengthening global capacity is really key to ensuring inclusive governance and avoiding exacerbating already existing inequalities in security capabilities. 
I guess if we look ahead, industry’s engagement should continue to be structured, it should continue to be sustained, and it should, of course, be substantive. And this means participating in and supporting frameworks established through the United Nations and other multilateral venues, as well as initiatives such as RAISE, to collectively shape responsible AI governance in security. And I think that we can ensure that our collective, or collaborative, efforts lead not only to innovation but also to enhanced global stability, resilience, and trust. I look forward to the discussion.


Yasmin Afina: Thank you very much, Michael, for a very comprehensive overview of what you think should be the role of industry in promoting and enhancing responsible AI practices for international peace and security, both as developers and as deployers. I appreciate your points on industry needing to be a proactive actor in mitigating the risks and harms that may emerge from these technologies. I also note from your remarks the importance of implementing feasible and effective risk mitigation mechanisms throughout the life cycle of AI technologies for international peace and security. And we are very fortunate to be joined by Dr. Alexi Drew from the International Committee of the Red Cross, who has been our expert within RAISE and who has been relentlessly promoting the importance of a life cycle management approach to the governance of AI and security. So now may I please ask Alexi to take the floor and also share her remarks on this point. Thank you very much, Alexi.


Alexi Drew: Thank you very much, Yasmin, and thank you, Michael, for setting the stage for me. It makes it a lot easier for me to continue my crusade to make everyone aware of life cycle management and of the necessity of approaching, understanding, and actually engaging with it, rather than treating it as a secondary feature. And that secondariness is actually one of the key reasons why life cycle management is critical, because we’ve been talking about governance quite a bit. We’ve been talking about the need to be responsible and ethical in how we design, develop, and deploy these systems. But governance is not something that can be added on after the fact. It’s not an afterthought. It needs to be something which is designed to fit each stage of the life cycle. Now, for the purposes of this discussion, I’m going to break life cycles down into very simple segments. In this case, we’re going to talk about the development stage, the validation stage, and the deployment stage. And I thought it would be helpful to give you a series of risks, with hypothetical contexts in which those risks are actually materializing now, so that we can understand why governance at each stage is important. So one of these risks, as I and the ICRC see it, is that the trend we have been pursuing towards a localisation of aid and assistance is reversed through the use of systems which are, by default and by design, not local. So, for example, at the development stage you might use data which is taken from the global north to train a model which is designed to be deployed in the global south, and it doesn’t reflect the local realities. 
A predictive model for the delivery of humanitarian aid based on this, for example, is going to prioritize the delivery of aid to certain groups as opposed to others based upon the data that has been selected for it, which is not applicable to the local context, and that is effectively going to create a compounding problem. The validation stage could also create problems if localisation is not properly taken into account. If you test something outside of the local context in which you intend to deploy it, you are not actually testing for the scenario, the circumstance, and the context in which the thing is going to be used. So your ability to be sure that it is delivering as expected is undermined, and you are ignoring the social, economic, and political dynamics of the context in which it is ultimately going to be deployed. Our clean test beds, which might be suitable for some circumstances, are not likely to be suitable if you try to use the same system in multiple places. At the deployment stage, for example, we might be using aid algorithms that worked in one context but systematically exclude marginalized communities in another. We might have a refugee processing system which, trained on one population, works perfectly well but fails catastrophically when applied to a population with slightly different linguistic characteristics, social characteristics, and economic needs and requirements. When you take these localised issues across the development, validation, and deployment stages, you get a compounding of problems and risks which you cannot then remove with a set of governance which is attached to the end of a life cycle. It is something which has to be addressed at each of these stages to ensure that these risks are avoided and not compounded. There is also the problem of inscrutability. Now, inscrutability is almost the opposite of the transparency and explainability that Michael mentioned earlier. 
But sometimes inscrutability is a design choice that takes place at one or several points in the life cycle. At the development stage, rather than choosing an open-source, well-understood model, you might choose a proprietary algorithm which is more niche and more sophisticated, a complex neural network selected because it seems more appropriate and more capable, when actually a simpler, more explainable model could do the job; that is going to introduce inscrutability into the system at the development stage. Further on, at the stage where you are actually validating or generating a model, you are then going to create a system which is so complex that not only can the end users, the subjects of the system, not understand the decisions being made, but the operators themselves may not be able to either, particularly if they were not the designers and simply purchased the systems from those asked to procure them. What is the real-world impact of this? Well, it means that humanitarians or aid suppliers on the ground cannot explain to individuals why the decisions are being made as they are. They cannot explain why aid is not being delivered to one group while it is to another. They cannot explain why some resources are available in one place and not another. That undermines trust both in the humanitarian sector and in the systems being used, which further means that, in the long term, this life cycle of redeployment and redesign is going to have a less than effective impact on the very communities and the very peacebuilding that it is designed to support. And the final point I would raise concerns the term life cycle itself: what do we mean by cyclical, and what does that actually imply for how things are used? Well, the problem is that if you look at a life cycle as a series of stages that begin at one end and produce a tool at the other, and then perhaps cycle round again, it seems like a conveyor belt. 
It could be seen, and operated on, by the designers, procurers, and ultimate deployers of these systems as a series of checkboxes, moving from one stage to the next once certain things have been completed. But what that means is that, rather than a series of checks and balances and means of ensuring that these risks are not compounded, we have a series of things which are simply checked off as complete, without sufficient evidence to that effect and without the ability to understand whether the system is suitable for what it is being used for. And when that is then recycled, and the requirements might be changed, and this tool is deployed in a different context for a different purpose, we find ourselves further compounding the issues that we saw before. So what I would like you to take away from this is that, if we are to ensure that these systems are used in a manner which is humane, ethical, and principled, adding to our security and building peace rather than creating, or rather recreating, the conditions that have led to insecurity, unethical practice, and risk to civilians, combatants, and other already highly impacted and at-risk individuals, we need to ensure not only that we have a shared understanding of how these tools are made at the different stages of their life cycle, but that we come up with a means of technical, ethical, and humanitarian governance which intersects with all of these stages effectively. And I’ll leave it there and look forward to your questions.
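[Editor's note: Alexi's contrast between checkbox sign-offs and checks backed by evidence can be sketched in code. The following is not an ICRC framework; the stage names, evidence notes, and contexts are purely illustrative. The sketch enforces two of her points: a stage cannot be signed off without evidence from the prior stages, and moving the system to a new deployment context invalidates the validation and deployment sign-offs, forcing re-validation where the system will actually be used.]

```python
from dataclasses import dataclass, field

# Simplified stage ordering, following the segments used in the discussion.
STAGES = ["development", "validation", "deployment"]

@dataclass
class LifecycleRecord:
    """Tracks which stages a system has cleared, and on what evidence."""
    intended_context: str
    evidence: dict = field(default_factory=dict)  # stage -> evidence note

    def sign_off(self, stage: str, evidence_note: str) -> None:
        # Earlier stages must already hold evidence: no skipping ahead.
        prior = STAGES[:STAGES.index(stage)]
        missing = [s for s in prior if s not in self.evidence]
        if missing:
            raise ValueError(f"cannot sign off {stage}: no evidence for {missing}")
        if not evidence_note.strip():
            raise ValueError("a sign-off is a check with evidence, not a checkbox")
        self.evidence[stage] = evidence_note

    def redeploy(self, new_context: str) -> None:
        # A new context invalidates validation and deployment sign-offs:
        # the system must be re-validated where it will actually be used.
        if new_context != self.intended_context:
            self.evidence.pop("validation", None)
            self.evidence.pop("deployment", None)
            self.intended_context = new_context

record = LifecycleRecord(intended_context="refugee registration, region A")
record.sign_off("development", "training data audited for regional representativeness")
record.sign_off("validation", "field-tested with local partners in region A")
record.sign_off("deployment", "operators trained; appeal channel in place")

record.redeploy("refugee registration, region B")
print(sorted(record.evidence))  # only the development evidence survives
```

The design choice is that governance state is data attached to the system, not a one-way conveyor belt: recycling the tool into a new context rewinds it to the stage whose evidence still holds.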


Yasmin Afina: Thank you very much, Alexi, for this very comprehensive overview of why the life cycle management approach to the governance of AI is indeed important. I particularly like the way you ended your remarks, by noting that this is a prerequisite to ensure that these technologies will indeed build peace instead of exacerbating the sources of insecurity and instability. So, on that note, we have around 20 minutes for an open discussion. I would highly encourage those who are in the room in Oslo, and also those joining us virtually, to ask questions of our panelists, but also, building on the Slido discussions that we had earlier, where we collected your responses on what AI and international peace and security means and on the role of the multi-stakeholder community, I would encourage you to take the floor to elaborate a bit more on these answers. And if you have anything else to add, for example, we heard from Alexi the importance of local contexts: how is AI being deployed and used in your respective regions, states, or organizations to build peace and to enhance international peace and security? So, on that note, I would like to open the floor now for those who are joining in person and online. For those who are online, I will keep an eye on the Zoom. For those who are joining in person, I believe there is a microphone on the side for those joining from the floor, and for those at the center table, I think there are microphones in front of you. So I am opening the floor now, and perhaps giving you a few seconds as well to collect your thoughts and questions. The gentleman on my left, I think you have a question. Please introduce yourself and share your name and where you’re coming from. 
And if you have a question directed specifically at a speaker, please say so as well. Thank you very much.


Audience: Okay, thank you very much. My name is Francis Alaneme. I’m from the .ng domain name registry, and this is more of a comment. I know AI is widely used, it is something that a lot of people are jumping into, and it is flying everywhere; a lot of content is generated with AI. Part of what AI adoption is driving is trying to make imaginary things become real, and I think part of the algorithm should look at ways to give AI-generated content more of a signature, so that people can easily identify what is AI-generated and what humans actually generated. When you look at some video content, there is a lot of video content that you see and think is real. That kind of content can be used to pass false information, and can be used to instigate violence in some places, where you see content that is not culture-friendly or that can instigate certain thoughts in people’s minds. So I think there should be more of that kind of signature, that kind of way to identify AI-generated content versus human content. Thank you.
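[Editor's note: the "signature" Francis describes is the idea behind content provenance efforts such as the Content Authenticity Initiative and the C2PA standard mentioned later in the session: a cryptographically verifiable claim about how a piece of media was produced, bound to the media itself. The toy sketch below is not the C2PA format; real provenance systems use public-key certificates and signed manifests, whereas this illustration uses a shared-secret HMAC purely to show the two checks a verifier performs, that the claim is authentic and that the media has not been altered since the claim was made.]

```python
import hashlib
import hmac
import json

# Illustrative signing key only; real provenance schemes use public-key
# certificates so that anyone can verify without holding a secret.
SIGNING_KEY = b"demo-key-not-for-production"

def attach_provenance(media_bytes: bytes, generator: str) -> dict:
    """Bundle a claim about the media's origin with a tag over that claim."""
    claim = {
        "generator": generator,
        "sha256": hashlib.sha256(media_bytes).hexdigest(),  # binds claim to media
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    tag = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "tag": tag}

def verify_provenance(media_bytes: bytes, manifest: dict) -> bool:
    """Check both that the claim is authentic and that the media is unaltered."""
    payload = json.dumps(manifest["claim"], sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    claim_authentic = hmac.compare_digest(expected, manifest["tag"])
    media_unaltered = (
        manifest["claim"]["sha256"] == hashlib.sha256(media_bytes).hexdigest()
    )
    return claim_authentic and media_unaltered

video = b"...rendered frames..."
manifest = attach_provenance(video, generator="example-model-v1")
print(verify_provenance(video, manifest))              # True
print(verify_provenance(b"edited frames", manifest))   # False: media altered
```

The limitation worth noting, and part of why this remains a policy question rather than a solved one, is that such a signature only proves what a cooperating tool attached; content produced by tools that do not participate carries no manifest at all.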


Yasmin Afina: Thank you very much, sir, for outlining the importance of ensuring some sort of signature, or at least a means to verify what is AI-generated and what is not, and perhaps the security implications of not being able to differentiate between the two. I see that we have a hand raised virtually by Bagus Jatmiko, who I know is joining us very late from Indonesia. Perhaps may I ask our IT technicians to


Bagus Jatmiko: Display him on the screen. And Bagus, please, you have the floor now. Thank you very much, Bagus. If you can, please unmute yourself and turn on your camera if you would like to intervene. Okay, can you hear me now? Yeah, we can hear you. Okay, thank you. Sorry, oh, there you go. Sorry for the connection and the technical issues. So, I see very familiar faces in this conference, and I would like to bring some concerns and also a question to the panelists, given that I work in the defense sector, where AI is being used exponentially. I also talked about this during the ICRC conference, virtually, last week, if I’m not mistaken, and I raised a concern about how AI is being used in a way that some commanders or users within the military domain are unaware of the possibility that AI might be corrupted during use, what we call emergent misalignment, or misalignment within the system itself. I would also like to raise a concern about the possibility, or maybe not just the possibility but the tendency, of AI being sycophantic, in the sense that it provides the answers that the user would like to seek. On the battlefield, that kind of tendency would be very risky and maybe dangerous, as it can mislead the user or the commander into taking decisions that might increase the risk for humanity and for the civilian population. And this goes to my question: how would you all give attention and focus to how AI is being used, especially in the military domain? This is for all the panelists. And how would you encourage more responsible use of AI within the military domain? 
Because if I relate it to humanitarian law, in the fog of war, under conditions of uncertainty, most commanders would like to see the quick answers provided by AI decision-support systems, and they may just ignore the existence of the law, humanitarian law in this case. Thank you, Bagus. Perhaps may I please ask that you also introduce yourself, for those who are not familiar with your work and where you’re coming from? Yes, sorry for not introducing myself. My name is Commander Bagus Jatmiko; I am an Indonesian Navy officer, and I am also a researcher in AI and information warfare, which brings my attention to the use of AI within the military domain and the defense sector. Thank you.


Yasmin Afina: Thank you very much, Bagus. I see a gentleman here who would like to ask a question or perhaps share some comments, and then I will get back to our panelists for reactions or answers to the questions. Gentleman, please. Good evening, everybody. Allow me to raise a very short question in the beginning. Who


Audience: is responsible for the mitigation of AI risks? This is a very short question for me. Is it high tech big companies who are creating AI and developing AI? Because it is not in the hand of the government, especially in the developing countries right now. So let me have the big issue here. While I’m following the rapid development advancement of AI, especially in fields which are related to security, I am terrorized, you know, because we are I am not going to mention or to name any country now, but we can see how AI is being used in current ongoing wars. And the victims behind the use of AI technology in autonomous weapon, for example, how civilians are being killed without accountability. So for this reason, looking from a developing country’s perspective, which have nothing to do in their hands right now, it is all in the hands of the big tech companies which exist in the powerful countries. So this is my issue here, how we are going to mitigate ourselves this risk. Thank you. May I please ask that you introduce yourself? Sorry, can you please introduce yourself in the microphone just to sort of we know who you are, and where you’re coming from? My name is George Aden Maggett. I am a judge at the Supreme Court of Egypt and I am also an honorary professor of law at Durham University, the UK.


Yasmin Afina: Thank you. Perfect. Thank you very much, sir. So, in the interest of time, I realize that we have 10 to 15 minutes left, so I just want to check in the room, virtually or in person, if there are any further questions, comments or remarks for our panelists, or anything to add to the discussions today. If not, I know that, Alexi, you have also put in the chat that there is an ongoing project on adding signatures to AI-generated content, the Content Authenticity Initiative, which you might be interested in, and perhaps, Alexi, you will be able to elaborate a bit more. Before I give the floor back to our panelists, I do note a question from Rowan Wilkinson from Chatham House. Hi, Rowan. Many policymakers are discussing the importance of AI openness in civilian contexts, including in meeting safety commitments through open-source software and community oversight. Does the panel foresee a policy shift around openness in the AI peace and security domain? So, we have quite a few questions, remarks and reflections. We had a question surrounding AI authenticity and the implications of not knowing what is generated or not, and the destabilizing effects. We had a question from Bagus on the commander and the human-machine interaction in the battlefield, and how we make sure that the use of AI remains responsible in the hands of the commander, particularly under situations of pressure, such as in the battlefield. We had a question on who should be responsible for the mitigation of the risks of AI, particularly in the light of ongoing conflicts today and the implications for civilians. And finally, we have a question on openness in the AI peace and security domain. So perhaps may I ask, in the interest of time, Jingjie to start us off with three to four minutes. Please feel free to answer any of the questions, or to add any other element based on what you have heard today.
Jingjie, please, you have the floor. Thank you for your questions. So first, the question from Bagus. I think the first thing we need to do is knowledge sharing, because I assume that in the military, when you deploy an AI


Jingjie He: system, you develop it first as a project and then you deploy it. Many times, based on my experience from the civilian field and industry, the one who makes the decision whether to use or deploy a system, or to complete the project, may not always be the one who understands the technology. So knowledge sharing is very important. Transparency is important. The people who make the decisions need to understand the technology perspective. The second point I want to make is the importance of incentives. It is very important for militaries to understand that AI is not only a force multiplier but also a threat multiplier. It is not only about the risk to civilians; it is also about the increased risk to your own combatants when you have a poorly designed, unverified AI system with uncertainties, one you cannot be confident about, and a whole black box. This kind of incentive is very important. With this understanding, I believe many militaries will be more incentivized to improve their systems. A quick answer to the second question, who is responsible for AI governance? I think everyone. I am sure Michael will say more from the industry point of view, but I do sense that everyone is responsible for raising a voice, being sensitive to the importance of AI governance, and incentivizing or promoting dialogue about AI risks. And on the third question, about AI openness, I am actually not sure what AI openness means here, because if you are talking about openness of the algorithm, I think it is very difficult. In industry, when we go for due diligence or technology scouting and ask a company what its core technology is, they are likely to tell us it is their own IP and they will not reveal it. But look, we have a good system and it works perfectly; just believe our results. This is what happens.
So if you are talking about openness of the AI algorithm, I have a huge question mark over the feasibility and possibility of this kind of solution. Thank you.


Yasmin Afina: Thank you, Jingjie He, for your very sharp response, and also for the fact that you are joining us from very far away and it is very late at home. So thank you very much for this. Michael, would you like to intervene now? Thank you, Yasmin. Happy to do so. One flag to share: Zoom keeps telling me that my internet connection is unstable, so if I pause at any moment, that will be the reason why.


Michael Karimian: In answer to the questions. On Francis's question on AI signatures, I appreciate the question. I think one way of thinking about this is: are there specific use cases where we really need AI signatures, and other use cases where we would be comfortable without them? I suspect that is possibly the direction we will go in, but of course the proliferation of AI solutions means that there will always be solutions or actors that circumvent them anyway; that does not downplay the importance of having AI signatures in the first place. To Bagus's question on emergent misalignment and AI-supported decision support systems: Bagus, your question really points to something we have certainly discussed in the context of the Roundtable for AI, Security and Ethics, and that is the current challenge of having access to meaningful and trustworthy use cases to understand, in a very effective way, how AI is actually being used. The academic community, civil society, industry and governments are at the moment relying on a number of examples which partly come from hearsay, or perhaps just are not that reflective of how AI is being used in security domains. But I am hopeful that as AI is further adopted in various security domains, transparency around use cases will improve, and then we will be better able to understand their implications. To the question from our colleague from Egypt and Durham University: George is right. Who has responsibility? Everyone. From a human rights perspective, states have the duty to protect, respect and fulfil human rights; industry has a corporate responsibility to respect human rights; and individuals have a right to remedy when their rights have been harmed.
Focusing specifically on the role of industry, what that means is that all companies, under the UN Guiding Principles on Business and Human Rights, have a responsibility to ensure that their products and services are not being used in ways that facilitate or contribute to serious human rights abuses. This means that any engagement with a government, ministry of defence or armed forces, especially in the context of an ongoing armed conflict or where there are credible allegations of international law violations, must be subject to rigorous due diligence and clear red lines on misuse, and where risks cannot be mitigated there should be a refusal to provide or maintain support. That is not new; it has been an established position for a number of years now, but of course what matters is implementation. And lastly, to Rowan's question: yes, I would hope so, that we will see more openness. One example of that is the REAIM process which I mentioned earlier, the Responsible AI in the Military Domain process, hosted last year in South Korea and, in the next six to twelve months, to be hosted in Spain. So any stakeholders in the audience who are interested in this should certainly keep an eye on the REAIM process. Thank you very much, Michael, for this, and for the importance of differentiating principles from actual implementation, and the importance as well of human rights in providing a framework to ensure that everyone is indeed held accountable, and that civilians have a right to remedy, including in the context of AI for international peace and security. Finally, Alexi, would you like to make any concluding remarks and respond to any of the questions raised?


Alexi Drew: Thank you, I will run through these nice and quickly in the interest of giving people their time back. I would like to start with the silver lining on signatures and the demarcation of authentic content from AI-generated content. As someone who used to work in arms control, it is worth bearing in mind that every time a new threat arises, or a new innovation creates a threat, a counter is very quickly developed against it, and that is just as true for the identification of inauthentic or machine-generated content as for any other risk of this type we have seen before. So I am encouraged to see that it is not just the CAI that exists in this space. There are a number of initiatives coming, with technical and non-technical means, to give us the means, as Michael says, in critical circumstances, of being able to identify when content has been generated as opposed to when it has been created and is authentic. On Bagus's question referencing compliance and command, this is actually part of what I was referencing when I was talking about the need to ensure that governance, ethical, legal and economic, is built in at every stage of the design life cycle. If we take IHL as part of that governance, a system should be designed, trained, tested, authenticated and verified, with its data selected, with its need to be compliant with IHL in mind. If it is not, that is when you introduce the risk of designing something which is either completely incompatible with IHL or open to being used in a manner which is non-compliant. If you treat the life cycle effectively, incorporating IHL across it rather than in just one section, say the assurance stage, and do not treat it as a checkbox exercise, then you can actually constrain the risks of that going wrong.
That being said, there are other components to this. Any user of a system should be trained to understand what it can and cannot do, what it looks like when it fails, what circumstances have led to its failure in testing, and what influences its level of accuracy, so that they can make informed decisions as to how much, and whether in fact, to trust an AI-based system, be it a decision-making system, strategic or tactical, or a direct weapon system. And it should also be accepted that in some cases these tools simply are not used, because it is understood that, even though these systems have been designed with IHL baked into each part, in the context you are seeking to apply them they simply cannot be compliant with IHL. On the subject of trust around these tools: some LLMs in particular have been found to be very uncritical of their human users, and yes, that is a problem. They are not designed to be critical and to push back on their human users. They are designed to be supportive administrative assistants that say yes a lot, and that should be understood as a potential failing, with implications for how a military should design, create doctrine for, and then deploy a tool. Moving quickly on to who owns this, where the responsibility lies: I agree with both previous speakers, Jingjie and Michael. Everyone owns the responsibility here, but despite the complex ownership structures between the private sector, the public sector, and the global North and South, even those with seemingly less control have a lever they can use, be it taking part in global standard-setting organizations, technical or non-technical, or procurement strategies and procurement standards. If governance is critical, IHL, ethical, social and economic, then it should be a condition of procurement from government to suppliers.
So then, even if the government does not own the system or the services required to operate it, say it is AI as a service, the system has to be designed to these standards, and it is legally necessary to do so to meet the procurement standards. Finally, on the point of openness, I am going to try to be positive with a bit of negativity. I think we are at a point where innovation is being posed as a solution to our increasing state of insecurity and risk to peace, and it has been posited as a zero-sum game between innovation and security, or between insecurity and constraints on innovation. That is not the case. You can in fact have security and innovation with adherence to values.


Yasmin Afina: Thank you very much indeed, Alexi, for ending us on a positive note, and also to Jingjie He for adding the point that Chinese social media also attach signatures to AI-generated content, which adds to the importance of collective responsibility in ensuring responsible AI and international peace and security. I note the importance of incentivization raised by Jingjie He, the importance of human rights as a framework, and compliance with IHL. On that hopefully positive note, we are ending this workshop. Thank you very much, everyone, for joining us today, either online or in person. Please join me in giving a round of applause to our speakers online. Thank you very much.



Jingjie He

Speech speed: 121 words per minute
Speech length: 833 words
Speech time: 412 seconds

Inclusive engagement across stakeholders is essential for effective global AI governance because technological challenges require interdisciplinary approaches

Explanation

Jingjie He argues that while technological challenges can often be addressed through technological solutions, identifying the true nature of AI challenges requires an interdisciplinary and multi-stakeholder approach. This inclusive approach ensures that a wide range of knowledge, expertise, and perspectives are taken into account in shaping responsible AI policies.


Major discussion point

Multi-stakeholder Engagement in AI Governance


Topics

Legal and regulatory | Human rights principles


Agreed with

– Michael Karimian
– Yasmin Afina

Agreed on

Multi-stakeholder engagement is essential for effective AI governance


UN-sponsored platforms provide neutral, depoliticized spaces for knowledge-sharing beyond geopolitical constraints

Explanation

She emphasizes that UN-sponsored platforms like UNIDIR’s RAISE and IGF play a critical role in enabling multi-stakeholder engagement. What sets them apart from state-centric mechanisms is their unique ability to provide neutral, depoliticized, and inclusive spaces where knowledge-sharing and confidence-building can take place beyond geopolitical tensions.


Evidence

References to UNIDIR’s RAISE platform, IGF, and Global Digital Compact as examples of such platforms


Major discussion point

Multi-stakeholder Engagement in AI Governance


Topics

Legal and regulatory | Human rights principles


Everyone has responsibility for AI governance and raising awareness about AI risks

Explanation

When asked who is responsible for AI governance, Jingjie He responds that everyone has a role to play. She emphasizes the importance of raising voices, being sensitive about AI governance importance, and promoting dialogue about AI risks.


Major discussion point

Multi-stakeholder Engagement in AI Governance


Topics

Legal and regulatory | Human rights principles


Agreed with

– Michael Karimian
– Alexi Drew

Agreed on

Universal responsibility for AI governance


AI enhances satellite imagery analysis for conflict monitoring, as demonstrated by Amnesty International’s Darfur project

Explanation

Jingjie He provides a concrete example of how AI can foster international peace and security through satellite remote sensing. She explains that AI and machine learning are being applied to enhance analytical efficiency of satellite imagery for monitoring conflicts.


Evidence

Amnesty International’s collaboration with Element AI and 29,000 volunteers to develop tools for automatically analyzing satellite imagery for monitoring conflicts in Darfur


Major discussion point

AI Applications for Peace and Security


Topics

Cybersecurity | Human rights principles


AI can empower international peace, security, and non-proliferation missions through improved analytical capabilities

Explanation

She argues that AI applications in satellite imagery analysis represent just one example of many ways AI can benefit international peace, security, and non-proliferation missions. However, she also acknowledges the challenges that come with these applications.


Evidence

References her previous research showing challenges of adversarial attacks in such systems


Major discussion point

AI Applications for Peace and Security


Topics

Cybersecurity | Human rights principles


Knowledge sharing between technology developers and decision-makers is crucial in military contexts

Explanation

Jingjie He emphasizes that in military deployments of AI systems, the people making decisions about deployment may not always be those who understand the technology. She stresses the importance of transparency and knowledge sharing so decision-makers can understand the technology perspective.


Evidence

References her experience from civilian field and industries where decision-makers often don’t understand the technologies they’re deploying


Major discussion point

Military AI and Human-Machine Interaction


Topics

Cybersecurity | Legal and regulatory


AI serves as both force multiplier and threat multiplier, increasing risks for combatants with poorly designed systems

Explanation

She argues that militaries need to understand that AI is not only a force multiplier but also a threat multiplier. Poorly designed, unverified AI systems with uncertainties create risks not just for civilians but also for the military’s own combatants when they cannot be confident about the system’s performance.


Major discussion point

Military AI and Human-Machine Interaction


Topics

Cybersecurity | Legal and regulatory


Algorithm openness faces feasibility challenges due to intellectual property concerns

Explanation

When discussing AI openness, Jingjie He expresses skepticism about the feasibility of algorithm transparency. She explains that in industry due diligence, companies typically claim their core technology as intellectual property and refuse to reveal algorithms, instead asking clients to trust their results.


Evidence

Her experience in industry technological scouting where companies refuse to reveal their core algorithms, claiming them as IP


Major discussion point

Technical Challenges and Risks


Topics

Legal and regulatory | Intellectual property rights


Disagreed with

– Michael Karimian

Disagreed on

Feasibility of AI algorithm transparency and openness


Adversarial attacks make AI systems more vulnerable and discussions more challenging

Explanation

She acknowledges that there are potential challenges with AI applications in peace and security contexts, specifically mentioning adversarial attacks as a vulnerability that makes AI systems more susceptible to manipulation and makes governance discussions more complex.


Evidence

References her previous research on adversarial attacks in AI systems


Major discussion point

Technical Challenges and Risks


Topics

Cybersecurity | Network security



Michael Karimian

Speech speed: 149 words per minute
Speech length: 1198 words
Speech time: 480 seconds

Industry has critical role as developers and deployers, plus proactive stakeholders in establishing norms and safeguards

Explanation

Michael Karimian argues that industry has a critical role not just as developers and deployers of AI technology, but also as proactive stakeholders in establishing norms, standards, and safeguards to mitigate risks associated with AI in security contexts. He emphasizes that industry’s practical contributions to governance cannot be overstated.


Evidence

References the roundtable for AI security and ethics (RAISE) which has highlighted industry’s practical contributions to governance


Major discussion point

Multi-stakeholder Engagement in AI Governance


Topics

Legal and regulatory | Human rights principles


Agreed with

– Jingjie He
– Yasmin Afina

Agreed on

Multi-stakeholder engagement is essential for effective AI governance


Industry actors are first to encounter AI risks due to direct involvement in development and deployment

Explanation

He argues that industry actors are often the first to encounter and understand AI risks and vulnerabilities because of their direct involvement in developing and deploying these technologies. This puts industry players in a unique position to provide expertise on technical feasibility, operational impacts, and risk mitigation strategies.


Major discussion point

Industry Responsibility and Due Diligence


Topics

Legal and regulatory | Human rights principles


Industry must develop clear standards ensuring AI systems are transparent with accountability mechanisms

Explanation

Karimian emphasizes that industry must develop and adhere to clear standards that ensure AI systems used in security applications are transparent in their capabilities and limitations, with clearly articulated accountability mechanisms. This involves robust documentation practices, continuous monitoring, and the capability to audit AI systems.


Major discussion point

Industry Responsibility and Due Diligence


Topics

Legal and regulatory | Human rights principles


Agreed with

– Alexi Drew

Agreed on

Lifecycle approach is crucial for AI governance


Disagreed with

– Jingjie He

Disagreed on

Feasibility of AI algorithm transparency and openness


Companies have responsibility under UN guiding principles to ensure products aren’t used for human rights abuses

Explanation

He explains that under the UN guiding principles on business and human rights, all companies have a responsibility to ensure their products and services are not used to facilitate or contribute to serious human rights abuses. This means engagement with governments or armed forces, especially in conflict contexts, must be subject to rigorous due diligence and clear red lines on misuse.


Evidence

References UN guiding principles on business and human rights as established framework


Major discussion point

Industry Responsibility and Due Diligence


Topics

Human rights principles | Legal and regulatory


Agreed with

– Jingjie He
– Alexi Drew

Agreed on

Universal responsibility for AI governance


AI signatures may be needed for specific critical use cases rather than universal application

Explanation

In response to questions about AI content signatures, Karimian suggests thinking about whether there are specific use cases where AI signatures are really needed versus other use cases where they might not be necessary. He acknowledges that the proliferation of AI solutions means there will always be actors who would circumvent such measures.


Major discussion point

Content Authenticity and Misinformation


Topics

Legal and regulatory | Content policy


Agreed with

– Alexi Drew
– Francis Alaneme (Audience)

Agreed on

Need for technical solutions to AI content authenticity challenges



Alexi Drew

Speech speed: 182 words per minute
Speech length: 2065 words
Speech time: 680 seconds

All stakeholders have levers they can use, including participation in standard-setting organizations and procurement strategies

Explanation

Alexi Drew argues that despite complex ownership structures between private and public sectors and between global north and south, everyone has levers they can use. These include participating in globalized standard-setting organizations and using procurement strategies as governance tools.


Evidence

Suggests that if governance is critical, it should be a condition of procurement from government to suppliers


Major discussion point

Multi-stakeholder Engagement in AI Governance


Topics

Legal and regulatory | Human rights principles


Agreed with

– Jingjie He
– Michael Karimian

Agreed on

Universal responsibility for AI governance


Governance cannot be added as afterthought but must be designed to fit each stage of the lifecycle

Explanation

Drew emphasizes that governance is not something that can be added after the fact as an afterthought. Instead, it needs to be something designed to fit into each stage of the AI system lifecycle, from development through validation to deployment.


Major discussion point

Lifecycle Management Approach


Topics

Legal and regulatory | Human rights principles


Agreed with

– Michael Karimian

Agreed on

Lifecycle approach is crucial for AI governance


Development, validation, and deployment stages each present unique risks that compound if not properly addressed

Explanation

She provides detailed examples of how localization issues can create compounding problems across the AI lifecycle. For instance, using Global North data to train models for Global South deployment, testing outside local contexts, and deploying systems that systematically exclude marginalized communities.


Evidence

Specific examples include refugee processing systems trained on one population failing when applied to populations with different linguistic or social characteristics, and aid algorithms that exclude marginalized communities


Major discussion point

Lifecycle Management Approach


Topics

Legal and regulatory | Human rights principles | Development


Systems should be designed, trained, and tested with compliance requirements like IHL built in from the start

Explanation

Drew argues that if International Humanitarian Law (IHL) compliance is required, AI systems should be designed, trained, tested, authenticated and verified with IHL compliance in mind from the beginning. This prevents systems from being designed that are incompatible with IHL or open to non-compliant use.


Major discussion point

Lifecycle Management Approach


Topics

Legal and regulatory | Human rights principles


Lifecycle approach prevents treating governance as checkbox exercise rather than integrated process

Explanation

She warns against treating the AI lifecycle as a conveyor belt or series of checkboxes to be completed. Instead, she advocates for understanding lifecycles as requiring checks, balances, and means of ensuring risks are not compounded throughout the process.


Major discussion point

Lifecycle Management Approach


Topics

Legal and regulatory | Human rights principles


Innovation can coexist with security and adherence to values, not a zero-sum game

Explanation

Drew concludes on a positive note, arguing against the false premise that innovation and security are in a zero-sum relationship. She contends that you can have both security and innovation while maintaining adherence to values, rejecting the notion that innovation must come at the expense of security or ethical constraints.


Major discussion point

AI Applications for Peace and Security


Topics

Legal and regulatory | Human rights principles


Military users need training to understand AI system capabilities, limitations, and failure modes

Explanation

Drew emphasizes that any user of an AI system should be trained to understand what the system can and cannot do, what failure looks like, what circumstances have led to failures in testing, and what influences accuracy levels. This enables informed decisions about how much to trust AI-based tools.


Major discussion point

Military AI and Human-Machine Interaction


Topics

Cybersecurity | Legal and regulatory


Counter-innovations quickly develop against new threats, including tools for identifying machine-generated content

Explanation

Drawing from her arms control background, Drew notes that every time a new threat arises or innovation creates a threat, counters are quickly developed. She applies this principle to AI-generated content, expressing encouragement that multiple initiatives exist to identify inauthentic or machine-generated content.


Evidence

References the Content Authenticity Initiative (CAI) and notes there are multiple technical and non-technical initiatives in this space


Major discussion point

Content Authenticity and Misinformation


Topics

Cybersecurity | Content policy


Agreed with

– Michael Karimian
– Francis Alaneme (Audience)

Agreed on

Need for technical solutions to AI content authenticity challenges



Bagus Jatmiko

Speech speed: 130 words per minute
Speech length: 477 words
Speech time: 219 seconds

AI systems in military face risks of emergent misalignment and tendency to provide answers users want to hear

Explanation

Commander Bagus Jatmiko, working in the defense sector, raises concerns about AI being used exponentially in military contexts where commanders may be unaware that AI might be corrupted during use through emergent misalignment. He also notes the tendency of AI to be ‘psychopathic’ in providing answers that users want to seek rather than accurate assessments.


Evidence

His experience working in the defense sector and AI/information warfare research


Major discussion point

Military AI and Human-Machine Interaction


Topics

Cybersecurity | Legal and regulatory


Commanders may ignore humanitarian law when seeking quick AI-generated answers in fog of war

Explanation

Jatmiko expresses concern that in battlefield conditions of uncertainty and the ‘fog of war,’ commanders seeking quick answers from AI decision support systems may ignore the possibility or existence of humanitarian law. This creates risks for humanity and civilian populations.


Major discussion point

Military AI and Human-Machine Interaction


Topics

Cybersecurity | Human rights principles



Audience

Speech speed: 138 words per minute
Speech length: 490 words
Speech time: 211 seconds

AI-generated content needs signatures for identification to prevent false information and violence instigation

Explanation

Francis Alaneme from the .ng domain registry argues that AI adoption is making imaginary things seem real, and AI-generated content should have signatures so people can easily identify what is AI-generated versus human-generated. He warns that realistic AI-generated video content can be used to pass false information and instigate violence in some places.


Evidence

Examples of video content that appears real but could be culturally inappropriate or violence-instigating


Major discussion point

Content Authenticity and Misinformation


Topics

Content policy | Cybersecurity


Agreed with

– Michael Karimian
– Alexi Drew
– Francis Alaneme (Audience)

Agreed on

Need for technical solutions to AI content authenticity challenges


Big tech companies in powerful countries hold significant control while developing countries have limited influence

Explanation

Judge George Aden Maggett from Egypt’s Supreme Court raises concerns about the power imbalance in AI development and deployment. He argues that big tech companies in powerful countries control AI development while developing countries have little in their hands, leading to situations where AI-enabled autonomous weapons kill civilians without accountability.


Evidence

References current ongoing wars where AI is being used in autonomous weapons with civilian casualties


Major discussion point

Industry Responsibility and Due Diligence


Topics

Human rights principles | Legal and regulatory | Development


Y

Yasmin Afina

Speech speed

150 words per minute

Speech length

3381 words

Speech time

1344 seconds

Multi-stakeholder engagement is essential for AI governance to bridge divides and overcome competitiveness and distrust

Explanation

Yasmin Afina emphasizes that UNIDIR’s approach brings together experts from diverse countries, including China, Russia, the US, and the UK, as well as Namibia, Ecuador, Kenya, and India, to bridge divides and facilitate conversation on AI and security issues where none currently exists. The goal is to overcome competitiveness and distrust through inclusive dialogue.


Evidence

UNIDIR’s RAISE initiative bringing together experts from China, Russia, the United States, the United Kingdom, Namibia, Ecuador, Kenya, and India


Major discussion point

Multi-stakeholder Engagement in AI Governance


Topics

Legal and regulatory | Human rights principles


Agreed with

– Jingjie He
– Michael Karimian

Agreed on

Multi-stakeholder engagement is essential for effective AI governance


AI governance requires both bottom-up and top-down approaches to ensure public trust and legitimacy

Explanation

Afina argues that discussions on AI and security should not be one-way but should incorporate both bottom-up and top-down approaches. This dual approach is necessary to warrant public trust and legitimacy in AI governance processes.


Evidence

UNIDIR’s platform design for open, inclusive, and meaningful dialogue


Major discussion point

Multi-stakeholder Engagement in AI Governance


Topics

Legal and regulatory | Human rights principles


Cross-disciplinary literacy improvement is crucial for AI governance in security contexts

Explanation

Afina emphasizes the importance of improving cross-disciplinary literacy as part of multi-stakeholder engagement on AI and security issues. This reflects the complex nature of AI challenges that require understanding across different fields and disciplines.


Major discussion point

Multi-stakeholder Engagement in AI Governance


Topics

Legal and regulatory | Interdisciplinary approaches


AI governance should disrupt monopolies and ensure all voices from all layers of society are heard

Explanation

Afina advocates for using platforms like RAISE to disrupt monopolies in the hands of the few and to ensure that voices are heard from all layers of society. This reflects a commitment to democratizing AI governance rather than leaving it to a select few powerful actors.


Evidence

RAISE platform design and objectives


Major discussion point

Multi-stakeholder Engagement in AI Governance


Topics

Legal and regulatory | Human rights principles


Voluntarily funded institutes face dire fundraising situations that threaten dialogue facilitation

Explanation

Afina acknowledges the difficulty that the UN and UNIDIR face in fundraising, noting the dire situation they confront today in enabling such dialogue. As a voluntarily funded institute, UNIDIR relies on voluntary contributions, which creates sustainability challenges for important governance initiatives.


Evidence

UNIDIR’s status as a voluntarily funded institute relying on voluntary contributions


Major discussion point

Multi-stakeholder Engagement in AI Governance


Topics

Legal and regulatory | Development


AI’s unique nature requires multi-stakeholder perspectives for understanding implications on international peace and security

Explanation

Afina argues that, given the unique nature of AI technology, UNIDIR quickly understood the importance of multi-stakeholder engagement and perspectives to obtain input on AI’s implications for international peace and security. This recognition led to the establishment of platforms for inclusive dialogue.


Evidence

UNIDIR’s establishment of multi-stakeholder platforms and the RAISE initiative


Major discussion point

AI Applications for Peace and Security


Topics

Legal and regulatory | Human rights principles


Agreements

Agreement points

Universal responsibility for AI governance

Speakers

– Jingjie He
– Michael Karimian
– Alexi Drew

Arguments

Everyone has responsibility for AI governance and raising awareness about AI risks


Companies have responsibility under UN guiding principles to ensure products aren’t used for human rights abuses


All stakeholders have levers they can use, including participation in standard-setting organizations and procurement strategies


Summary

All three main speakers agree that responsibility for AI governance is shared across all stakeholders – governments, industry, civil society, and individuals – rather than being concentrated in any single entity.


Topics

Legal and regulatory | Human rights principles


Multi-stakeholder engagement is essential for effective AI governance

Speakers

– Jingjie He
– Michael Karimian
– Yasmin Afina

Arguments

Inclusive engagement across stakeholders is essential for effective global AI governance because technological challenges require interdisciplinary approaches


Industry has critical role as developers and deployers, plus proactive stakeholders in establishing norms and safeguards


Multi-stakeholder engagement is essential for AI governance to bridge divides and overcome competitiveness and distrust


Summary

There is strong consensus that effective AI governance requires inclusive participation from diverse stakeholders, bringing together different perspectives, expertise, and capabilities to address complex technological challenges.


Topics

Legal and regulatory | Human rights principles


Lifecycle approach is crucial for AI governance

Speakers

– Michael Karimian
– Alexi Drew

Arguments

Industry must develop clear standards ensuring AI systems are transparent with accountability mechanisms


Governance cannot be added as afterthought but must be designed to fit each stage of the lifecycle


Summary

Both speakers emphasize that governance considerations must be integrated throughout the entire AI system lifecycle, from development through deployment, rather than being treated as an add-on or afterthought.


Topics

Legal and regulatory | Human rights principles


Need for technical solutions to AI content authenticity challenges

Speakers

– Michael Karimian
– Alexi Drew
– Francis Alaneme (Audience)

Arguments

AI signatures may be needed for specific critical use cases rather than universal application


Counter-innovations quickly develop against new threats, including tools for identifying machine-generated content


AI-generated content needs signatures for identification to prevent false information and violence instigation


Summary

There is agreement that technical solutions are needed to address AI-generated content authenticity, though with recognition that implementation may vary by use case and that counter-measures are rapidly developing.


Topics

Content policy | Cybersecurity


Similar viewpoints

Both speakers emphasize the critical importance of knowledge transfer and transparency between those who develop AI technologies and those who make decisions about their deployment, particularly in security contexts.

Speakers

– Jingjie He
– Michael Karimian

Arguments

Knowledge sharing between technology developers and decision-makers is crucial in military contexts


Industry actors are first to encounter AI risks due to direct involvement in development and deployment


Topics

Legal and regulatory | Cybersecurity


Both speakers highlight the critical need for military personnel to understand AI system limitations and potential failure modes to make informed decisions about trust and deployment in security contexts.

Speakers

– Alexi Drew
– Bagus Jatmiko

Arguments

Military users need training to understand AI system capabilities, limitations, and failure modes


AI systems in military face risks of emergent misalignment and tendency to provide answers users want to hear


Topics

Cybersecurity | Legal and regulatory


Both speakers maintain an optimistic view that AI can be a positive force for peace and security when properly governed, rejecting the notion that innovation must come at the expense of security or ethical considerations.

Speakers

– Jingjie He
– Alexi Drew

Arguments

AI can empower international peace, security, and non-proliferation missions through improved analytical capabilities


Innovation can coexist with security and adherence to values, not a zero-sum game


Topics

Legal and regulatory | Human rights principles


Unexpected consensus

Global South representation and power imbalances

Speakers

– Yasmin Afina
– George Aden Maggett (Audience)
– Alexi Drew

Arguments

AI governance should disrupt monopolies and ensure all voices from all layers of society are heard


Big tech companies in powerful countries hold significant control while developing countries have limited influence


All stakeholders have levers they can use, including participation in standard-setting organizations and procurement strategies


Explanation

Unexpectedly, there was strong consensus across speakers from different sectors (UN, judiciary, ICRC) about the need to address power imbalances between Global North tech companies and Global South stakeholders, with practical suggestions for how developing countries can exercise influence through procurement and standards participation.


Topics

Legal and regulatory | Human rights principles | Development


Limitations of algorithm transparency

Speakers

– Jingjie He
– Michael Karimian

Arguments

Algorithm openness faces feasibility challenges due to intellectual property concerns


AI signatures may be needed for specific critical use cases rather than universal application


Explanation

Both academic and industry perspectives unexpectedly converged on the practical limitations of full algorithmic transparency, acknowledging intellectual property constraints while still supporting targeted transparency measures for critical applications.


Topics

Legal and regulatory | Intellectual property rights


Overall assessment

Summary

The discussion revealed remarkably high consensus among speakers on fundamental principles of AI governance, including shared responsibility, multi-stakeholder engagement, lifecycle management, and the need for technical solutions to content authenticity. There was also unexpected agreement on addressing Global South representation and practical limitations of algorithmic transparency.


Consensus level

High level of consensus with significant implications for AI governance frameworks. The agreement across diverse stakeholders (academic, industry, humanitarian, military, judicial) suggests these principles have broad legitimacy and could form the foundation for effective global AI governance mechanisms. The consensus on shared responsibility and multi-stakeholder approaches particularly validates current UN and multilateral efforts in this space.


Differences

Different viewpoints

Feasibility of AI algorithm transparency and openness

Speakers

– Jingjie He
– Michael Karimian

Arguments

Algorithm openness faces feasibility challenges due to intellectual property concerns


Industry must develop clear standards ensuring AI systems are transparent with accountability mechanisms


Summary

Jingjie He expresses strong skepticism about algorithm transparency due to IP concerns and industry practices of protecting core technology, while Michael Karimian advocates for transparency standards and accountability mechanisms in AI systems used in security applications.


Topics

Legal and regulatory | Intellectual property rights


Unexpected differences

Practical implementation of AI transparency in security contexts

Speakers

– Jingjie He
– Michael Karimian

Arguments

Algorithm openness faces feasibility challenges due to intellectual property concerns


Industry must develop clear standards ensuring AI systems are transparent with accountability mechanisms


Explanation

This disagreement is unexpected because both speakers are advocates for responsible AI governance, yet they hold fundamentally different views on whether transparency is achievable. Jingjie He’s practical experience leads her to question feasibility, while Michael Karimian’s industry perspective emphasizes the necessity and possibility of transparency standards.


Topics

Legal and regulatory | Intellectual property rights


Overall assessment

Summary

The discussion shows remarkably high consensus among speakers on fundamental principles of AI governance, with only one significant disagreement on algorithm transparency feasibility. Most differences are about emphasis and approach rather than fundamental disagreement.


Disagreement level

Low level of disagreement with high implications – the transparency debate touches on core tensions between security, commercial interests, and accountability that are central to AI governance in security contexts. The consensus on multi-stakeholder responsibility suggests strong foundation for collaborative approaches, but the transparency disagreement highlights practical implementation challenges that could impede progress.



Takeaways

Key takeaways

Multi-stakeholder engagement is essential for effective AI governance in security contexts, requiring inclusive participation from governments, industry, civil society, and international organizations


Industry has a critical responsibility as both developers and deployers of AI technology, with obligations under UN guiding principles to prevent human rights abuses


Lifecycle management approach is crucial – governance must be integrated at development, validation, and deployment stages rather than added as an afterthought


AI serves as both a force multiplier and threat multiplier in military contexts, requiring careful consideration of risks to both civilians and combatants


Everyone shares responsibility for AI governance, though different stakeholders have different levers of influence including procurement standards and participation in standard-setting organizations


AI has positive applications for peace and security, such as enhancing satellite imagery analysis for conflict monitoring and humanitarian purposes


Content authenticity and AI signature identification are important for preventing misinformation and violence instigation


Knowledge sharing between technology developers and decision-makers is crucial, especially in military contexts where commanders may not fully understand AI system limitations


Resolutions and action items

Continue supporting and participating in UN-sponsored platforms like UNIDIR’s RAISE and the REAIM process for responsible AI in the military domain


Implement robust due diligence processes across the AI lifecycle from design through deployment and decommissioning


Develop clear standards ensuring AI systems used in security applications are transparent with accountability mechanisms


Provide training for military users to understand AI system capabilities, limitations, and failure modes


Integrate compliance requirements like International Humanitarian Law (IHL) into each stage of AI system development rather than treating it as a checkbox exercise


Support capacity-building initiatives particularly in regions where regulatory frameworks are still emerging


Unresolved issues

Funding sustainability for UN-sponsored AI governance platforms and multi-stakeholder initiatives


Technical feasibility of requiring algorithm openness due to intellectual property concerns


Power imbalance between big tech companies in developed countries and developing nations with limited influence over AI governance


Lack of meaningful and trustworthy use cases to understand how AI is actually being used in security domains


How to effectively implement AI signatures universally versus only for specific critical use cases


Addressing emergent misalignment and AI systems’ tendency to provide answers users want to hear rather than critical assessment


Ensuring compliance with humanitarian law in high-pressure battlefield situations where commanders seek quick AI-generated answers


Suggested compromises

Focus AI signature requirements on specific critical use cases rather than universal application across all AI-generated content


Balance innovation with security through integrated governance approaches rather than viewing them as zero-sum trade-offs


Combine technical and non-technical means for identifying machine-generated content rather than relying solely on one approach


Use procurement standards as leverage for governance compliance even when governments don’t own the AI systems or services


Develop counter-innovations and defensive measures alongside AI advancement to address emerging threats


Thought provoking comments

AI is not only a force multiplier, but also a threat multiplier. It is not only about the risk of civilians. It’s also about increasing risk of your own combatants when you have a poorly designed, unverified AI system with uncertainties and you cannot be confident about it and there’s a whole black box.

Speaker

Jingjie He


Reason

This comment reframes the AI security discussion by highlighting that AI risks aren’t just external threats to civilians, but internal risks to military forces themselves. The ‘threat multiplier’ concept introduces a crucial dual perspective that challenges the common narrative of AI as purely advantageous in military contexts.


Impact

This shifted the conversation from viewing AI governance as primarily about protecting others to recognizing it as essential for protecting one’s own forces. It provided a strategic incentive framework that could motivate military adoption of responsible AI practices based on self-interest rather than just ethical obligations.


Governance is not something that can be added on after the fact. It’s not an afterthought. It needs to be something which is designed to fit in each stage of the life cycle… we have a series of things which is simply checked off as complete without sufficient evidence to the fact without the ability to understand is this system suitable for what it’s being used for.

Speaker

Alexi Drew


Reason

This fundamentally challenges the conventional approach to AI governance by arguing against treating it as a compliance checklist. It introduces the critical insight that governance must be embedded throughout the development process, not retrofitted, and warns against the dangerous illusion of safety through checkbox exercises.


Impact

This comment elevated the technical discussion to a more sophisticated understanding of systemic governance challenges. It influenced subsequent speakers to address implementation gaps and moved the conversation from ‘what should be done’ to ‘how governance actually fails in practice’ and why current approaches are insufficient.


Who is responsible for the mitigation of AI risks? Is it high tech big companies who are creating AI and developing AI? Because it is not in the hand of the government, especially in the developing countries right now… we can see how AI is being used in current ongoing wars. And the victims behind the use of AI technology in autonomous weapon, for example, how civilians are being killed without accountability.

Speaker

George Aden Maggett (Egyptian Supreme Court Judge)


Reason

This comment powerfully highlighted the global power imbalance in AI governance and connected abstract policy discussions to real-world consequences. Coming from a judicial perspective from the Global South, it brought urgent moral clarity about accountability gaps and the disconnect between those who develop AI and those who suffer its consequences.


Impact

This intervention fundamentally shifted the tone from technical optimization to urgent ethical accountability. It forced all subsequent speakers to address the responsibility question directly and grounded the abstract governance discussion in current conflict realities. It also highlighted the Global South perspective that had been somewhat absent from the technical discussions.


I bring concern about how AI is being used in a way that some of the commander or the user within the military domain is unaware of the possibility that AI might be corrupted during the use… And I also would like to bring the concern about the possibility of… AI being psychopath in a way that… would provide the answers that the users would like to seek. And being in the battlefield, that kind of tendency would be very, in a way, very risky and maybe dangerous.

Speaker

Commander Bagus Jatmiko (Indonesian Navy)


Reason

This comment introduced the critical concept of AI systems potentially being designed to tell users what they want to hear rather than what they need to know, especially dangerous in high-stakes military decisions. The ‘psychopath’ characterization, while provocative, highlighted how AI systems lack genuine critical thinking and may enable confirmation bias in life-or-death situations.


Impact

This shifted the discussion from technical reliability to psychological and cognitive risks in human-AI interaction. It introduced the concept of AI as potentially manipulative rather than just unreliable, adding a new dimension to the governance challenge that subsequent speakers had to address in their responses about training and system design.


You can in fact have security and innovation with adherence to values… innovation is being posed as a solution to our increasing state of insecurity and a risk to peace. And it’s been posited as a zero-sum game between innovation and security or insecurity and constraint on innovation. That is not the case.

Speaker

Alexi Drew


Reason

This comment directly challenged the false dichotomy often presented in AI policy discussions – that we must choose between innovation and safety/ethics. It reframed the entire governance challenge as a design problem rather than a trade-off, suggesting that responsible development is not inherently constraining but rather a different approach to innovation.


Impact

This provided a positive, solution-oriented conclusion that synthesized the various concerns raised throughout the discussion. It shifted the final tone from problem-focused to possibility-focused, suggesting that the governance challenges discussed were solvable through better design rather than fundamental limitations on AI development.


Overall assessment

These key comments transformed what could have been a technical policy discussion into a nuanced exploration of power, accountability, and practical implementation challenges. The progression moved from technical considerations (lifecycle management, signatures) to strategic reframing (threat multiplier concept), to urgent moral questions (Global South accountability concerns), to psychological risks (AI manipulation), and finally to a synthesis that rejected false trade-offs. The most impactful comments came from practitioners with direct experience (military officer, judge) who grounded abstract governance concepts in real-world consequences. This created a discussion that was both technically informed and ethically urgent, with each major intervention building complexity and shifting the conversation toward more fundamental questions about power, responsibility, and the human costs of AI deployment in security contexts.


Follow-up questions

How can we make multi-stakeholder AI governance platforms like RAISE more sustainable and address funding challenges?

Speaker

Jingjie He


Explanation

She noted that platforms like RAISE face funding issues and sustainability concerns, emphasizing this should be a collective effort requiring more resources and contributions from all stakeholders.


How can we better address adversarial attacks on AI systems used for peace and security monitoring?

Speaker

Jingjie He


Explanation

She mentioned that adversarial attacks pose challenges to AI systems used in satellite imagery analysis for conflict monitoring, making discussions more complex and requiring further research.


What specific technical standards and accountability mechanisms should be developed for AI systems in security applications?

Speaker

Michael Karimian


Explanation

He emphasized the need for clear standards ensuring transparency in AI capabilities and limitations, with robust documentation, monitoring, and auditing capabilities.


How can we develop more effective technical, ethical, and humanitarian governance that intersects with all stages of the AI lifecycle?

Speaker

Alexi Drew


Explanation

She stressed the need for governance mechanisms that work across development, validation, and deployment stages rather than being added as an afterthought.


How can AI-generated content be reliably identified and distinguished from human-generated content to prevent misinformation and violence?

Speaker

Francis Alaneme


Explanation

He raised concerns about AI-generated video content being used to spread false information and instigate violence, emphasizing the need for signature systems to identify AI-generated content.


How can we address emergent misalignment and the risk of AI systems becoming ‘psychopathic’ in military decision-making contexts?

Speaker

Commander Bagus Jatmiko


Explanation

He expressed concern about AI systems potentially being corrupted or misaligned during use in battlefield conditions, and the tendency of AI to provide answers users want to hear rather than accurate assessments.


Who should be held responsible for mitigating AI risks, particularly when big tech companies from powerful countries control the technology while developing countries bear the consequences?

Speaker

Judge George Aden Maggett


Explanation

He raised concerns about accountability for AI-related civilian casualties in current conflicts and the power imbalance between tech companies in developed countries and affected populations in developing countries.


Will there be a policy shift toward greater AI openness in peace and security domains, similar to civilian contexts?

Speaker

Rowan Wilkinson


Explanation

The question explores whether open-source approaches and community oversight models used in civilian AI safety could be applied to AI systems used for peace and security purposes.


How can we improve access to meaningful and trustworthy use cases to better understand how AI is actually being used in security domains?

Speaker

Michael Karimian


Explanation

He noted that the academic community, civil society, industry, and governments currently rely on limited examples that may not be reflective of actual AI use in security contexts.


How can procurement standards be used as a lever to ensure AI systems comply with international humanitarian law and ethical standards?

Speaker

Alexi Drew


Explanation

She suggested that even countries without direct control over AI development could use procurement conditions to enforce governance standards, requiring further exploration of implementation mechanisms.


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Day 0 Event #263 Public Service Media and Meaningful Digital Access


Session at a glance

Summary

This discussion at the Internet Governance Forum focused on the role of public service media in providing meaningful digital access, particularly in contexts where internet censorship and digital authoritarianism are growing concerns. The session was organized by the BBC and Deutsche Welle, with panelists including Patrick Leusch from Deutsche Welle, Abdallah Al Salmi from the BBC, Paula Gori from the European Digital Media Observatory, and Poncelet Ileleji from Joko Labs in Gambia.


The conversation began by distinguishing between basic internet connectivity and meaningful digital access, which encompasses reliable and affordable connectivity, appropriate devices, digital literacy, relevant local content, and safe digital environments. Leusch presented how international broadcasters face increasing censorship challenges, particularly in countries like Iran, Russia, and China, requiring sophisticated circumvention technologies to reach audiences seeking independent information during crises. Deutsche Welle and the BBC invest heavily in tools like VPNs, proxy servers, and mirror sites to bypass censorship, with legal justification based on Article 19 of the Universal Declaration of Human Rights concerning access to information.


Gori emphasized the connection between meaningful connectivity and disinformation, noting that public service media serve as crucial solutions to combat false information while maintaining transparency in ownership and funding. She highlighted how crisis situations demonstrate the vital role of trusted public media sources. Ileleji brought a grassroots perspective from Africa, advocating for strengthening community radio stations and local media partnerships to serve rural populations who lack broadband access but rely on radio for essential information about health, education, and agriculture.


The discussion revealed that current regulatory frameworks, including the EU’s Digital Services Act, face implementation challenges, particularly regarding data access for researchers studying platform algorithms. Participants agreed that a strengthened multi-stakeholder approach, updated international human rights frameworks, and better support for local media infrastructure are essential for achieving meaningful digital access globally.


Key points

## Major Discussion Points:


– **Meaningful Digital Access vs. Basic Connectivity**: The distinction between simply having internet access and having meaningful digital access, which includes reliable/affordable connectivity, appropriate devices, digital literacy, relevant local content, and safe digital environments. This concept goes beyond just being connected to focus on the quality and utility of the internet experience.


– **Internet Censorship and Circumvention Technologies**: How authoritarian governments are increasingly blocking access to independent media content, and the technical and ethical challenges faced by public service broadcasters like BBC and Deutsche Welle in developing circumvention tools (VPNs, proxies, mirror servers) to reach audiences in countries like Iran, Russia, and China.


– **Community-Level Media Infrastructure**: The critical role of community radio stations, particularly in rural Africa, as intermediaries for delivering reliable information to populations with limited broadband access. The need to strengthen these local media outlets through partnerships with international broadcasters and digital literacy training.


– **Platform Transparency and Algorithmic Accountability**: The challenges researchers and media organizations face in understanding how social media algorithms work, the lack of data access despite regulations like the EU’s Digital Services Act, and how algorithmic preferences for emotional/sensational content can amplify disinformation.


– **Regulatory and Policy Framework Gaps**: The need to update international frameworks like Article 19 of the UN Declaration of Human Rights, strengthen multi-stakeholder governance models, and implement existing policies like the Global Digital Compact to better protect internet freedom and access to information.


## Overall Purpose:


The discussion aimed to explore how public service media can enhance meaningful digital access globally, examining both the technical challenges of reaching audiences under authoritarian censorship and the broader policy frameworks needed to ensure equitable, safe, and useful internet access for all populations.


## Overall Tone:


The discussion maintained a professional, collaborative tone throughout, with participants sharing expertise and building on each other’s points constructively. While addressing serious challenges like censorship and disinformation, the speakers remained solution-oriented and emphasized the importance of multi-stakeholder cooperation. The tone was urgent but not alarmist, reflecting both the gravity of digital rights issues and optimism about potential solutions through coordinated action.


Speakers

– **Mr. Patrick Leusch** – Head of European Affairs at Deutsche Welle (Germany’s international broadcaster), Session Moderator


– **MODERATOR** – Online moderator (Oliver Ings, Distribution Manager at BBC)


– **Audience** – Various audience members and participants


– **Mr. Poncelet Ileleji** – CEO of Joko Labs in Banjul, Gambia; ICT expert with extensive experience in ICT development


– **Giacomo Mazzone** – Representative from Eurovision


– **Mr. Abdallah Alsalmi** – Policy Advisor at the BBC, Session Co-organizer


– **Ms. Paula Gori** – Secretary General and Coordinator of the European Digital Media Observatory (EDMO)


**Additional speakers:**


– **Thora** – PhD researcher from Iceland studying how very large online platforms (VLOPs) and very large online search engines (VLOSEs) are undermining democracy in the EEA


Full session report

# Public Service Media and Meaningful Digital Access: IGF Session Report


## Executive Summary


This Internet Governance Forum session, organized by the BBC and Deutsche Welle, examined how public service media can provide meaningful digital access in an era of increasing internet censorship. Moderated by Patrick Leusch from Deutsche Welle, the discussion featured Mr. Abdallah Alsalmi from the BBC (participating remotely from London due to flight cancellations), Mr. Poncelet Ileleji from Joko Labs in Gambia, Ms. Paula Gori from the European Digital Media Observatory, Giacomo Mazzone from Eurovision, and Thora, a PhD researcher studying platform impacts on democracy.


The session explored the distinction between basic connectivity and meaningful digital access, examining technical circumvention strategies, community media infrastructure, and platform governance challenges. Participants revealed significant disagreements on content regulation approaches while finding common ground on the importance of multi-stakeholder governance and public service media’s crisis response role.


## Defining Meaningful Digital Access


Mr. Alsalmi opened by distinguishing meaningful digital access from simple connectivity: “We need to go beyond just simple connectivity and beyond just having a device that is connected to the internet because it’s all about the experience, it’s all about what the internet users can make of the internet.”


He clarified that while the UN’s Universal Meaningful Connectivity (UMC) provides specific development metrics, meaningful digital access focuses on the qualitative user experience and practical utility of internet services. This encompasses reliable connectivity, appropriate devices, digital literacy, relevant local content, and secure digital environments.
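The qualitative bands that the session later attaches to the ITU scores (roughly 40–50 counts as "limited", 95 and above as "target achieved") can be sketched as a simple classifier. The band names and the intermediate cut-off below are illustrative assumptions for this sketch, not ITU terminology:

```python
def umc_band(score: float) -> str:
    """Map an ITU-style UMC score (0-100) to a coarse status band.

    The 40-50 "limited" and 95+ "target achieved" thresholds follow the
    rough bands mentioned in the session; the other band names and the
    50 cut-off are illustrative, not official ITU terminology.
    """
    if not 0 <= score <= 100:
        raise ValueError("UMC scores are expressed on a 0-100 scale")
    if score >= 95:
        return "target achieved"
    if score >= 50:
        return "partial"
    if score >= 40:
        return "limited"
    return "very limited"
```

In practice one would read the per-country scores from the ITU DataHub dashboard and apply a banding like this to see where more work is needed.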


## Circumvention Technologies and Legal Framework


### Deutsche Welle’s Technical Approach


Patrick Leusch detailed Deutsche Welle’s substantial investment in circumvention technologies to reach audiences in countries like Iran, Russia, and China. The broadcaster employs VPN services, proxy servers, mirror websites, and tools like Psiphon and Tor. He highlighted their collaboration with Italian organization UNI to develop the News Media Scan tool.


Leusch provided specific examples of usage spikes during crises, including the Prigozhin coup attempt and Navalny’s death, demonstrating increased demand for alternative information sources during political upheavals. The technical work requires permanent adaptation as censorship methods vary significantly between countries and evolve continuously.
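The mirror-and-proxy fallback the report describes can be illustrated with a minimal sketch: a client tries a list of routes in order and moves on whenever one is blocked or unreachable. The function name and the injected `fetch` callable are hypothetical, introduced here for illustration; real circumvention tools rotate mirror URLs constantly and add transport-level techniques that a plain HTTP fallback cannot capture:

```python
from typing import Callable, Iterable, Optional

def fetch_with_fallback(urls: Iterable[str],
                        fetch: Callable[[str], Optional[bytes]]) -> Optional[bytes]:
    """Return the first response obtained by trying each route in order.

    Censorship typically surfaces as a connection reset, a timeout, or a
    DNS failure -- all OSError subclasses for stdlib HTTP clients -- so
    any OSError is treated as "this route is blocked, try the next one".
    """
    for url in urls:
        try:
            body = fetch(url)
        except OSError:
            continue  # blocked, throttled, or unreachable: try next route
        if body is not None:
            return body
    return None  # every route failed, e.g. during a full shutdown
```

With the standard library, `fetch` could be something like `lambda url: urllib.request.urlopen(url, timeout=5).read()`; the value of injecting it is that the fallback logic stays testable without network access.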


### Legal Justification and Challenges


Deutsche Welle’s legal team, consulting with the German Bundestag Legal Service, established justification for circumvention tools under Article 19 of the UN Declaration of Human Rights. However, Alsalmi argued for updating this framework: “Article 19 is really, I would say, is really outdated and we need to have another look at it, update it, and renew commitments to it… Any government can shut down the internet at any time without due recourse to legal background or text.”


## Community-Level Infrastructure Perspective


### African Connectivity Challenges


Mr. Ileleji provided grassroots perspective from sub-Saharan Africa, where approximately 37% of the population has broadband connectivity according to ITU statistics. He emphasized community radio’s continued importance: “I like to look at it from a grassroots level. What information do community radios are able to provide to their citizens?”


He described how community radio stations serve as intermediaries for rural populations, providing information about health, education, and agriculture while combating fake news spreading through WhatsApp. Ileleji noted that major tech companies like Meta and Google have launched connectivity projects in Africa using balloons and drones, but these often provide access to only limited websites.


### Partnership Approach


Rather than waiting for comprehensive broadband deployment, Ileleji advocated strengthening partnerships between international broadcasters and community radio stations, emphasizing digital literacy tools and training. This approach recognizes existing technological and economic constraints while building on established community media infrastructure.


## Platform Governance and Data Access


### Research Challenges


Ms. Gori highlighted obstacles in understanding platform operations despite regulations like the EU’s Digital Services Act (DSA). She expressed particular concern about AI systems: “We even don’t know the answers that Gen AI is giving to people. So whenever you ask an AI chatbot about something, it is giving you an answer and no one knows it. It is like between you and the chatbot, which is creating an additional element, probably even more scary.”


The DSA’s data access provisions remain stalled because the required Delegated Act has not yet been published, preventing researchers from accessing the platform data necessary for studying algorithmic behavior and disinformation patterns. Gori worried about creating a “two-speed system” in which only well-funded institutions can analyze platform data.


### Regulatory Approaches


Gori used a highway metaphor to explain connectivity and content regulation, advocating for risk-based regulation targeting platform operations rather than content itself. She noted increased reliance on trusted sources during COVID-19, highlighting public service media’s stabilizing role during information uncertainty.


## Content Regulation Debate


A significant disagreement emerged on content regulation approaches. Mr. Ileleji took a firm stance: “We shouldn’t have regulation on content. It goes against freedom of speech. So immediately you start trying to regulate content, then you are infringing on the rights of people.”


In contrast, Giacomo Mazzone suggested that fact-checking alone proves insufficient, questioning the effectiveness of industry pledges by organizations like the European Broadcasting Union and newspaper associations to platforms. This disagreement reflected broader tensions between combating disinformation and preserving freedom of expression.


## Multi-Stakeholder Governance


### Strengthening International Cooperation


Participants agreed on strengthening multi-stakeholder governance models. Alsalmi advocated for re-energizing local Internet Governance Forum (IGF) forums and coalition building to prevent internet fragmentation. Ileleji proposed combining Global Digital Compact implementation with World Summit on the Information Society (WSIS) review to strengthen the IGF framework.


Gori emphasized involving municipalities as key players closer to citizens in digital rights advocacy. Alsalmi stressed the importance of civil society working at local levels and engaging judicial systems when governments don’t support digital rights initiatives.


### Research and Democracy Focus


Thora, referencing Time Magazine’s “Person of the Year 2006” recognition of internet users, focused her research on how Very Large Online Platforms (VLOPs) and Very Large Online Search Engines (VLOSEs) undermine democracy in the European Economic Area. Her work examines the intersection of platform governance and democratic processes.


## Key Outcomes and Ongoing Challenges


The session revealed both consensus and significant disagreements among participants. While there was agreement on the importance of multi-stakeholder governance, opposition to internet shutdowns, and public service media’s crisis response role, fundamental disagreements emerged on content regulation approaches.


Implementation challenges persist, including delayed DSA data access provisions, capacity gaps between large and small organizations, and the ongoing technical arms race between censorship and circumvention technologies. The discussion highlighted how different regional contexts require adapted strategies rather than universal solutions.


## Conclusion


This IGF session demonstrated the complexity of achieving meaningful digital access amid increasing digital authoritarianism. The combination of high-tech circumvention strategies from international broadcasters, community-level media strengthening in developing countries, and evolving regulatory frameworks in Europe suggests that meaningful digital access requires diverse, coordinated approaches.


The path forward involves both immediate practical actions—such as implementing existing data access provisions and strengthening community media partnerships—and longer-term framework development to update international human rights law for the digital age. Success will depend on navigating tensions between competing values while maintaining collaborative approaches to digital governance challenges.


Session transcript

Mr. Patrick Leusch: A very warm welcome everyone here in the room, in workshop room two in Lillestrøm near Oslo at the IGF, and remote, wherever you sit around the globe. My name is Patrick Leusch, I’m head of European affairs at Deutsche Welle, which is Germany’s international broadcaster, and I’m very happy to moderate this session, which has been co-organized by the BBC and Deutsche Welle. So, public service media and meaningful digital access: this workshop will deal with the lessons learned so far from the policies followed by public service media to reach audiences via the internet, and the challenges they face, particularly in reaching global audiences. You might understand that, at least for Europe, for public broadcasting, internet censorship is a growing issue, though not such an important issue so far; but potentially it is, when you look at some countries that are really starting to limit access to information, to put it that way. On a global scale, however, there is a growing digital authoritarianism, and this poses a challenge for information providers, for media makers, for human rights communities. We’re talking about safe-space communication, but we’re also talking about journalism brought to audiences to a less and less free extent. So what is the link to the concept of meaningful connectivity? That will be explored in a minute, and then we will step through the different aspects of this challenge we are facing.
We will explain a little bit, practically, how international broadcasters are dealing with this problem, and then, in the second part, what’s important is to understand the policy and regulatory implications, and where this meaningful digital access needs to be strengthened from a policy, regulatory and legal framework; there is obviously plenty of room to improve, and that is what we will discuss with the following protagonists and with you, because we consider you an expert community, be you online or in the room, so you will have space to discuss among yourselves and with us, obviously. So to the panelists: first of all, as I mentioned, Abdallah Al Salmi. He’s a policy advisor at the BBC, and he was supposed to sit next to me because he’s the real organizer of this session, but his flight was first delayed and then cancelled, so no chance to come over from London. Hello Abdallah in London, a very warm welcome. I’m turning to the second speaker on screen, Paula Gori, who is the secretary general and the coordinator of the European Digital Media Observatory, EDMO. Hello Paula, thank you very much for joining. Thank you. Next, with me here is Poncelet Ileleji; he is the CEO of Joko Labs in Banjul, Gambia, and he’s an outstanding ICT expert. When you look at his LinkedIn track record, he has been a member of a lot of boards and expert groups that deal particularly with ICT in development. Thank you very much for making the way here, Poncelet. And last but not least, we have Oliver Ings; he is a distribution manager at the BBC and he is the online moderator. I hope he’s there, and we will get the questions from the audiences via Oliver. Now let’s simply start. Abdallah, give us an overview: what are we talking about when we talk about meaningful connectivity or meaningful digital access? Are you going to share your screen with your presentation yourself, or do you want me to do that?
So I’m sharing my screen now and I wanted


Mr. Abdallah Alsalmi: to ask if you could see it. Should come. So far we see you. So the IGF is telling me that they can’t see it. So I cannot see it on screen. Now we see it. Okay, perfect. All right, I’ll get started. Thank you, Patrick, for the introduction; again, I’m very happy to be here, and apologies for not being able to be physically in workshop room two. I’m going to arrive later tonight, so hopefully we will meet some of you over the week. So, to begin to talk about meaningful digital access, it’s really a good start to think of how technology and communications evangelists tended to lump all internet users into one group. So, for example, someone who can send only text messages on WhatsApp over a 2G connection is placed in the same group as someone who has super-fast broadband and can use Apple’s latest VR headset to play games. Over the last few years, some civil society groups, such as the Alliance for Affordable Internet, came up with this concept of meaningful digital access, and the idea behind it was that we need to go beyond just simple connectivity, and beyond just having a device that is connected to the internet, because it’s all about the experience, it’s all about what internet users can make of the internet. Meaningful digital access has a number of elements. The definition is not really set in stone, so it’s a bit flexible at the moment, but the first element is the reliability and affordability of the connectivity. Here we’re talking about the costs of data, which vary largely between one country and another. It’s probably getting cheaper, but still, in some geographic contexts it’s a prohibitive aspect of using the internet. The second element is appropriate devices, and the idea here is about how many devices a person has, and whether they have a keyboard on their device when they are using the internet.
So the more devices they have, and the better the specifications of those devices, the better their internet experience is going to be. Number three, which touches on the issue of the digital divide and on issues of development, which is very important, is digital literacy and skills; the UN and a large number of other organizations have been working on this, and it’s a huge subject. It also touches upon one of the points that some of our panelists will speak about today, which is disinformation. To what extent is the user able to enjoy their internet experience without being subjected to organized disinformation campaigns, whether by governments, by companies or even by individuals? Fourth is relevant content in local languages; here I think we have made huge strides, but more work needs to be done in making online content available in the languages people find it easiest to use on the internet. And last but not least, number five is safe and inclusive digital environments. Here we get into the area of cybersecurity, and into the area of access and continuity of internet access. Is there an internet shutdown? Is there censorship and blocking? All of these elements come in as requirements for meaningful digital access. Now, the UN has its own standard, which is called universal meaningful connectivity, and largely it’s very similar to meaningful digital access, but there are differences.
So universal meaningful connectivity, or UMC for short, is more of a development goal that some UN organizations, such as the ITU, the International Telecommunication Union, work on in cooperation with governments and with civil society, and the idea is to upgrade the experience of online access based on specific metrics that have to do with how many people are connected to the internet, to what extent they are able to use data on a day-to-day basis, and what purposes they use the internet for: is it for business, for social networking, or for looking for a job? The aim of universal meaningful connectivity is the same as that of meaningful digital access, in the sense that if people don’t have access to a good connection, they can’t look for a job, they can’t keep in touch with their family, they can’t express their opinions freely on the Internet. However, the difference here is that meaningful digital access is an outcome: it really relates to the experience of using the Internet and its quality, while UMC is a goal in itself, and a policy. And for the UMC as a metric, there’s a lot of data available already. If you go to the ITU data hub online, you will find a really good dashboard that shows you the scores of all the countries in the world which are members of the ITU, and where you can really see where more work needs to be done. For example, the scores go up to 100: if a country scores 40 to 50, the UMC metric is limited; if it goes all the way up to between 95 and 100, it means that the target has been achieved. And yeah, so the last slide is about this session: we’re trying to look here at how public service media work to enhance and respond to these various challenges in attaining the status of meaningful digital access. So I’m going to stop here and hand back to you, Patrick.


Mr. Patrick Leusch: Thank you very much, Abdallah, for that first introduction to set the scene a little bit. I think it’s very important to distinguish between an outcome and an objective, and I think we will come back to both of these, because we would like to look at this from a comprehensive point of view. You have mentioned different use cases connected with meaningful digital access, let’s say in the exchange between people, looking for a job, for instance, or informing themselves. From another perspective, there is also the sender side. You can say that there is one issue which is really a bigger issue: access to information on a global scale, which is being limited more and more. And I would like to jump in and show you now a little bit of what that means to public service media like the BBC, us and others; if Laura could launch my presentation, that would be great. So thank you very much. As I said, we are an international broadcaster; I think you know roughly what we do. The map of press freedom that you see here is the guiding line for what we do. We provide unbiased information for free minds, and in a similar logic the BBC and the former USAGM, at least with some of their grantees, have been working to provide independent, reliable information where there are limitations on that, particularly from local media or state media or whatever. So, like the BBC and others, we provide this information in these local languages, made by teams from these countries. So we are not reporting about Germany to Gambia or Senegal; we are informing people in Russia about what’s happening in Russia and in Ukraine, right? And obviously there is interest on the global scale. We reach 320 million users a week. And when you look at the geographical distribution, you see also that most of them are reached on continents where there is, let’s say, a limited space of information, be it by technical means, be it by market means, or be it by digital order.
We will have a closer look at Eastern Europe and Central Asia, because that is where the game is playing out at the moment when you look at censorship. Independent content that would otherwise be denied through censorship, disinformation and one-sided reporting: that’s what our final impact should be. Now, on the one hand, we are all journalists; we are used to creating great journalistic content. But what are we telling taxpayers, who pay for this great content, when we cannot get it through to the audiences? So for many years, we and others have also invested in understanding censorship and in teaching people to circumvent censorship, because otherwise they cannot access this content. And by the way, this research on circumvention, for instance, is nothing we do exclusively for our own company. We share it with the BBC and others, for instance, and it also relates to a lot of exile media, media that are outside of the country they report on. You can mention Meduza, for Russia, and others, for instance. So this is not only a matter for public service media like us; it’s a matter for a lot of free and independent media. Deutsche Welle is blocked in China, in Iran, in Egypt, in Belarus, in Russia, and in Turkey to some extent. And in 2012 we started looking into circumvention technologies. As for Iran, we are very successful, together with partners, obviously. And you understand that for a couple of days now there has been a complete shutdown in Iran, and there are expert groups working on this issue. Iran is a good example of where, over a long period of time, you really can build a skilled community that is able to access this content via a range of tools, even while the censorship is very efficient. Iranian censorship is quite efficient, let’s say. Maybe not as efficient as the Chinese one, but that’s a matter of how the Internet has been constructed.
The Chinese Internet was built as an inner Internet from the beginning, while the Iranian Internet was an open one, let’s say, but in a developing phase, with limited connections to the outside global Internet at a certain point in time, when it was easier to cut or control it. When you look at the Russian Internet, for instance, that is a different story, because this was a fully-fledged, globally interconnected Internet, which is now being censored step by step, on a testing basis too, because the Russians cannot know exactly what else still works when they cut something else, and they want to avoid that. So it’s a kind of testing, but they are moving forward step by step, and even speeding up the process to disconnect everything. So in Iran we reach millions of people on a weekly basis, and it’s not only digital natives who can use the technologies needed to access this content. So Internet censorship, that’s something you have to understand, is a complex issue. It’s technically and politically a complex issue. It relies on a variety of technologies, policies, means and methods, and it’s a permanent cat-and-mouse game to understand what precisely is the technology used to censor content, filter content, block content, or throttle content, or whatever, and then the mitigation measure is adapted to this variety of methods. These methods vary from country to country, and they vary from, let’s say, censorship policy to censorship policy. You cannot remove digital censorship from outside of the technical center, so to say, and you cannot dig holes in the censorship wall. You can try to get around the wall. That’s something that’s very important to understand. So we are not counter-hacking or counter-attacking; we try to provide tunnels, funnels, or whatever, to give people access to the Internet they are kept away from. The second condition is that people in the country want access to that content. It’s their decision at the end of the day.
We provide the content. We provide an explanation of how to access it, but it’s their decision at the end of the day. I can tell you that this poses ethical questions for those doing this kind of work, on the one hand, and it also poses legal questions: how is a public service media like Deutsche Welle or the BBC able to justify this, in front of a financial court too, and according to the law that defines the mandate of a public broadcaster like ours? What is the legal basis on which we can provide explainers to audiences on how to access VPNs, or apps that have been co-developed by IT specialists together with our specialists, that give them access to that content? What is the legal basis for that? We did research on this, and we asked the German Bundestag’s Legal Service, the Scientific Service, to give an answer, and the answer was: it is Article 19, on access to information, Article 19 of the Universal Declaration of Human Rights. That is the legal basis, and all the countries we are talking about that are censoring content from us and others have signed this declaration, and international law overrides national law. So from a legal perspective, this is safe play, simply on the basis of this article. So let’s start a discussion. What else? Just to speed up a little bit: we provide internet freedom via apps like Psiphon and Tor. The session before this one here was held by our friends from Tor; we work with them, obviously. They also host our content on their Tor servers. When you access Tor, you see our content, for instance, which is very important. We work with proxies, mirror servers and a lot of other means to give people access to our content. So we are quite skilled in reading what needs to be done to give people access to this content, because we have been doing this work for 12 or 13 years. Nevertheless, it is always a challenge, because it is costly. You need server space for mirror servers, for mirroring content, for instance, and you pay Amazon or whoever for the server space. That’s really costly. But that circumvention works.
You can see that from the access figures. This is a chart from the protests in Iran two years ago, and you see clearly where the peaks are. That’s clearly every time there was a shutdown, when there were protests, when there were limitations on the internet: people start seeking information. You can also see this chart from Russia. You see that there is a peak around a weekend in June. What happened that weekend in June? It was the coup by Prigozhin. So there was crisis in the country. It was the same when you look at the day that Navalny died; you see the same peak. When there is crisis, people in these countries start seeking, let’s say, alternative information, and that’s why public service media and exile media are so important for these audiences. Okay, thank you very much. By the way, this is a small tool we co-developed with an Italian organisation, UNI. It’s called News Media Scan, and if you install it, it shows you which websites in a given country are currently blocked, effectively blocked, and which ones are freely accessible. A nice monitoring tool that gives you a glimpse of what’s going on in your country. So this is to give you an overview of what we do and how that relates, very practically, to the concept of meaningful digital access. Great. I would like to hand over to Paula now to give us a glimpse of the policy aspects. Hi, Paula. Yes,


Ms. Paula Gori: we can hear you. Go ahead, please. Thank you very much for having me, and thank you for your great presentation. I noticed some keywords which I will try to pick up in my presentation as well. So, for those who do not know, EDMO stands for European Digital Media Observatory; we deal with disinformation, and we are actually one of the pillars of the EU’s efforts to tackle disinformation. Really in a nutshell, you can see us as a multi-stakeholder and multidisciplinary platform that tries to understand disinformation. Now Paula is frozen. Can you hear me? Can you see me? Yes, you are back. Okay, very good. Just very quickly, I wanted to reflect a little on the link between UMC and disinformation. First, a step back: Abdallah presented it very well, and I was thinking of a metaphor. When you build connectivity, it’s like when you build infrastructure; think, for example, of a highway. Now, we are all happy that we have a highway, but without rules it wouldn’t be so useful, because there would actually be a risk of having accidents, or, I don’t know, people walking on the highway and then actually having fatal accidents, and so on. So there are a few rules. We are all free on the highway, but there are still a few rules, and this is somehow the same thing that happens with connectivity and content: there is an infrastructure, but we need, not so many, but at least a few rules, and at least principles which are globally shared, because otherwise it’s hard to manage. So this is, if you want, my starting point. Now, when it comes to disinformation, and we discussed this in a prior session, the whole issue is quite complex, and the solution is also complex; it’s a multi-part solution, if you want, with full respect for fundamental rights and freedom of expression. But now, linking it to public service media, which is, if you want, the core of this session, I think there are a few reflections to be made.
The first is that public service media are often seen as one of the solutions to tackle disinformation, and I think rightly so; indeed, at least in the EU, the policy is to invest a lot in supporting independent quality journalism and the infrastructure needed for that journalism to actually be accessible. We also have to be honest: on some occasions, unfortunately, public service media are also sharing disinformation. We shouldn't be blind to that. There were a few occasions in some countries in which this happened or is happening. But I think that once we are honest about that, we can clearly invest on the solution side, where public service media play a key role. And as you rightly said, crisis situations are those moments in which we really have the evidence that they play a key role. We saw it all during the COVID crisis: we were accessing public service media more than before, because we were all looking for information, we were all lost. We were also accessing quite a lot of disinformation and online content, but public service media were in the end the media that everybody was relying on to get safe information. Now, for public service media to work, or to be reliable, I think what is very important is transparency. You may be familiar with what we have in the EU, among others the European Media Freedom Act, and according to the European Media Freedom Act there should be transparency on the ownership, the structure, and the funding of the PSM. Why am I saying that? Because, as you rightly said, the choice lies with the users, with the citizens, and so public service media are not imposing themselves; they are just being there as an alternative, or as one of the alternatives.
It is important for citizens to be sure about who is behind them, how they are funded, and how they work, because this is an element that gives reliability and helps users trust. Then, of course, as we were saying, citizens can access any information and any source they want, and this stays. I just wanted to close, because I know we are a little late in this session, with something I think is very important. There was an unfortunate, how can I say, coincidence between an issue in the business model of traditional media, including the PSM, the shift of advertisers from offline to online, as we all know, and also, if you want, the way that public service media produced and shared the news. I have talked to many PSM journalists, and there is a sort of mea culpa, in the sense that it is important to have a more innovative but also positive approach to selling news, because otherwise there is a risk that users and citizens are simply not interested in quality news, which is honestly a pity. This is something where, for example, the BBC and Deutsche Welle are quite good, very positive examples, because they invest a lot in new ways of producing and sharing content, and also try to be less sensational in their content, their headlines, and so on. Somehow it is important that PSM play a role not only in being trustworthy because of their structure, but also in being attractive because of their content. And I will close here. I just wanted to thank you, really, for the work that you are doing, because it takes some courage to do what you are doing. And this is really in the interest of citizens.


Mr. Patrick Leusch: Thank you very much, Paula. Very interesting points. I think we will come back to one or two, particularly the building blocks you mentioned on trustworthiness, which is extremely important when you look at content shared via the variety of distribution forms you don't own, I mean the platforms, for instance, and particularly user habits, which play an incredible role in all of that. Let's come back to that later. But first of all, I made you wait here on the screen. Poncelet, go ahead. What do you think?


Mr. Poncelet Ileleji: Good morning, everybody. When we look at public service media and meaningful digital access, I personally like to look at it from the grassroots level. What information are community radios able to provide to their citizens? Look at sub-Saharan Africa, for example, my beloved continent, where only about 37% of the population have broadband connectivity. In most cases, if I take the Gambia, my home, you have people who live in rural areas who have an Internet connection but don't have meaningful connectivity, because most of the big telcos put their towers in the big cities, the municipal cities. So people in the villages and rural areas in most parts of Africa get their information from community radio stations. Some of these community radio stations also link up with the BBC or Deutsche Welle to produce information. So the most important thing is that, for public media access, we have to strengthen our community radio stations. We have to give them more digital literacy tools and link them up to community network centres, where they can download relevant information from big media houses to disseminate to their population. I'm looking at it from a grassroots perspective. We have to ask: what does the average common man in a rural area want? He wants information on education, health, and agriculture. That is the basic information he needs to live his life and contribute to well-being. Now, in terms of what Paula talked about on disinformation: yes, if you don't equip community-based public media with the right tools to provide good, up-to-date news that is not disinformation, what are a lot of people doing? They get their information from fake news, which spreads mainly through messaging apps like WhatsApp. Someone just sends a message, it goes viral, and it's fake.
But who debunks all this information? It is the community radio saying: no, that is not true, that X and Y activist has not been arrested; this is what actually happened. So the path to meaningful connectivity on information is strengthening our community-based radio stations, and that can be made possible through what I would call the big public media, like the BBC, like Deutsche Welle, who work in different parts of the world. They have to build partnerships with local community radio stations and give them digital literacy tools for this to be achieved. Lastly, if you look at the Global Digital Compact implementation, one of the key things there is the digital divide. We still have 2.6 billion people in the world who are not connected. And once you equip people with the right information through public media, indirectly they will be connected.


Mr. Patrick Leusch: Thank you. Thank you very much, Poncelet. That is a very important point. Just a question to make sure I understand correctly: what you're referring to is, let's say, the technical infrastructure first, giving more people faster technical access to information through a policy of providing, let's say, high-speed Internet in rural areas, particularly in Africa. Do I understand correctly that you are advocating a sped-up infrastructural approach to provide this?


Mr. Poncelet Ileleji: Yes, in a way. But we have to live with the reality on the ground, and the reality on the ground is that we are still a long way from achieving meaningful broadband connectivity in most parts of Africa. So how do you do it? By developing the capacities of community radio stations. Linked up with a community network centre, they have the Internet connectivity to get good information to disseminate to their community. When you do that, the information that the average person in a rural setting might not be able to get, because he doesn't have meaningful connectivity, he will be able to get through his local radio station, because they are equipped. That's why I'm linking the meaningful access I'm talking about with public media, and the public media I'm referencing is at the community level: the community-based radio stations, which, for world news and other things, can link up to the BBC or Deutsche Welle. You have all these learning platforms; they can link up to them to provide other services in education, health, and agriculture that people need.


Mr. Patrick Leusch: Exactly. That's absolutely right. And that's what's happening, by the way, because we, for instance, work with partners; we can pick content and distribute that content on our own platforms. We co-develop content. And, like the BBC with BBC Media Action, we provide trainings, and that training also relates to shifting local media into online reporting and everything that comes with it. So let me turn to you here in the room. And let me also ask our online moderator, Oliver Inks, if there is a question that has been put forward so far by those who are connected online.


MODERATOR: Good day to everybody. Thank you. I can’t see any questions in the Zoom chat at the moment, but I do see that Paula has her hand up. So perhaps we should give the floor to her.


Mr. Patrick Leusch: Go ahead, Paula. Only on the condition that there are no more questions, of course, from the audience.


Ms. Paula Gori: I just wanted to go back to what Poncelet was saying, because I think it's very important, and I think there are another two elements. One is that we are not fact-checking, and we don't know, what is going through private messaging apps, which is absolutely correct. I will add another layer: we don't even know the answers that generative AI is giving to people. Whenever you ask an AI chatbot about something, it gives you an answer and no one else sees it. It is just between you and the chatbot, which creates an additional element, probably even more scary. The second element, and you, Poncelet, are more familiar with the African continent, but I remember some years ago Meta and Google were sending balloons and drones to provide connectivity in some African regions, and among the conditions was the fact that you could only access a limited number of websites, including, of course, Facebook. If this is the case, and if there is lots of disinformation, but also hate speech and so on, on those platforms, then somehow the users, the citizens, are locked in, because they have connectivity, but it is by no means meaningful or safe, because you are accessing content which is disinformation or, even worse, illegal speech. So I think this is quite important. And just very quickly, again, on messaging: I think it is very linked to the urban and rural divide, and also to the fact that, as human beings, we trust our families and our friends. So it somehow replicates the word-of-mouth situation that we also had in the past, but, as you rightly say, in a far scarier way, because there is still this convincing element that if it comes from online, it is trustworthy.


Mr. Patrick Leusch: Thank you very much. We have a question in the room here. Sir, go ahead.


Giacomo Mazzone: Yeah, it's working. Giacomo Mazzone from Eurovision. I have a question in general to all the speakers. It seems to us that fact-checking is not enough. We need to go towards a more comprehensive, more holistic approach. That means having regulation that will help in the negotiation with the platforms, in order to be more effective in the work that you do. I know that recently there has been a pledge launched by the EBU and the association of other newspapers to the platforms. Can you tell us more about that, if you are aware of it?


Mr. Poncelet Ileleji: If I were to comment, I would first respond to your mention of regulation. We shouldn't have regulation of content; it goes against freedom of speech. The moment you start trying to regulate content, you are infringing on people's rights. Yes, there is a moral issue about what kind of content you produce, and we have to be able to fact-check information, and that is why a lot of countries and organisations are fact-checking information. And the last thing I will say: there is also a moral duty for people. You see most messages, whether on TikTok or on a WhatsApp messaging app, and you get a piece of information and just forward it to other people without even fact-checking it, and you are supposed to be the educated one. In most cases, the people who carry all this hate speech or disinformation are the educated folks, and we have to work hard to change the situation whereby so-called educated folks are using all these platforms to misguide the majority of the populace. But regulation of content is a no-go for me. Thank you.


Mr. Patrick Leusch: Okay, very strong commitment. Thank you very much. Paula, I have this question from Giacomo in mind, which relates to the role of the platforms. I can say from my experience, for public broadcasters, and I think you know that very well, Giacomo, because that's true for public broadcasters as well as for most media, though a little different for commercial media than for public media, I guess, that the challenge is to play the platforms. You can't avoid the platforms, but we don't know what the platforms are doing with our content; you don't know what is in the black box. We have expert teams that check what goes from our newsrooms into the black box, and they can check what comes out of the black boxes in terms of audience, and then they guess what the algorithm is doing with your content and why, and then they advise the newsrooms to adapt the content according to their guessing, without really knowing what the platform is doing with your content. And obviously, many journalists and content producers are torn between wanting to get rid of the platforms and figuring out how to play them best, and that is very difficult. But from your perspective, because EDMO is really at the heart of assessing what that means, what is your assessment from a regulatory point of view? And I know that we are tilting back a little to the European context; we will widen out again to the global context in a minute. Paula.


Ms. Paula Gori: Yes, you got a little frozen, but I hope I got the question. We all get frozen when we talk about the platforms, you know, that's normal. First I wanted to reinforce what was said: regulation should not be on the content. This is very important, and it is also, for example, the approach of the Digital Services Act. It is not about the content; it is about the risks that the way the platforms work can pose. Now, on what you were saying about content on platforms, this is the overall point we have been making for years: there is no transparency in the algorithmic decisions, so we really do not fully know why we are seeing a given piece of news rather than another. And let me also say that probably even the platforms don't know, because, as far as I know, it's an algorithm that works on an algorithm that works on another algorithm. They tweak it so much that, honestly, I fear that on some occasions they have even lost control of all these tweaks to the algorithms. But what we clearly know is that emotions fuel the spread of content, especially negative emotions. Whenever content is emotionally strong, based on fear, division, threat, and so on, it becomes more viral, and this is a way to move the algorithm. This, in my opinion, is unfortunately why some media move to sensationalist content: it moves the algorithm more than plain, unemotional information does. But going back to the regulation we are seeing: I think, and I was saying this in the previous session, that with the global principles we are seeing, with the Global Digital Compact, with the UNESCO guidelines, and so on, we are agreeing on basic principles.
And in the EU especially, as I was saying earlier, what we are doing is looking at whether the way a platform works can be abused. Let's put it in a very simple way: it's not about the content; it's about whether the way you work can be abused by malign actors, creating risks to public security, public health, civic discourse, and so on. And that's where the regulation is, because, as was said, we could not get into the content.


Mr. Patrick Leusch: Thank you very much, Paula, so far.


Mr. Patrick Leusch: There's a lady who has been standing at the microphone for minutes; I didn't want to interrupt. Good answers. Very kind of you. Thank you. The floor is yours.


Audience: My name is Thora. I'm a PhD researcher coming out of Iceland. I'm studying how VLOPs and VLOSEs, the very large online platforms and search engines, are undermining democracy in the EEA. And my problem is, of course, scarcity of data and the black box. Now, I have 20 years' experience working in IT and building large systems, so in my mind I can see it; I can see the black box. But as an academic, the DSA is supposed to give me access, and it is not: the platforms are dragging their feet. So I am asking here, lobbying on behalf of academia: are you doing any academic work and demanding data through the DSA? If not, what is hindering it? And what can we do to fix this problem, so that we are not always theorising from the law while the black box arrives and we study only the outcome, because that is, of course, a futile exercise?


Mr. Patrick Leusch: Thank you. Thora, thank you very much. That's really the right question at this moment, because I was just about to try to link the different aspects we have had so far. The access to this data is one regulatory issue, and the DSA is at the heart of it, and we understand that the EU is slow in pushing that forward. Maybe there is an overarching political item in it, tax or something like that, I don't know exactly. But the question is the following, and it goes to Poncelet and to Paula, but also to you here in the room and online. We touched on the censorship issue at the beginning, which is part of meaningful digital access. Then Poncelet spoke about the challenges in Africa, for instance, which lie partly on another level. I'm not saying there's no censorship, there is, but meaningful access means much more and leads to media skills, but also to technical development. Access to data on the large platforms, and the regulatory questions that come with it, is another aspect. So I have at least three different elements which are not easy to link together when we ask what needs to be done, policy-wise or on a regulatory basis, to push these challenges forward. So where do you think we should start? You mentioned, Paula, the EMFA, for instance, and the DSA has been mentioned, the UNESCO initiative, the Global Digital Compact. There are things in place. Why aren't they working, and what needs to be done to make all these elements that are in place perform better?


Mr. Abdallah Alsalmi: I would like to look at the international human rights aspect of access. Article 19 is, I would say, really outdated, and we need to take another look at it, update it, and renew commitments to it. The other issue is the multi-stakeholder model; since we are at the IGF, it's good to mention it. For a number of years we have been hearing a lot about support for the multi-stakeholder model of governing the internet. Often it comes as a response by some civil society groups and some governments to the efforts of particular governments to reshape the internet as we know it. We really need to energise efforts towards a real multi-stakeholder model. My idea is that we can start with the local branch of your IGF, by building coalitions, talking to your government, and pushing for an internet that is really open and, in a way, regulated to protect its current openness and the fact that it has no borders. We really cannot continue to live with this legal loophole. I'm going to make a comparison now. Shortwave radio and satellite TV are protected by the rules of the International Telecommunication Union, so governments cannot jam them; governments cannot disrupt these broadcasting technologies. But the internet is not protected. Any government can shut down the internet at any time without recourse to any legal text. Any government can block websites, again at will, with no questions asked and no justification provided. So my call to action is about re-energising the local IGF forums and starting from that point.


Mr. Patrick Leusch: Thank you very much. If I may just follow up on data access. A bold statement, particularly saying Article 19 is outdated. So, over to Paula.


Ms. Paula Gori: On data access, quickly, though I would love to have more time. First of all, you are completely right. The current policy framework actually establishes the obligation that the platforms give independent researchers access to both public and private data. What is still missing is the Delegated Act: the Commission should publish the Delegated Act, which makes this operational, and then there should be no excuses. So we really hope this will become active. What EDMO did, already years ago, is a legal analysis of whether access to private data would infringe the GDPR, and you can find on our website a good report saying that it is actually fine to get access to those data. So we are covered on that side too, and we even worked on an independent intermediary body that could work with the digital services coordinators, between researchers and platforms. So I fully agree with you. Just one thing I wanted to say: once we get access to those data, there will be two main issues. The first is: will all organisations be equipped, financially and infrastructurally, to deal with all this data? Because there is a risk of two-speed academic institutions and civil society organisations. The big and rich ones will make it; the smaller ones will not, which is an issue if you look at some specific countries, not only in Europe, though for now the DSA applies to Europe. Think, for example, of the Eastern countries and so on, but also of countries like Italy: I'm not sure many universities would be able to do that.
The second: policymakers must be ready, because if we really access those data, we will understand so many things about disinformation, and also about its impact, that we will probably have to change the whole policy framework once we have the knowledge of what is really happening online, because it is only through those data that we will gain that knowledge. So, just those two points to quickly close.


Mr. Patrick Leusch: Thank you very much, Paula.


Mr. Poncelet Ileleji: Oh, yeah. I totally agree with what Paula and Abdallah said. I will say the multi-stakeholder process is the key to all we are discussing here. Especially when you look at data governance, as the PhD researcher mentioned, data governance has now become a key component of all the work we do. But this multi-stakeholder process involves being able to dialogue with our governments, with civil society, with academia, with legal people, with technical people, so we have to sit with equity, hear each other out, and agree to disagree. If we don't do that, starting at the grassroots level, we will continue to divide ourselves, and instead of building an Internet that is not fragmented, we will continue to have fragmentation at various levels, and that is why disinformation is now such a big thing. If you go back to 2006, Time's Person of the Year was "You". That "You", us, when Time magazine made us Person of the Year, still applies today, and we have to make sure that the information we give out is correct and has a positive impact on our society. That is the key point of this session. And it also links up with meaningful connectivity. We should never forget that many of the people whose lives we want to impact don't have connectivity: 2.6 billion people, according to statistics from the ITU. So let us go back to basics and use our public media, especially those at the grassroots level, and equip them well to build the world we want, so that we get better information for socio-economic development. Thank you.


Mr. Patrick Leusch: Thank you very much for that strong pledge, Poncelet. Last question from my side, two minutes left, a very short one. What is the biggest block we have to move out of the way to go down that path: to strengthen the multi-stakeholder approach, to look deeper into the challenges in the digital divide area, and to come up with, to put it very simply, a better version of Article 19? What is the biggest block in the way?


Mr. Poncelet Ileleji: If I start, I have one simple equation. The Global Digital Compact implementation plus a recrafted WSIS. We are coming to 20 years of the WSIS, the World Summit on the Information Society, the WSIS that led to the IGF. If we get the GDC implementation plus a recrafted WSIS, in my mind that equates, in July, to a stronger, strengthened IGF that will improve lives.


Mr. Patrick Leusch: Thank you very much. Very precise. Abdallah?


Mr. Abdallah Alsalmi: I agree with Poncelet here about the importance of the huge pledges made within the Global Digital Compact, as well as the upcoming review of WSIS. My main concern is that we rely too much on governments, and, as we can see even in democracies, you might end up with a government that doesn't like what you're doing, and as such they either oppose it or don't help you. I look at the example of the recent landmark ruling by the Supreme Court in India, which recognised access to digital life as part of the individual's right to life. I think civil society could start by working at the local level with governments, and if the government doesn't lend an ear, they can always turn to the judiciary and find supporters in other parts of their societies.


Mr. Patrick Leusch: Thank you very much, Abdallah. Nine seconds for you, Paula.


Ms. Paula Gori: Okay, so I fully endorse what was said. I would just say: change the narrative, change the way we frame this whole conversation, make it attractive also for those who don't believe in democracy; and on the other side, involve municipalities. I think they could play a key role here. They are much closer to citizens, and they could be very active in this field.


Mr. Patrick Leusch: Thank you. Thank you very much. I would like to thank my panellists: Poncelet here on stage, and Paula and Abdallah remotely. Thank you very much for your insights and for that great discussion. Thank you all in the audience for your questions and comments, and thanks to those participating online in this session on meaningful digital access and the role of PSM, public service media, in it. And thanks to Flora and her team for the great framing and for organising the technical means for this session. Thank you very much.



Mr. Abdallah Alsalmi

Speech speed

145 words per minute

Speech length

1392 words

Speech time

573 seconds

Meaningful digital access goes beyond simple connectivity to include reliability, affordability, appropriate devices, digital literacy, relevant content in local languages, and safe digital environments

Explanation

Alsalmi argues that meaningful digital access is a comprehensive concept that encompasses multiple elements beyond just having an internet connection. He emphasizes that it’s about the quality of the internet experience and what users can actually accomplish online, not just technical connectivity.


Evidence

Example comparing someone who can only send text messages on WhatsApp over a 2G connection versus someone with super-fast broadband using Apple's latest VR headset. References the Alliance for Affordable Internet's work on this concept.


Major discussion point

Meaningful Digital Access and Connectivity Concepts


Topics

Development | Human rights | Infrastructure


UN’s Universal Meaningful Connectivity (UMC) is a development goal with specific metrics, while meaningful digital access is an outcome focused on user experience quality

Explanation

Alsalmi distinguishes between UMC as a policy goal that organizations like ITU work toward with governments and civil society, versus meaningful digital access which represents the actual outcome and experience quality. Both aim to enable people to use internet for jobs, family communication, and free expression.


Evidence

References the ITU Data Hub dashboard showing country scores from 40-50 (limited connectivity) to 95-100 (target achieved). Mentions specific metrics about daily data usage and internet purposes (business, social networking, job searching).


Major discussion point

Meaningful Digital Access and Connectivity Concepts


Topics

Development | Legal and regulatory | Infrastructure


Article 19 of international human rights law is outdated and needs renewal, with stronger commitments to protect internet openness unlike current legal loopholes

Explanation

Alsalmi argues that current international human rights frameworks are insufficient to protect internet access and that governments can shut down or block internet content without legal recourse. He calls for updated international commitments and legal protections similar to those that exist for shortwave radio and satellite TV.


Evidence

Comparison with shortwave radio and satellite TV, which are protected by International Telecommunication Union rules preventing government jamming, while the internet has no such protections, allowing governments to shut down the internet or block websites at will.


Major discussion point

Policy and Regulatory Framework Improvements


Topics

Human rights | Legal and regulatory | Infrastructure


Multi-stakeholder governance model needs energizing through local IGF forums and coalition building to prevent internet fragmentation

Explanation

Alsalmi advocates for strengthening the multi-stakeholder model of internet governance by starting at local levels through IGF forums. He sees this as a response to efforts by some governments to reshape the internet and as a way to maintain an open, borderless internet.


Evidence

References the IGF context and mentions building coalitions to talk to governments and push for open internet regulation that protects current openness.


Major discussion point

Policy and Regulatory Framework Improvements


Topics

Legal and regulatory | Human rights | Infrastructure


Agreed with

– Mr. Poncelet Ileleji

Agreed on

Multi-stakeholder governance model is essential for internet governance and addressing digital challenges


Civil society should work at local levels and engage judiciary systems when governments don’t support digital rights initiatives

Explanation

Alsalmi suggests that when governments don’t support digital rights efforts, civil society should turn to judicial systems for support. He emphasizes the importance of not relying too heavily on governments since they may change and oppose digital rights work.


Evidence

Cites recent Supreme Court ruling in India that recognized access to digital life as part of individual’s right to life as a landmark example of judicial support for digital rights.


Major discussion point

Policy and Regulatory Framework Improvements


Topics

Human rights | Legal and regulatory | Development



Mr. Poncelet Ileleji

Speech speed

150 words per minute

Speech length

1281 words

Speech time

509 seconds

Community radio stations need strengthening with digital literacy tools and partnerships with international broadcasters to serve rural populations effectively

Explanation

Ileleji argues that in sub-Saharan Africa where only 37% have broadband connectivity, community radio stations are crucial information sources for rural populations. He advocates for strengthening these stations through digital literacy training and partnerships with major international broadcasters like BBC and Deutsche Welle.


Evidence

Statistics showing 37% broadband connectivity in sub-Saharan Africa. Example from Gambia where rural areas get information from community radio stations that link with BBC or Deutsche Welle. Mentions people need information on education, health, and agriculture.


Major discussion point

Meaningful Digital Access and Connectivity Concepts


Topics

Development | Sociocultural | Infrastructure


Infrastructure development must be realistic – focus on equipping community-based media with internet connectivity to disseminate information to those without meaningful broadband access

Explanation

Ileleji acknowledges that achieving meaningful broadband connectivity across Africa will take time, so proposes a practical interim solution. He suggests equipping community radio stations with internet connectivity and linking them to community network centers so they can access and disseminate quality information to their communities.


Evidence

References the reality that meaningful broadband connectivity is still far away in most parts of Africa. Mentions linking community radio stations to community network centers and learning platforms.


Major discussion point

Meaningful Digital Access and Connectivity Concepts


Topics

Development | Infrastructure | Sociocultural


Community radio stations play a crucial role in debunking fake news that spreads through messaging apps like WhatsApp in rural areas

Explanation

Ileleji explains that without proper information sources, fake news spreads rapidly through messaging apps in rural communities. Community radio stations serve as trusted local sources that can fact-check and debunk misinformation, providing accurate information about local events and issues.


Evidence

Example of fake news spreading through WhatsApp about activist arrests, with community radio stations providing corrections and accurate information about what actually happened.


Major discussion point

Disinformation and Public Service Media Role


Topics

Sociocultural | Human rights | Development


Content regulation should be avoided as it infringes on freedom of speech; focus should be on fact-checking and moral responsibility of users

Explanation

Ileleji strongly opposes content regulation, arguing it violates freedom of speech rights. Instead, he advocates for fact-checking mechanisms and emphasizes the moral responsibility of educated users who often spread disinformation through social platforms without verification.


Evidence

Points out that educated people are often the ones spreading hate speech and disinformation on platforms like TikTok and WhatsApp without fact-checking before sharing.


Major discussion point

Platform Regulation and Algorithm Transparency


Topics

Human rights | Legal and regulatory | Sociocultural


Agreed with

– Ms. Paula Gori

Agreed on

Content regulation should be avoided in favor of other approaches to address disinformation


Disagreed with

– Giacomo Mazzone

Disagreed on

Content regulation approach


Global Digital Compact implementation combined with WSIS review could strengthen IGF and improve lives globally

Explanation

Ileleji proposes that implementing the Global Digital Compact alongside a recrafted World Summit on the Information Society, which is approaching its 20-year anniversary, would create a stronger IGF framework. He sees this combination as key to addressing digital divide challenges and improving global connectivity.


Evidence

References the upcoming 20-year anniversary of WSIS and notes that WSIS led to the creation of IGF. Mentions 2.6 billion people still lacking connectivity according to ITU statistics.


Major discussion point

Policy and Regulatory Framework Improvements


Topics

Legal and regulatory | Development | Infrastructure


Agreed with

– Mr. Abdallah Alsalmi

Agreed on

Multi-stakeholder governance model is essential for internet governance and addressing digital challenges



Ms. Paula Gori

Speech speed

174 words per minute

Speech length

1908 words

Speech time

657 seconds

Public service media serves as a solution to tackle disinformation, particularly during crisis situations like COVID-19 when people seek reliable information sources

Explanation

Gori argues that public service media plays a crucial role in combating disinformation, especially during crises when people desperately need trustworthy information. She notes that during COVID-19, people increasingly turned to public service media for reliable information despite also accessing disinformation online.


Evidence

COVID-19 pandemic example where people accessed public service media more than before because they were seeking reliable information during uncertainty, even while disinformation was also prevalent online.


Major discussion point

Disinformation and Public Service Media Role


Topics

Sociocultural | Human rights | Development


Agreed with

– Mr. Patrick Leusch

Agreed on

Public service media plays a crucial role during crisis situations


Transparency in ownership, structure, and funding of public service media is essential for building citizen trust and reliability

Explanation

Gori emphasizes that for public service media to be effective, citizens must understand who owns them, how they’re structured, and how they’re funded. This transparency is crucial for building trust and allowing citizens to make informed choices about their information sources, as mandated by frameworks like the European Media Freedom Act.


Evidence

References the European Media Freedom Act requirements for transparency in PSM ownership, structure, and funding. Emphasizes that choice remains with users/citizens who need this information to trust sources.


Major discussion point

Disinformation and Public Service Media Role


Topics

Legal and regulatory | Human rights | Sociocultural


Public service media must adopt more innovative and positive approaches to content production to remain attractive to audiences

Explanation

Gori acknowledges that traditional media, including public service media, face challenges in their business models and content approach. She argues that PSM must innovate in content production and sharing while avoiding sensationalism to maintain audience interest in quality journalism.


Evidence

Mentions conversations with PSM journalists who acknowledge a ‘mea culpa’ about needing more innovative approaches. Cites BBC and Deutsche Welle as positive examples investing in new content production and sharing methods while avoiding sensationalism.


Major discussion point

Disinformation and Public Service Media Role


Topics

Sociocultural | Economic | Development


Regulation should target risks posed by platform operations rather than content itself, as demonstrated by the EU’s Digital Services Act approach

Explanation

Gori advocates for regulation that focuses on the risks created by how platforms operate rather than regulating content directly. She explains that the Digital Services Act approach examines whether platform operations can be misused by malign actors to create risks to public security, health, and civic discourse.


Evidence

References the Digital Services Act as an example of risk-based regulation rather than content regulation. Explains focus on risks to public security, public health, and civic discourse from platform operational methods.


Major discussion point

Platform Regulation and Algorithm Transparency


Topics

Legal and regulatory | Human rights | Cybersecurity


Agreed with

– Mr. Poncelet Ileleji

Agreed on

Content regulation should be avoided in favor of other approaches to address disinformation


Lack of algorithmic transparency prevents understanding of why certain content is promoted, with platforms potentially losing control over their own complex algorithmic systems

Explanation

Gori highlights the problem of algorithmic opacity, explaining that neither users nor possibly even platforms themselves fully understand how algorithmic decisions are made. She suggests that platforms may have lost control over their own systems due to excessive tweaking of algorithms built upon other algorithms.


Evidence

Explains that algorithms work on algorithms that work on other algorithms, with so much tweaking that platforms may have lost control. Notes that negative emotions fuel content virality, leading some media toward sensationalism.


Major discussion point

Platform Regulation and Algorithm Transparency


Topics

Legal and regulatory | Sociocultural | Human rights


Municipalities should be involved as key players closer to citizens who can be active in digital rights advocacy

Explanation

Gori suggests that local municipalities should play a more active role in digital rights and meaningful connectivity issues because they are closer to citizens than national governments. She sees them as potentially more responsive and effective advocates for citizen needs in the digital space.


Major discussion point

Policy and Regulatory Framework Improvements


Topics

Legal and regulatory | Development | Human rights



Mr. Patrick Leusch

Speech speed

134 words per minute

Speech length

3652 words

Speech time

1625 seconds

International broadcasters invest in censorship circumvention technologies to reach audiences in countries with limited press freedom, sharing research with other independent media

Explanation

Leusch explains that public service media like Deutsche Welle and BBC invest significantly in understanding and circumventing censorship to reach audiences in countries with restricted information access. This research and technology is shared not only between major broadcasters but also with exile media and independent outlets.


Evidence

Deutsche Welle is blocked in China, Iran, Egypt, Belarus, Russia, and Turkey. Circumvention work started in 2012. Mentions collaboration with the BBC and sharing with exile media such as Meduza for Russia. Reaches millions weekly in Iran despite censorship.


Major discussion point

Internet Censorship and Circumvention Technologies


Topics

Human rights | Infrastructure | Cybersecurity


Internet censorship is technically and politically complex, requiring permanent adaptation of mitigation measures as censorship methods vary by country and policy

Explanation

Leusch describes internet censorship as a complex, evolving challenge that requires constant adaptation. He explains that censorship methods differ significantly between countries and policies, requiring a ‘cat and mouse’ approach to develop appropriate countermeasures for each specific situation.


Evidence

Compares Iranian censorship (efficient, but imposed on an originally open internet), Chinese censorship (an inner internet from the beginning), and Russian censorship (a step-by-step testing approach). Mentions the variety of technologies, policies, and methods used.


Major discussion point

Internet Censorship and Circumvention Technologies


Topics

Cybersecurity | Human rights | Infrastructure


Circumvention work is legally justified under Article 19 of the Universal Declaration of Human Rights regarding access to information, which supersedes national censorship laws

Explanation

Leusch addresses the legal and ethical questions surrounding circumvention work by public broadcasters. He explains that the German Bundestag Legal Service confirmed that Article 19 of the Universal Declaration of Human Rights, on access to information, provides the legal basis for this work, as international law supersedes national censorship laws.


Evidence

German Bundestag Legal Service research confirming Article 19 as the legal basis. Notes that the countries engaging in censorship have committed to the Universal Declaration of Human Rights, making international law applicable over national censorship laws.


Major discussion point

Internet Censorship and Circumvention Technologies


Topics

Human rights | Legal and regulatory | Infrastructure


Crisis situations drive increased demand for alternative information sources, as evidenced by usage spikes during protests and political upheavals

Explanation

Leusch demonstrates that during times of crisis, internet shutdowns, or major political events, people in censored countries actively seek alternative information sources. This pattern shows the critical importance of maintaining access to independent media during crucial moments.


Evidence

Charts showing usage spikes during the Iran protests two years ago, the Prigozhin coup attempt in Russia, and the death of Navalny. Clear correlation between crisis events and increased circumvention tool usage.


Major discussion point

Internet Censorship and Circumvention Technologies


Topics

Human rights | Sociocultural | Infrastructure


Agreed with

– Ms. Paula Gori

Agreed on

Public service media plays a crucial role during crisis situations



Audience

Speech speed

156 words per minute

Speech length

181 words

Speech time

69 seconds

Academic researchers need better access to platform data through proper implementation of DSA provisions to study platform impacts on democracy

Explanation

An academic researcher (Thora) studying how large platforms undermine democracy in the EEA argues that the Digital Services Act should provide data access but platforms are not complying. She emphasizes that without this data, academic research remains limited to theorizing about inputs and studying outcomes without understanding the ‘black box’ operations.


Evidence

PhD research on VLOPs and VLOSEs undermining democracy, 20 years of IT experience, platforms dragging their feet on DSA data access requirements, current research limited to studying outcomes rather than processes.


Major discussion point

Platform Regulation and Algorithm Transparency


Topics

Legal and regulatory | Human rights | Development



MODERATOR

Speech speed

170 words per minute

Speech length

37 words

Speech time

13 seconds




Giacomo Mazzone

Speech speed

110 words per minute

Speech length

99 words

Speech time

53 seconds

Fact-checking alone is insufficient and requires a more comprehensive, holistic approach including regulation to negotiate effectively with platforms

Explanation

Mazzone argues that current fact-checking efforts are not adequate to address disinformation and platform-related challenges. He advocates for a broader approach that includes regulatory frameworks to strengthen negotiations with platforms and make content verification efforts more effective.


Evidence

References a recent pledge to platforms launched by the EBU and newspaper associations, suggesting coordinated industry efforts to address platform accountability.


Major discussion point

Platform Regulation and Algorithm Transparency


Topics

Legal and regulatory | Human rights | Sociocultural


Disagreed with

– Mr. Poncelet Ileleji

Disagreed on

Content regulation approach


Agreements

Agreement points

Multi-stakeholder governance model is essential for internet governance and addressing digital challenges

Speakers

– Mr. Abdallah Alsalmi
– Mr. Poncelet Ileleji

Arguments

Multi-stakeholder governance model needs energizing through local IGF forums and coalition building to prevent internet fragmentation


Global Digital Compact implementation combined with WSIS review could strengthen IGF and improve lives globally


Summary

Both speakers strongly advocate for strengthening multi-stakeholder approaches to internet governance, with Alsalmi emphasizing local IGF forums and coalition building, while Ileleji proposes combining Global Digital Compact implementation with WSIS review to strengthen the IGF framework.


Topics

Legal and regulatory | Human rights | Infrastructure


Content regulation should be avoided in favor of other approaches to address disinformation

Speakers

– Ms. Paula Gori
– Mr. Poncelet Ileleji

Arguments

Regulation should target risks posed by platform operations rather than content itself, as demonstrated by the EU’s Digital Services Act approach


Content regulation should be avoided as it infringes on freedom of speech; focus should be on fact-checking and moral responsibility of users


Summary

Both speakers reject direct content regulation as a solution, with Gori advocating for risk-based regulation of platform operations and Ileleji emphasizing that content regulation violates freedom of speech principles.


Topics

Legal and regulatory | Human rights


Public service media plays a crucial role during crisis situations

Speakers

– Ms. Paula Gori
– Mr. Patrick Leusch

Arguments

Public service media serves as a solution to tackle disinformation, particularly during crisis situations like COVID-19 when people seek reliable information sources


Crisis situations drive increased demand for alternative information sources, as evidenced by usage spikes during protests and political upheavals


Summary

Both speakers recognize that public service media becomes particularly important during crises, with Gori noting increased reliance during COVID-19 and Leusch providing evidence of usage spikes during political upheavals and protests.


Topics

Human rights | Sociocultural | Infrastructure


Similar viewpoints

Both speakers reference Article 19 of international human rights law as fundamental to internet access rights, though Alsalmi argues it needs updating while Leusch uses it as current legal justification for circumvention work.

Speakers

– Mr. Abdallah Alsalmi
– Mr. Patrick Leusch

Arguments

Article 19 of international human rights law is outdated and needs renewal, with stronger commitments to protect internet openness unlike current legal loopholes


Circumvention work is legally justified under Article 19 of the Universal Declaration of Human Rights regarding access to information, which supersedes national censorship laws


Topics

Human rights | Legal and regulatory


Both speakers recognize the challenge of misinformation spread through digital platforms and the need for trusted sources to counter it, though they focus on different solutions – Gori on algorithmic transparency and Ileleji on community radio fact-checking.

Speakers

– Ms. Paula Gori
– Mr. Poncelet Ileleji

Arguments

Lack of algorithmic transparency prevents understanding of why certain content is promoted, with platforms potentially losing control over their own complex algorithmic systems


Community radio stations play a crucial role in debunking fake news that spreads through messaging apps like WhatsApp in rural areas


Topics

Sociocultural | Human rights | Development


Both speakers emphasize the importance of local-level approaches and working with available resources rather than waiting for top-down solutions, whether through local civil society engagement or community-based infrastructure development.

Speakers

– Mr. Abdallah Alsalmi
– Mr. Poncelet Ileleji

Arguments

Civil society should work at local levels and engage judiciary systems when governments don’t support digital rights initiatives


Infrastructure development must be realistic – focus on equipping community-based media with internet connectivity to disseminate information to those without meaningful broadband access


Topics

Development | Human rights | Infrastructure


Unexpected consensus

Opposition to direct content regulation despite different professional backgrounds

Speakers

– Ms. Paula Gori
– Mr. Poncelet Ileleji

Arguments

Regulation should target risks posed by platform operations rather than content itself, as demonstrated by the EU’s Digital Services Act approach


Content regulation should be avoided as it infringes on freedom of speech; focus should be on fact-checking and moral responsibility of users


Explanation

Despite Gori working in European policy frameworks that could support regulation and Ileleji working in African development contexts, both strongly oppose direct content regulation, showing unexpected alignment across different regional and professional perspectives on fundamental free speech principles.


Topics

Legal and regulatory | Human rights


Acknowledgment of public service media limitations and need for improvement

Speakers

– Ms. Paula Gori
– Mr. Patrick Leusch

Arguments

Public service media must adopt more innovative and positive approaches to content production to remain attractive to audiences


International broadcasters invest in censorship circumvention technologies to reach audiences in countries with limited press freedom, sharing research with other independent media


Explanation

Both speakers, while advocating for public service media, acknowledge its current limitations and need for adaptation – Gori noting the need for innovation to remain attractive, and Leusch describing the extensive technical efforts required to reach audiences, showing realistic assessment rather than defensive positioning.


Topics

Sociocultural | Human rights | Infrastructure


Overall assessment

Summary

The speakers demonstrated strong consensus on fundamental principles including the importance of multi-stakeholder governance, opposition to direct content regulation, the crucial role of public service media during crises, and the need for local-level approaches to digital challenges. They also shared realistic assessments of current limitations and the need for innovative solutions.


Consensus level

High level of consensus on core principles with complementary rather than conflicting approaches. The agreement spans technical, policy, and implementation perspectives, suggesting a mature understanding of the complex challenges in meaningful digital access. This consensus provides a strong foundation for collaborative action across different sectors and regions, though implementation details may require further coordination.


Differences

Different viewpoints

Content regulation approach

Speakers

– Mr. Poncelet Ileleji
– Giacomo Mazzone

Arguments

Content regulation should be avoided as it infringes on freedom of speech; focus should be on fact-checking and moral responsibility of users


Fact-checking alone is insufficient and requires a more comprehensive, holistic approach including regulation to negotiate effectively with platforms


Summary

Ileleji strongly opposes any content regulation as a violation of freedom of speech, advocating instead for fact-checking and user responsibility. Mazzone argues that fact-checking alone is inadequate and calls for more comprehensive regulatory approaches including platform regulation.


Topics

Human rights | Legal and regulatory | Sociocultural


Unexpected differences

Scope of regulatory intervention needed

Speakers

– Mr. Poncelet Ileleji
– Giacomo Mazzone

Arguments

Content regulation should be avoided as it infringes on freedom of speech; focus should be on fact-checking and moral responsibility of users


Fact-checking alone is insufficient and requires a more comprehensive, holistic approach including regulation to negotiate effectively with platforms


Explanation

This disagreement is unexpected because both speakers are concerned about disinformation and platform accountability, yet they have fundamentally different views on the role of regulation. Ileleji, coming from an African development perspective, takes a strong anti-regulation stance emphasizing individual responsibility, while Mazzone, representing European broadcasting interests, advocates for stronger regulatory frameworks. This suggests different regional or institutional perspectives on balancing freedom of expression with platform accountability.


Topics

Human rights | Legal and regulatory | Sociocultural


Overall assessment

Summary

The discussion revealed relatively limited but significant disagreements, primarily centered on regulatory approaches to content and platforms. While speakers largely agreed on fundamental goals like combating disinformation, ensuring meaningful digital access, and strengthening multi-stakeholder governance, they differed on implementation mechanisms and the appropriate level of regulatory intervention.


Disagreement level

Moderate disagreement with significant implications. The main tension between pro-regulation and anti-regulation approaches reflects broader global debates about internet governance, freedom of expression, and platform accountability. These disagreements could impact policy development, as they represent different philosophical approaches to addressing digital challenges – one emphasizing regulatory frameworks and institutional solutions, the other prioritizing individual responsibility and minimal intervention. The disagreements also reflect different regional perspectives and institutional contexts, which could complicate international cooperation on digital governance issues.




Takeaways

Key takeaways

Meaningful digital access requires going beyond basic connectivity to include reliability, affordability, appropriate devices, digital literacy, relevant local content, and safe digital environments


Public service media plays a crucial role in combating disinformation and providing reliable information, especially during crisis situations


Community radio stations are essential for reaching rural populations in developing countries and need strengthening through partnerships with international broadcasters


Internet censorship is a growing global challenge requiring sophisticated circumvention technologies, with legal justification under Article 19 of the Universal Declaration of Human Rights


Platform algorithm transparency is lacking, preventing understanding of content promotion mechanisms and creating risks for democratic discourse


Current international legal frameworks such as Article 19 are outdated and need updating to address modern internet governance challenges


Multi-stakeholder governance models need strengthening through local IGF forums and coalition building to prevent internet fragmentation


Resolutions and action items

Re-energize local IGF forums to build coalitions and push governments for open internet policies


Strengthen community radio stations with digital literacy tools and partnerships with international broadcasters


Implement proper data access provisions under the Digital Services Act for academic researchers


Work toward updating Article 19 of international human rights law to address modern digital access challenges


Combine Global Digital Compact implementation with WSIS review to strengthen IGF


Engage civil society at local levels and work with judiciary systems when governments don’t support digital rights


Involve municipalities as key players in digital rights advocacy due to their proximity to citizens


Unresolved issues

How to effectively balance content regulation with freedom of speech concerns


Addressing the financial and infrastructural capacity gaps between large and small organizations when accessing platform data


Determining what policy changes will be needed once full platform data access reveals the true extent of disinformation impacts


Resolving the tension between platform dependence and editorial independence for public service media


Bridging the digital divide for 2.6 billion people still without internet connectivity


Establishing effective mechanisms to prevent government internet shutdowns and website blocking


Creating sustainable funding models for circumvention technologies and mirror servers


Suggested compromises

Focus regulation on platform operational risks rather than content to preserve freedom of speech while addressing harmful effects


Develop independent intermediary bodies to facilitate data access between platforms, regulators, and researchers


Combine infrastructure development with community media strengthening as a realistic approach to meaningful connectivity in underserved areas


Balance transparency requirements for public service media with operational security needs for circumvention activities


Engage multiple stakeholders (government, civil society, academia, technical experts) in dialogue while accepting that parties may ‘agree to disagree’ on some issues


Thought provoking comments

We need to go beyond just simple connectivity and beyond just having a device that is connected to the internet because it’s all about the experience, it’s all about what the internet users can make of the internet.

Speaker

Mr. Abdallah Alsalmi


Reason

This comment reframes the entire discussion by distinguishing between mere technical access and meaningful digital experience. It introduces the crucial concept that connectivity without context, skills, and relevant content is insufficient for true digital inclusion.


Impact

This foundational insight set the framework for the entire discussion, leading other speakers to build upon this distinction throughout the session. It shifted the conversation from technical infrastructure to human-centered outcomes and user experience.


I like to look at it from a grassroots level. What information are community radios able to provide to their citizens? In most of the cases, you look at sub-Saharan Africa, for example, my beloved continent, where only about 37% of the population have broadband connectivity.

Speaker

Mr. Poncelet Ileleji


Reason

This comment challenges the discussion’s implicit focus on high-tech solutions by grounding it in real-world constraints. It highlights how meaningful access must work within existing infrastructure limitations and emphasizes the continued importance of traditional media as bridges to digital access.


Impact

This perspective fundamentally shifted the discussion from theoretical policy frameworks to practical implementation challenges. It forced other participants to consider how solutions must be adapted to different technological and economic contexts, leading to more nuanced policy recommendations.


We even don’t know the answers that Gen AI is giving to people. So whenever you ask an AI chatbot about something, it is giving you an answer and no one knows it. It is like between you and the chatbot, which is creating an additional element, probably even more scary.

Speaker

Ms. Paula Gori


Reason

This observation introduces an entirely new dimension to the information access problem that hadn’t been previously discussed. It highlights how AI systems create invisible information silos that are even more opaque than social media algorithms.


Impact

This comment expanded the scope of the discussion beyond traditional censorship and platform algorithms to include AI-mediated information access. It added a new layer of complexity to the meaningful access challenge and influenced the conversation toward more comprehensive regulatory approaches.


Article 19 is, I would say, really outdated and we need to have another look at it, update it, and renew commitments to it… Any government can shut down the internet at any time without due recourse to legal background or text.

Speaker

Mr. Abdallah Alsalmi


Reason

This is a bold critique of fundamental international human rights law, arguing that existing legal frameworks are inadequate for the digital age. It challenges participants to think beyond current legal structures and consider more fundamental reforms.


Impact

This comment shifted the discussion from operational challenges to fundamental legal and rights-based frameworks. It elevated the conversation to question basic assumptions about how digital rights should be protected internationally, leading to discussions about multi-stakeholder governance and the need for new international agreements.


We shouldn’t have regulation on content. It goes against freedom of speech. So immediately you start trying to regulate content, then you are infringing on the rights of people.

Speaker

Mr. Poncelet Ileleji


Reason

This comment introduces a crucial tension in the discussion by firmly establishing the boundary between acceptable and unacceptable regulatory approaches. It forces the conversation to grapple with the fundamental conflict between combating misinformation and preserving free speech.


Impact

This strong position created a defining moment in the discussion, forcing other participants to clarify their regulatory proposals and distinguish between content regulation and platform behavior regulation. It led to more nuanced discussions about risk-based rather than content-based approaches to platform governance.


If we really access those data, we will understand so many things about disinformation, and also about the impact, that probably we will have to change the whole policy framework once we will have the knowledge of what is happening really online.

Speaker

Ms. Paula Gori


Reason

This comment reveals the profound uncertainty underlying current policy approaches and suggests that access to platform data might fundamentally change our understanding of digital information systems. It acknowledges that current policies may be based on incomplete information.


Impact

This insight added a meta-level perspective to the discussion, suggesting that the policy solutions being discussed might themselves need to be reconsidered once better data becomes available. It introduced humility into the policy discussion and emphasized the importance of evidence-based approaches.


Overall assessment

These key comments collectively transformed what could have been a technical discussion about internet access into a multifaceted exploration of digital rights, governance, and social equity. The progression from Alsalmi’s foundational distinction between connectivity and meaningful access, through Ileleji’s grassroots reality check, to Gori’s insights about AI and data transparency, created a comprehensive framework that addressed technical, social, legal, and ethical dimensions. The comments built upon each other to reveal the complexity of meaningful digital access, moving the discussion from simple solutions to nuanced understanding of interconnected challenges. The tension between Ileleji’s strong stance on content regulation and others’ regulatory proposals created productive friction that led to more sophisticated policy thinking. Overall, these interventions elevated the discussion from operational concerns to fundamental questions about digital rights, democratic governance, and global equity in the digital age.


Follow-up questions

What is the legal basis for public service media to provide circumvention tools and VPN explainers to audiences in censored countries?

Speaker

Mr. Patrick Leusch


Explanation

This raises important legal and ethical questions about the mandate and authority of public broadcasters to engage in circumvention activities. The question was resolved through consultation with the German Bundestag Legal Service, which cited Article 19 of the UN Human Rights Charter


How can we update and modernize Article 19 of the UN Human Rights Charter regarding access to information?

Speaker

Mr. Abdallah Alsalmi


Explanation

Article 19 is described as ‘outdated’ and in need of renewal to address modern digital access challenges and internet governance issues


When will the European Commission publish the Delegated Act to make platform data access operational for researchers?

Speaker

Ms. Paula Gori and Thora (PhD researcher)


Explanation

The DSA establishes obligations for platforms to provide data access to researchers, but the operational framework through the Delegated Act is still missing, hindering academic research


How can we establish legal protections for internet access similar to those that exist for shortwave radio and satellite TV?

Speaker

Mr. Abdallah Alsalmi


Explanation

Unlike traditional broadcasting technologies protected by International Telecommunication Union rules, the internet lacks legal protection against government shutdowns and blocking


What are the answers that AI chatbots are giving to users about news and information?

Speaker

Ms. Paula Gori


Explanation

There’s no oversight or knowledge of what information AI systems provide to users, creating an information gap potentially more concerning than that of private messaging apps


How can smaller academic institutions and civil society organizations be equipped to handle large datasets from platforms?

Speaker

Ms. Paula Gori


Explanation

There’s concern about creating a two-speed system where only well-funded institutions can analyze platform data, particularly affecting Eastern European countries and smaller organizations


How can we strengthen partnerships between international broadcasters and community radio stations in developing countries?

Speaker

Mr. Poncelet Ileleji


Explanation

Community radio stations need digital literacy tools and connections to larger media organizations to provide reliable information and counter disinformation at the grassroots level


What is the EBU pledge to platforms that was recently launched?

Speaker

Giacomo Mazzone


Explanation

A specific initiative by the European Broadcasting Union and newspaper associations directed at platforms was mentioned but not elaborated upon


How can we change the narrative around digital rights and democracy to make it attractive to those who don’t believe in democracy?

Speaker

Ms. Paula Gori


Explanation

There’s a need to reframe conversations about digital access and rights in ways that appeal to broader audiences beyond those already committed to democratic values


What role can municipalities play in strengthening meaningful digital access and combating disinformation?

Speaker

Ms. Paula Gori


Explanation

Local governments may be better positioned than national governments to work directly with citizens on digital access and information quality issues


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Day 0 Event #150 Digital Rights in Partnership Strategies for Impact


Session at a glance

Summary

This discussion focused on digital rights and partnerships, examining strategies for protecting human rights in online environments through cross-sector collaboration. The panel, moderated by Peggy Hicks from the UN Office of the High Commissioner for Human Rights, brought together representatives from civil society, tech companies, multi-stakeholder organizations, and the European Commission to address challenges in safeguarding digital human rights.


Ian Barber from Global Partners Digital highlighted significant challenges facing civil society organizations, including funding crises, capacity issues, and the erosion of multi-stakeholder governance approaches. He emphasized that civil society organizations are struggling to meaningfully engage in policy processes while facing resource constraints and burnout. Jason Pielemeier from the Global Network Initiative discussed how GNI has successfully expanded its membership globally, bringing diverse perspectives from over 100 organizations across all continents to address tech governance challenges collaboratively.


Alex Walden from Google outlined the technical and operational challenges companies face in content moderation, particularly balancing harm prevention with freedom of expression at scale. She emphasized the importance of stakeholder engagement through forums like IGF and organizations like GNI to incorporate civil society feedback into company policies. Esteve Sanz from the European Commission described the EU’s comprehensive approach to digital rights, including the Digital Services Act and efforts to address the gap between international commitments and actual practice regarding digital repression.


The panelists acknowledged the tension between human rights considerations and competing priorities like national security and economic innovation. However, they argued that these concerns are not mutually exclusive and that human rights approaches can actually reinforce security and innovation goals. The discussion concluded with examples of successful collaborative initiatives and emphasized the critical importance of transparency, accountability mechanisms, and continued multi-stakeholder engagement in protecting digital rights globally.


Keypoints

## Major Discussion Points:


– **Challenges facing civil society in digital rights advocacy**: Including funding crises, capacity issues, burnout, and the erosion of multi-stakeholder approaches in governance processes, particularly affecting organizations in the Global South who are already under-resourced.


– **Multi-stakeholder collaboration models and their effectiveness**: Discussion of how organizations like the Global Network Initiative (GNI) work to integrate diverse perspectives from civil society, companies, academics, and investors, with emphasis on expanding global representation beyond North America and Europe.


– **Technical and operational challenges for tech companies**: Balancing the prevention of online harms while respecting human rights, particularly freedom of expression, dealing with issues of scale, speed of content moderation, and navigating complex regulatory environments across different jurisdictions.


– **International cooperation and regulatory frameworks**: The European Union’s approach to digital rights through legislation like the Digital Services Act, the gap between diplomatic commitments and real-world implementation of digital rights protections, and the role of international processes like WSIS+20.


– **Accountability mechanisms and transparency in digital rights partnerships**: Discussion of how to ensure accountability in cross-sector partnerships, particularly when working in the Global South, including the need for transparency, ongoing engagement, and effective watchdog functions.


## Overall Purpose:


The discussion aimed to foster cross-sector collaboration between civil society, tech companies, governments, and international organizations to strengthen human rights protection in online environments. The panel sought to identify challenges, share good practices, and explore strategies for more effective partnerships in addressing digital rights issues globally.


## Overall Tone:


The discussion maintained a professional and collaborative tone throughout, though it acknowledged serious challenges in the field. While panelists expressed concerns about “digital depression” and the gap between commitments and reality, the conversation remained constructive and solution-oriented. There was a notable effort to balance realism about current difficulties with optimism about the potential for meaningful collaboration and the continued importance of defending digital rights. The tone became slightly more hopeful toward the end as panelists shared specific examples of successful initiatives and partnerships.


Speakers

**Speakers from the provided list:**


– **Peggy Hicks** – Works with the Office of the High Commissioner for Human Rights in Geneva


– **Alex Walden** – Global Policy Lead for Human Rights and Freedom of Expression at Google


– **Ian Barber** – Legal and Advocacy Lead at Global Partners Digital


– **Esteve Sanz** – Head of Sector for Internet Governance and Multi-Stakeholder Dialogue at the European Commission


– **Jason Pielemeier** – Executive Director of the Global Network Initiative


– **Audience** – Alejandro from Access Now (asked a question during the Q&A session)


**Additional speakers:**


None identified beyond those in the provided list of speakers.


Full session report

# Digital Rights and Partnerships: A Comprehensive Discussion Summary


## Introduction and Context


This panel discussion, moderated by Peggy Hicks from the UN Office of the High Commissioner for Human Rights in Geneva, brought together key stakeholders to examine strategies for protecting human rights in online environments through cross-sector collaboration. The conversation featured Ian Barber from Global Partners Digital, Jason Pielemeier from the Global Network Initiative, Alex Walden from Google (joining from Oslo), and Esteve Sanz from the European Commission.


Peggy Hicks opened by highlighting OHCHR’s recent work in digital rights, including a Brazil judiciary event and a MENA region study examining how digital technology affects human rights defenders. She noted Norway’s resolution calling for assessment of risks faced by human rights defenders through digital technology, setting the stage for a discussion on practical strategies for strengthening partnerships across sectors whilst addressing systemic challenges threatening effective digital rights advocacy.


The discussion took place against a backdrop of increasing digital repression worldwide, funding challenges for civil society organisations, and growing tensions between human rights considerations and competing priorities such as national security and economic innovation.


## The False Dichotomy: Human Rights vs. Security and Innovation


A central theme throughout the discussion was challenging the perceived tension between human rights considerations and other priorities. Alex Walden from Google articulated this position most clearly, arguing that “in order to achieve national security interests, in order to focus on ongoing innovation and have competition in the market, we have to ensure that human rights is integrated across those conversations and remains a priority… we have to do all of them at the same time.”


Ian Barber supported this perspective, arguing that human rights approaches and security outcomes can be mutually reinforcing rather than opposing concepts. This reframing challenges the prevailing narrative that positions human rights as an obstacle to security or innovation, and offers a strategic approach to addressing the funding crisis facing civil society organisations.


Esteve Sanz demonstrated this integrated approach through the EU’s legislative process for the Digital Services Act, which he described as “complex, almost miraculous” in successfully balancing multiple concerns using the Charter of Fundamental Rights as a framework. This example provided concrete evidence that comprehensive regulatory approaches can address multiple priorities simultaneously without sacrificing fundamental rights protections.


## Civil Society Challenges and the Funding Crisis


Ian Barber presented a sobering assessment of the challenges facing civil society organisations in digital rights advocacy. He identified what he termed a “narrative crisis,” where funding has increasingly shifted away from human rights approaches towards national security and economic impact priorities. This shift has created significant capacity issues for civil society organisations, leading to layoffs, burnout, and insufficient expertise to participate effectively in policy forums.


Barber emphasised that these challenges are particularly acute for organisations in the Global South, which were already under-resourced and now face even greater difficulties in meaningfully engaging with policy processes. The proliferation of forums and processes has created an additional burden, making it difficult for under-resourced organisations to keep up and participate meaningfully across multiple venues.


The civil society representative argued that effective collaboration requires moving beyond tokenistic engagement to genuine power-sharing arrangements. He stressed that “the most impactful forms [of collaboration] are going to be those that truly shift power and resources back to civil society,” challenging other panellists to consider concrete accountability mechanisms rather than remaining at the level of aspirational statements about partnership.


## Multi-Stakeholder Collaboration Models and Global Engagement


Jason Pielemeier from the Global Network Initiative provided a contrasting perspective, highlighting successful examples of expanding multi-stakeholder engagement globally. He described GNI’s intentional growth from its original North American and European focus to encompass over 100 members across four constituencies (companies, civil society organisations, academics, and investors) representing all continents.


Pielemeier acknowledged the emotional toll of working in digital rights, coining the term “digital depression” to complement discussions of digital repression. Despite these challenges, he maintained an optimistic perspective, arguing that “the Internet is still an incredibly vibrant and critical space, especially when you compare it to offline mediums for free expression and freedom of association and assembly.”


He also noted the misappropriation of language in policy discussions, specifically mentioning how the phrase “fork in the road” was being used inappropriately in some contexts. The GNI representative emphasised the importance of creating concrete forums that bring stakeholders together around specific, tangible issues rather than abstract discussions.


Importantly, Pielemeier highlighted ongoing collaboration between GNI and Global Partners Digital on WSIS engagement, including workshops in nine countries to involve wider stakeholders in World Summit on the Information Society input processes, demonstrating practical approaches to global engagement.


## Technology Company Perspectives and Operational Challenges


Alex Walden outlined the complex technical and operational challenges that technology companies face in balancing harm prevention with human rights protection. She emphasised the particular difficulty of respecting freedom of expression, privacy, and non-discrimination whilst preventing online harms at the scale and speed required by modern digital platforms.


Walden highlighted the challenge of content moderation, which increasingly requires AI assistance whilst maintaining human oversight for context-sensitive content. She stressed the importance of regulatory safe harbours that enable effective content moderation and policy iteration, noting that the complex regulatory environment across different jurisdictions creates significant operational challenges.


The Google representative described regional stakeholder meetings designed to ensure that feedback reaches both policy-drafting and product-building teams within companies. She specifically mentioned the Rights and Risk Forum held in Brussels the previous month as an example of creating transparent conversations between stakeholders using concrete regulatory artefacts, demonstrating practical approaches to multi-stakeholder engagement around specific policy challenges.


## European Union Regulatory Approaches and Global Diplomacy


Esteve Sanz from the European Commission provided insights into the EU’s comprehensive approach to digital rights protection through both legislative frameworks and international diplomacy. He described the EU’s focus on securing global agreements such as the Global Digital Compact and the Declaration for the Future of the Internet to commit states to respect digital rights internationally.


However, Sanz also presented a sobering assessment of current trends, noting that “we are in a new stage where the Internet is not only controlled, but it’s used for control, and what we see is a very depressing trajectory.” He identified a concerning gap between diplomatic achievements in securing commitments from powerful global actors and the reality of increasing digital repression on the ground.


Sanz mentioned an April conference on “governance of Web 4.0” that resulted in important principles, and described the EU’s public diplomacy efforts, including calling out internet shutdowns and funding projects like protectdefenders.eu to provide urgent support for human rights defenders.


The European Commission representative highlighted the Digital Services Act as a model for balancing human rights considerations with regulatory requirements, specifically noting its application to Very Large Online Platforms and Very Large Online Search Engines. He emphasised the upcoming WSIS Plus 20 review as a critical opportunity, describing it as a “fork in the road” for determining future internet governance directions.


## Accountability Mechanisms and Transparency


The discussion of accountability mechanisms was significantly shaped by a question from Alejandro of Access Now, who asked about accountability mechanisms for partnerships, especially when working in the Global South where it’s easy for Global North actors to disengage. This question forced panellists to move beyond aspirational statements to concrete mechanisms for ensuring sustained commitment.


The panellists identified several approaches to accountability, though they acknowledged that current mechanisms remain insufficient. Pielemeier described GNI’s independent assessment process for companies, which includes detailed reviews of internal systems and policies, whilst noting that similar accountability mechanisms for state actors remain limited to “naming and shaming” through bodies like the UN Office of the High Commissioner for Human Rights.


Barber emphasised civil society’s watchdog role in bringing issues to light through transparency and ongoing iterative processes, arguing that transparency is fundamental to meaningful accountability. Walden pointed to the Digital Services Act as a beginning model for public risk assessments that provide accountability in regulatory settings, though she noted that the effectiveness of these new tools remains to be evaluated.


The discussion revealed consensus that transparency is fundamental to accountability across all sectors, whether through public reporting, independent assessments, or open dialogue. However, the panellists acknowledged that more tangible legal processes and structural supports are needed to ensure accountability, particularly in international partnerships involving Global South organisations.


## Global Engagement Challenges and Resource Distribution


The discussion highlighted significant challenges in ensuring truly global engagement in digital rights protection, particularly in meaningfully including voices from the Global South. Barber described the coordination challenges facing under-resourced organisations, whilst Pielemeier outlined the intentional work required to build global membership and ensure diverse perspectives in governance processes.


The panellists acknowledged that the proliferation of forums and processes, whilst potentially offering more opportunities for engagement, can create overwhelming burdens for under-resourced organisations. This creates a paradox where efforts to increase inclusivity may inadvertently exclude those with the greatest resource constraints.


The discussion revealed ongoing questions about how to scale multi-stakeholder approaches and ensure they reach beyond well-resourced organisations based in major policy centres, with no clear solutions emerging for addressing these structural challenges.


## Future Directions and Critical Junctures


The panellists identified the WSIS Plus 20 review as a critical juncture for determining future directions in internet governance, with the potential to either build on multi-stakeholder human rights values or move in directions that could further marginalise civil society voices. This process represents both an opportunity and a risk for the digital rights community.


Several concrete initiatives were highlighted as ongoing work, including the Global Digital Rights Coalition’s coordination efforts, continued Rights and Risk Forums for discussing regulatory implementation, and various EU diplomatic initiatives. However, the panellists acknowledged that these efforts, whilst valuable, remain insufficient to address the scale of challenges facing digital rights protection globally.


The discussion revealed particular concern about maintaining the internet’s role as a space for freedom whilst addressing legitimate security and innovation concerns. This balance requires continued collaboration across sectors, but the structural challenges facing such collaboration—including funding constraints, capacity limitations, and power imbalances—remain largely unresolved.


## Conclusion


This discussion revealed both the complexity of challenges facing digital rights protection and the potential for meaningful collaboration across sectors when properly structured and resourced. The panellists demonstrated strong consensus on core principles, including the importance of transparency, the need for genuine rather than tokenistic multi-stakeholder engagement, and the concerning gap between international commitments and actual implementation of digital rights protections.


However, the conversation also highlighted significant structural challenges that threaten the sustainability of current approaches, particularly the funding crisis facing civil society organisations and the erosion of inclusive governance mechanisms. The panellists’ emphasis on moving beyond symbolic engagement to genuine power-sharing arrangements reflects a mature understanding of what effective collaboration requires, even as they acknowledged the difficulty of achieving such arrangements in practice.


The discussion balanced realism about current challenges with constructive approaches for moving forward, rejecting the false dichotomy between human rights and other priorities in favour of integrated approaches that treat these concerns as mutually reinforcing. This framework offers promise for future collaboration, though significant work remains to translate it into sustainable practice that addresses the structural inequalities and resource constraints that currently limit effective global engagement in digital rights protection.


Session transcript

Peggy Hicks: Thanks everybody. Please take seats. I’m hoping that you can all hear me through your microphones or through the headsets. It’s all good. Wonderful. You’ll see we’re missing one of our panelists, but we decided to start off because we want to have as much time as possible for you all to hear from our wonderful panelists today and then also to have a chance to open up for your questions and comments as well. This event is focusing on digital rights and partnerships, strategies for impact, and we’re really looking today to have a really open conversation about the intersection of online experiences and fundamental human rights. We want to highlight the challenges that are faced by civil society, tech companies, and enforcement agencies in protecting these rights within what we all know is a complex and borderless online environment. We need to recognize, of course, that each of us come to this issue from a different place, that states are the ones that have the legal obligations to take action, that companies have a duty to respect human rights under the UN guiding principles on business and human rights, and civil society, of course, is there to keep everybody else honest on both of those obligations, one hopes. So we are going to have a chance to talk a little bit about some of the collaborative projects that are going on in this place, some of the good practices that are happening, and obviously the idea today is to really foster more cross-sector collaboration to strengthen human rights protection and online environments so that we all have a better sense of what we’ve learned, what we’re currently doing, and what we can do more. My name’s Peggy Hicks. I work with the Office of the High Commissioner for Human Rights in Geneva, and we, too, have been working in this space and trying to figure out what we can contribute. We had a recent event, for example, in Brazil, working with the judiciary on social media regulation. 
We’ve done a study within the MENA region, focusing on the experiences. And in particular, the idea within that document is that we’re looking for a smart mix of mandatory measures and policy incentives that states can put in place, which means that they’ll meet not only their obligations to respect human rights, but that they are doing what they need to regulate the space so that companies are also contributing to a more human rights protective environment. We have a project that we’ll probably hear a little bit about during this session, which we call the BTEC project, that encourages cross-sector, multi-stakeholder engagement, and it’s focused on really trying to work with companies to answer some of the tough questions we see, including around AI and content moderation. And it remains a challenge to figure out how to do this, especially since we’re working with some of the largest companies. One of the big questions is how we make this experience more global, how we engage more with small and medium enterprises. And right now, for example, we have a track that’s focusing on how we deal with investors within the tech space as well. We have found, though, through our discussion with the companies, that the work together with them actually has strengthened the way that they work amongst each other, but also that we learn, and have a privileged position through what we learn, to be able to bring some of what’s happening within the companies to a more general audience, for which we’re very grateful. We’re also, of course, working with the international institutions in this space, including the UN Human Rights Council and some of the things that come out of that body, and I’ll do a shout out now to our host, Norway, for a resolution that they passed recently, which is important and, for example, calls on us to assess the risks faced by human rights defenders through digital technology and do work on that issue.
So we’re working across these different platforms with our trusted partners to try to have this type of conversation that we’re having today, and we’re looking forward to doing it in more depth with you. And in order to do that, I’m super privileged to have with us a wonderful panel. I guarantee that the panel will not be composed only of men once Alex Walden from Google arrives, so if anybody’s taking screenshots, hold up, we’ve got Alex coming soon, so we’ll be a bit more balanced. But today we’re very fortunate to have with us, and I’ll present to you briefly now: Jason Pielemeier, who we’ve worked with very closely, the Executive Director of the Global Network Initiative; Ian Barber, to my right, the Legal and Advocacy Lead at Global Partners Digital; and Esteve Sanz, the Head of Sector for Internet Governance and Multi-Stakeholder Dialogue at the European Commission. So you’ll see they come from very different perspectives as well, I think, so it’ll be really great to have their different contributions. Alex Walden, who will join us as I mentioned, is the Global Policy Lead for Human Rights and Freedom of Expression at Google. So we’re going to jump right into the conversation, and I’m going to turn to you first, Ian, and ask you: from the perspective of civil society, what specific challenges do civil society organizations face in advocating for and protecting online human rights when confronted with these pressing issues? Please.


Ian Barber: Sure thanks Peggy. Good morning I hope you guys can all hear me okay. To answer your question there are a number of challenges that civil society is facing right now especially in the past few years it seems are advocating or pursuing for things along the lines of national security or kind of an economic impact, kind of looking for impact investment. So it’s reflecting kind of a narrative crisis I think I believe for the human rights based approach that needs a kind of a bit of a rethink at this point for us. And this has kind of impacts not just on civil society in the global north but particularly civil society in the global majority which are already you know less well resourced and able to make an impact so I think that’s critical to acknowledge. And this leads to I think some serious capacity issues so of course with lack of funding there’s less of an ability for civil society across the globe to be able to make an impact. We’re seeing this is resulting in layoffs you know burnout and also not having the expertise then to be able to come into these forums and spaces and be able to effectively advocate. We know that there’s been a proliferation of forums and processes in the past few years. It’s quite difficult to keep up with even kind of the standard ones we’ve had around for a while. One’s based in Geneva, UPR focused, one’s treaty based but now you know the UN Cybercrime Convention, the AHC, we have WSIS, we have AI governance efforts at this point. So keeping on top of all these things to be able to have them well resourced with your team is quite difficult and I think those are kind of the central things we’re seeing. And then another big one is also this general erosion I think or challenging of this multi-stakeholder approach to governance or policymaking. So whether it’s at the national level or the regional level or the global level. 
CSOs are often not able to meaningfully engage and be a part of the decision-making process, or to provide input, and there’s kind of a lack, or a closing, of mechanisms that are inclusive and transparent for civil society to be able to engage. And this is problematic, because we’re then seeing these state-led processes, or an increasing tendency toward state-led processes, that don’t include the expertise and the advocacy points of civil society, including those that are most impacted, those that are on the ground and have the knowledge that’s needed to make effective decisions and frameworks. So I think that’s kind of a high-level point. I could go on for a long time, but I think I’ll stop there.


Peggy Hicks: Great. No, I think you’ve hit on many of the points that we’re going to dive into deeper during the conversation and you know, I want to say just on that last point you made, this idea that when civil society isn’t able to put their input, I really want to emphasize that that’s not just a disadvantage to civil society who wants to have their voice heard, but to the process itself and it and it itself is weakened by the lack of the expertise that civil society, a real experience that civil society can bring in. You’ve hit on some of the things that I think everybody is going to want to come back to eventually as well on the the main challenges that we see in the space, which unfortunately are shared I’m sure by all of us on the panel and many of you in the audience as well. But I’ll turn now to Jason and obviously for those that don’t know the Global Network Initiative, although I think most people at IGF do, it represents a unique coalition of civil society, academic, investor, and private stake sector stakeholders. And we’d like to hear more, Jason, about how GNI ensures that diverse perspectives and priorities from all these members are effectively integrated into your strategies for online and human rights protection and maybe give us a concrete example of successful collaborative efforts that you’ve engaged in. Thanks. Yeah, and welcome to Alex who’s joining us. Already introduced you, Alex, so you’re you’re with us.


Jason Pielemeier: Thanks so much, Peggy. It’s a pleasure to be here, I’m really glad to be a part of this panel, to be here in Norway, to be back at the IGF. So, hi to everyone in the audience, both in person and virtually. So, I appreciate the opportunity to share a bit more about the Global Network Initiative, GNI, and how we work, and how we try to create space for and amplify the voices in particular of a really diverse range of stakeholders. As people may know, GNI is a multi-stakeholder organization, so our membership falls into four categories. We call them constituencies, so we have academic members, we have companies, including Google, we have civil society organizations, including Global Partners Digital, and we have investors as members. So, it’s a very big tent, but it wasn’t always that way. When GNI started about 17 years ago, it was a relatively small set of mostly North American and some European organizations. But today, we have over 100 members from every populated continent, and we’ve really made some significant strides to put the global in Global Network Initiative. And that’s been very intentional. We’ve worked really hard over the last decade to reach out to organizations of all types in all kinds of different regions to be very conscious of the issues that we focus on, the spaces that we curate, the events that we attend, in order to really demonstrate our desire to be a part of a truly global conversation and to bring a diverse range of voices into those conversations. So, it hasn’t been straightforward or necessarily easy. 
to to grow the network the way we have but we’ve been we think quite successful and really appreciate the sort of range of intelligence and viewpoints and experiences that new members have brought into GNI and so that’s you know that’s really part of what we are about is trying to you know build this this space this trusted coalition of organizations that can come together and address difficult challenges in the in the tech governance realm and we we bring work bring our members together in various ways we we do learning sessions we we have a bespoke accountability process for our companies and we’ve made efforts to expand the opportunities for members from across the world to participate in those assessments that we conduct we also try and go out into the world attend other events like the IGF but also regional forums like the forum up for internet freedom in Africa the digital rights and inclusion forum regional IGFs all over the world and hold sessions with our with our members and with other stakeholders and partners in those settings as well in terms of an example I mean I think I guess you know one example of how we’ve grown the network in a way that I think hopefully is having impacts in jurisdictions outside of North America and Europe is the work that we did to bring MTN the South African telecommunications company into GNI MTN has been on a journey for several years now and I think it has worked with a range of actors including I think the BTEC project to understand their responsibilities under the UN guiding principles and other frameworks and to really build out their own approach to human rights. So they’ve developed a really robust human rights statement. They joined GNI in 2022. Their transparency report has gotten much deeper and much more detailed. I encourage folks to take a look at that as an example of a really good technology company transparency report. And they are now going through their first GNI assessment. 
And that has created a lot of opportunity for them to kind of look inward at their systems and policies and understand better the risks related to their business operations, the jurisdictions that they’re operating in, and to get important feedback from a wide range of stakeholders through GNI. So I’ll stop there. I’m happy to talk more about any of that as we go through the rest of the panel.


Peggy Hicks: Great. Thanks, Jason. And it’s really good to hear about the growth and the way that you’ve been able to do it. I think Len Manriquez had already raised the difficulty and sometimes there’s a commitment to a multi-stakeholder approach, but actually bringing everybody into the room is one of the challenges and doing it in a meaningful way. So your experience in doing that is really good to hear about. I think we’ll need to come back a bit more on some of the challenges, including in terms of some of the disincentives for companies to do it. But we’re going to actually turn to Alex now who’s got very direct experience with, you know, these challenges that companies face in navigating the space. So Alex, if we could hear from you a bit about the significant technical or operational challenges that Google faces in mitigating online harms while simultaneously respecting freedom of expression, including in response to national context and government requests. And after that, you get two questions. The second is how you’re also working to to incorporate feedback from civil society organizations and human rights experts into your policies and practices. Thanks, Alex.


Alex Walden: Thank you. Thanks for the question. And thanks for bearing with me on my travel from Oslo to Lillstrom. It’s a good question and I appreciate the framing because really the challenge is about how do you do, how do you prevent online harms while you are respecting human rights, in particular freedom of expression, privacy, and non-discrimination. So just to censor is not what’s difficult. What’s difficult is to ensure that you’re respecting rights while you are trying to take a tailored approach to removing content that is harmful. So in particular, the two things I want to flag are one, sort of the speed and scale. That is sort of a policy challenge and it’s also obviously a very kind of operational challenge. The amount of content that we have being uploaded to our products every day means that the volume is high and we need to figure out ways to address that at scale. And so we’re using, obviously there are human moderators that participate in that process, especially for content where that requires human kind of context to understand. But we use AI and we’re increasingly using AI to help us do that faster. So again, scale is always, and you’ll hear all the companies say this, scale is a challenge. And so figuring out how to address that scale in a responsible way remains an ongoing challenge that we are always sort of iterating on how to do better. The other piece is really the complex regulatory environment, which means that a few things. One, we need safe harbors in order to do this work effectively to make sure that we are able to implement content moderation practices that are effective and kind of iterate on our policies. And so one is safe harbors and ensuring that we have regulatory. work? In terms of how we engage stakeholders and take feedback, there’s a few things I’d say. 
On the largest scale, it’s important for companies to show up to venues where our stakeholders are, so that we can participate in conversations with them and make sure that we’re hearing from them in that context. So things like IGF, venues like RightsCon: showing up there and being part of the conversation and hearing what the concerns are from stakeholders, that sort of presence is an important thing for us at those large venues. Then it’s about being part of organizations where sort of more curated versions of that conversation are taking place. So being a member of GNI and engaging in GNI is a really important way in which we do that as Google, a place where we have core stakeholders that are talking about these issues and the trade-offs all the time. And then specifically, just as Google, as an individual company, we have programs in place, part of the human rights program, along with our trust and safety colleagues, ensuring that we are doing regional stakeholder meetings and stakeholder meetings with our sort of global colleagues as well, to make sure that we’re hearing directly from experts in the field about what’s happening in their region, what their experience is with our products, how things are working or are not working, and ensuring that that feedback is going directly to the teams that are drafting our policies, enforcing our policies, and building our products.


Peggy Hicks: Great, thanks very much Alex. I mean I think this area of stakeholder engagement and what works and what doesn’t is one of those that we have to keep iterating and improving on. We did a BTEC paper on this that people might want to refer to with sort of the five key principles but one of the things we found talking to all of you is that there are good practices and there are ways to improve and that I think there’s still a lot of work to be done. But we need to move over and I’m really glad to have with us another perspective coming from the European Commission, SFS. We’d really like to hear about sort of how does international cooperation play in the European Commission strategy to protect online human rights especially with countries and regions outside the EU and how that might contribute to the WSIS plus 20 process and how you’re looking at the EU’s role in this important space. Thanks.


Esteve Sanz: Thank you so much Peggy. I am very glad to be in this panel, the European voice in the panel. We, digital and human rights are an absolute priority for the EU. We’ve been working on it for a long long time. We have focused especially on getting agreements at the global level including the global digital compact, the declaration for the future of the internet etc that really commit states and critical actors to respect the digital human rights, not censor the internet, not doing internet shutdowns etc. a very important achievement that we did in the Global Digital Compact that commits states in the UN not to shut down the Internet. At the same time there is a gap here. We have done a lot of analysis also, engaged academics and civil society to help us understand what’s going on in the ground when it comes to states using the Internet for control. I think that we are in a new stage where the Internet is not only controlled, but it’s used for control, and what we see is a very depressing trajectory. So there is this gap that is very puzzling between the diplomatic achievements that we have managed to do in committing global actors, very powerful global actors, to respect fundamental freedoms online and what’s going on in reality. So this is very damaging, this is a diagnostic that we have on the table. We have engaged in several funding exercises, we have the Global Initiative for the Future of the Internet that has a project that we call Internet Accountability Compass that will help us precisely analyze this gap, what we are committing into and what’s really going on in terms of digital repression. This is extremely important for us. Every time that we engage on human rights and digital dialogues with countries, we bring up digital depression, that’s very important for us. When there is a big event, an Internet shutdown, we engage in public diplomacy as well, in Iran, in Jordan, so we have callouts for Internet shutdowns there. 
And there is a lot of investment as well. So we have, for example, projects like protectdefenders.eu, which provides funding in case of urgent need for journalists and other civil society actors. We work a lot with you, Peggy, so we have a lot of funding and projects in common, one on Internet shutdowns, several funding projects that really aim at empowering OHCHR to play a critical role in this field. And so yes, this is all going on. We are very much aware of the funding situation. There are a lot of internal discussions within the EU on how we can step up our role in that area, because we feel that it’s going to be really dramatic if we don’t act soon. Of course, discussions related to funding are always extremely delicate in any public administration, and it’s not easy, but I can assure you that we have achieved some successes already, and more funding will be flowing. Whether the EU can cover all the funding that is being withdrawn from those organizations is of course an open question, but it has really sent a signal that the EU should step up, I would say. On the WSIS plus 20 review, this is very important. It links with what I was explaining at the beginning: we have actually achieved a lot when it comes to UN discussions about states committing to defend digital rights, etc., but then what we see on the ground is a bit puzzling. The WSIS plus 20 review will double down on those efforts. So what EU member states have discussed, and this is how we will go into the negotiations of the outcome document, is to really take stock of the rise of digital authoritarianism. This has been presented by our ambassador to the UN already: acknowledging that digital authoritarianism is on the rise, and that this has to be acknowledged, and then, based on that, proposing what we hope will be unprecedented language at the UN level in the WSIS plus 20 resolution on digital human rights. This language is
still the object of internal discussions. We will probably publish a non-paper with that language, which, again, we hope goes beyond what has been part of any UN resolution so far, because the challenges are so high that we need to move up. Part of that language will for sure go much more concretely into statements that protect journalists, civil society, etc., from digital repression. That’s our aim; it’s a public aim. It’s very ambitious, very difficult to achieve and pull off, but we of course count on like-minded partners and stakeholders, who will need to participate very intensively in the WSIS plus 20 resolution to do that. We think that the context is really the right one, so that we can achieve at least that. But again, the reality might be different from whatever the outcome document of WSIS declares, so it’s important to bear in mind that gap.


Peggy Hicks: Oh thank you Esteve. It’s really interesting to hear your comments about the disconnect between where we get to in terms of international commitments and what we see in the world, and I think we feel that on the financial side as well, where the demand for action, for work in this area just grows exponentially, but we are facing some of the challenges that you mentioned. I want to look back just quickly to Len Manriquez, and then I’m going to have a question for everybody, and then we’re coming directly to you all quickly. Len Manriquez, I wanted to ask you, you know, when you look at collaboration from a civil society perspective, what is civil society looking for? What does it need from governments, tech companies, and other stakeholders in order to advance human rights protection? Where have you seen good collaboration happening?


Ian Barber: Great. Well, I just want to say that there’s a lot of great collaboration already on the table from these individuals and from their remarks, so I just want to acknowledge that. But also, I think at the end of the day, the most impactful forms are going to be those that truly shift power and resources back to civil society and allow them to engage. So from governance, we’ve already alluded to this, it’s ensuring that those policy processes, whether it’s national, regional, or global, that they’re actually inclusive, they’re bringing in those voices, that there’s input that’s received, there’s acknowledgement, and there’s a feedback loop as well. So that’s key. I think also funding we’ve hit on already is a key metric, and also kind of recommitting to human rights obligations themselves, of course, when things do happen. from companies I think that you know they can really operationalize their their commitments through transparency and access that can come in a variety of forms and come to access to data it can be on their impact assessments could be enforcement practices also can be this kind of iterative multi-stakeholder engagement with you know groups that are in different regions that are more at risk those are going to be key as well and I think they kind of lead to this co-design and co-development of policy and governance and frameworks that we want to see and I think for more multi-stakeholder coalitions like GNI and again these things are already very much being done is there’s there’s definitely a collaboration deficit that I’m seeing so there’s a recognition that we have challenges but really there’s not always structural support then to address them so what you need to do is then champion equity in partnerships as Jason was alluded to it’s bringing in voices from the global majority of the Global South and civil society as co-leaders better than engaging advocacy setting not just you know kind of tokenism it’s facilitating access to 
knowledge and sharing it, so that engagement can be effective and realized. And there’s also a need, in doing this, to kind of bridge a gap of trust, I think, among stakeholders across different areas, because without it we’re going to have a situation where the structures don’t support everyone, there’s no real effective impact, and it’s again this symbolic means of doing things. So I think that’s kind of a cross-cutting response there, but yeah.


Peggy Hicks: That’s great and I think it’s really important to make that point that it’s it’s got to be intentional you have to put the resources and effort into it if you’re going to really make things work in a more global way like like Jason talked about with with GNI so before I turn to the audience I do want to ask one sort of lightning round question of all of you because you started off the end by noting that we’re sort of we’re navigating this human rights field in the midst of of two really oppressive almost pressures from both the the securitization side where all that matters is is you know the cyber crime convention as we showed, you know, looking and David Kaye was just talking about how we make exceptions for anything that may be, you know, relevant from the national security side. And then I think even more prevalent now is this rationale around the competition, innovation, economic side where anything that stands in the way and human rights are sometimes seen as obstacles or barriers to come over means that companies and other stakeholders, including governments seem somewhat less invested in answering some of the questions we’re asking today than they have been for me at least at prior IGFs. So I wondered how you’re looking at that and when you get that type of pressure that, you know, why should we focus on doing it the multi-stakeholder way and bringing in civil society and why does it matter to make sure that we’re building in human rights within the digital tech work that we do given that we have these competing tensions around national security and the need for greater competition and and effective innovation. You know, give me, you know, your 30-second answer to that that you use, which I’m sure comes up quite frequently in everybody’s lines of work. So maybe just to go this way, start with you, Alex.


Alex Walden: Never comes up for me. No, no, never. I think, you know, you just hit on these things that are sort of part of my internal and external conversations every day. From my perspective and what I say to my colleagues inside the company and my stakeholders outside is we have to be able to focus on, we have to figure out how to and to focus on all these things at the same time. In order to achieve national security interests, in order to focus on ongoing innovation and have competition in the market, we have to ensure that human rights is integrated across those conversations and remains a priority. States have a duty to uphold their obligations to human rights and so it is imperative that they in those conversations about regulation, about how they use AI as part of their public sector, ensure that they’re upholding sort of that obligation. And companies also have a duty to do that too. But I think sort of there’s a role for everyone and it is imperative that governments do it first in order to sort of set the stage for all of the other actors to be able to show up and do their part. Companies are providing technology to governments for national security purposes. And we need to know that governments are thinking about their human rights obligations in the context of when they’re procuring that. So I think there’s a lot of good guidance out there. BTEC has done some of it already in thinking about procurement and how companies should be thinking about their human rights obligations. But really like we have to do all of them at the same time.


Peggy Hicks: Great, thanks Alex. Ian?


Ian Barber: Yeah, I think just building on what Alex said for me when I’m speaking to governments or any other stakeholder, I kind of challenge them to say that, actually I don’t think human rights approaches and outcomes and security or whatnot are even potentially even opposing things. They can be very much mutually reinforcing concepts and they can support one another. So to kind of fold them in is kind of a creative way sometimes to Trojan horse to get kind of this funding, which is essential. And I think that really what it comes down to is then as well as that you do need as a final point, civil society in the room to bring that expertise, to bring the knowledge and the know-how to be able to arrive at these solutions. So it’s kind of challenging and rejigging the narrative and then also ensuring that those people are at the table.


Peggy Hicks: Great, thanks. Jason?


Jason Pielemeier: Yeah, I mean, I guess two things. One, taking a step back, I just had sort of an interesting kind of mental moment because when you were talking, you said digital repression and I heard digital depression. And I think that’s because of the comments that we heard initially from Ian and just generally how a lot of us are feeling these days, which I want to acknowledge is real. So we’re dealing with both digital repression and digital depression. But I think it’s really important to remind ourselves. and the Internet is still an incredibly vibrant and critical space, especially when you compare it to offline mediums for free expression and freedom of association and assembly. And that’s something we sometimes forget. We can look at the annual Freedom in the Net reports, which are excellent, and see this trend towards declining freedom. And it’s real. And we have to acknowledge it. But if you compare offline and online realities for people in even and maybe especially the most repressed places on earth, there’s a real reason why they cling to the social media spaces, the open Internet that they are able to access, whether it’s finding cracks through the repressive laws in their country or using anti-censorship technologies to get access to the open Internet. And we don’t have to look far and just look at the example of Iran today to see that reality. So I want to just kind of infuse that sort of optimism or hope that, you know, there is still something worth fighting for. There’s a reason why it’s important to have these important statements from governments, even if they’re not always living up to them in practice. There’s a reason why we continue to get together in these multistakeholder settings to talk about what we can do, even if it’s easier sometimes to sort of give in to cynicism and digital depression. So not an answer to your question, but something that I feel like we needed to kind of just remind ourselves of.


Peggy Hicks: Very helpful. Esteve.


Esteve Sanz: Every time that there is a legislation that deals with digital in the EU, we strive, of course, to find the right balance between security, between all these elements. The legislative process in the EU, it’s a complex one. The parliament is involved, civil society, there are a lot of consultations, a council, the commission, there is a proposal. So it’s a very complex, almost miraculous way of doing legislation. that yields something like the Digital Services Act which is perhaps the cornerstone of our digital regulation right now, that as you well know it’s a null of society approach, that’s what we call it. The legislation itself has pieces that are aimed at involving civil society into the process of governance of the platforms themselves, there are transparency provisions, users can complain about takedowns of content etc. So this, the balance that we found in the legislative process when it comes to the Digital Services Act, we think it’s extremely valuable, of course we are pitching it to our partners globally bearing in mind that each region, each country has its own approach, but so far I think that we have managed to find that approach, that is something very important for us in the EU legislative system which is the Charter of Fundamental Rights. So whatever legislation we put on the table, whatever proposal it’s on the table, it needs to comply with the Charter and having that Charter as the ultimate element that frames everything that we do in the EU but especially on digital has been very valuable because in the end it shows us a path towards finding that balance correctly.


Peggy Hicks: Wonderful, thanks so much. So I’m going to jump quickly now to our audience to see if we’ve provoked any thoughts from any of you that you’d like to put on the table, or any questions for our panel here. I’m not exactly sure how the tech here works; it looks like there are microphones alongside, so I think you probably need to go to one of those, if anybody will give me a thumbs up that that’s how we’re supposed to do it. Yes, okay, I see movement, looking forward to hearing the comment of the gentleman... nope, he’s just leaving, bye. Anybody want to come in? Trust me, we can keep the conversation going amongst ourselves, I know these guys, but happy to hear from you. I know it’s a little awkward to have to get out of your chairs. All right, I’ll come back to you all. So I think Jason did something good, which is that this is one of those spaces where it’s important for us to look for good examples and to put ideas on the table that we think are things we want to see replicated. So if you had to just give me an idea of an incentive or something that you think you want to see more of, that you’ve seen, you know, in a particular context in which you’ve worked, give me some good examples that we can leave our audience with today. Alex, can I start with you?


Alex Walden: Yeah, I mean, I think, well, one thing I’ll flag just because maybe it’s top of mind and recent, and because it hits on some of the DSA things too, is that GNI and DTSP, which is another organization that works with companies around risk assessment and harms issues, convened a Risks and Rights Forum in Brussels last month. And that was an opportunity for all of the companies who are members of GNI and DTSP, who are also VLOPs and VLOSEs under the DSA, to come together and have conversations about the assessments that are now public and all the information that’s in them. So we have a lot of actual artifacts that we can discuss, and we can talk about the challenges and what people want to see more of from companies. And I think fora where we have a lot of material that we can talk through, and where we can have really open, transparent conversation between civil society and companies, are really excellent examples of how we can take a piece of regulation in action and talk amongst the stakeholders about what’s working, what’s not, and how we can improve. So that’s just a recent one that I think is really pressing, especially for companies in particular.


Peggy Hicks: Great. No, I think that’s really an important point, Alex. And to me, it also gives rise to something that I often think in this space, which is that evidence base, that idea of going beyond the general conversation to really talk about some specific case studies. Something went wrong; putting what went wrong on the table sometimes and unpacking it and figuring out how to do better is really important. And I know within our work, where we do peer review amongst companies similarly situated, we have some really, really frank and useful conversations that can push things forward. But you can’t do that if you stay at the, you know, 10,000-foot level. Ian?


Ian Barber: Yeah, I think I want to mention the kind of precedent of modalities and process and procedure that we’ve seen. So in the AHC negotiations on the UN Cybercrime Convention, in both a formal and informal way, there’s kind of evidence, even if the final output wasn’t what we would have been looking for, that you can use this kind of existing basis moving forward in other forums. The modalities of the AHC were a bit more open for civil society and others to engage and provide input, have that be taken in, and speak at the UN, which is great. And then also informally, there was a brain trust organization group that was working with companies across the stakeholder lines to kind of advance our central aim. So I think that those two examples have been used in other UN processes and forums to replicate and build in a more multi-stakeholder approach to things, which I think is excellent. And also a shameless plug, which is that GPD is now working on the WSIS Review, coordinating the Global Digital Rights Coalition, working with CSOs in the Global North and Global South, which we’ve seen as good practice, and other stakeholders. So we’ll be doing that moving forward. So another positive note to hopefully end on.


Peggy Hicks: Great, thanks. I’m going to skip over you, Jason. I’m going to go to Esteve, because you already put yours on the table. I’ll give you another chance, though.


Esteve Sanz: In April, we organized a global multi-stakeholder conference on what we call the governance of Web 4.0, which is essentially the impact of AI, quantum, et cetera, on the internet. So the global impact of those very powerful technologies, blockchain and others, on the global internet, not the governance of these technologies themselves. This was a very well attended and very intense conference, and there was a very prominent human rights angle. And what emerged from that is actually a series of principles that were the object of consensus, or rough consensus, among the conference participants, that basically set the ground so that we can continue being optimistic in the context of this future internet, where the stakes are much higher. What you can do with AI in terms of repression is massive. What you can do with AI in terms of freedom of speech and liberation and analysis of bureaucratic processes, so that you empower citizens, et cetera, is also massive. So what we set up after that conference was this series of principles that set the ground as we see the future internet emerging. Because this is not a given; the internet is what we make of it, right? If we want that space to continue to be a tool for self-expression and for freedom and for democracy, et cetera, these are the principles that we think we should follow. This leaves us with a lot of optimism, because it was relatively easy, of course not every stakeholder was at that table, but it was relatively easy to come up with a series of principles that would chart a good path. So this is also impacting our position in the WSIS Plus 20 negotiations. We will bring up these higher stakes when it comes to these very powerful technologies impacting the internet: if we don’t set things right, then things can go massively wrong very easily. And we hope that this is acknowledged in the UN context as well.


Peggy Hicks: Great. Back to you, Jason.


Jason Pielemeier: Yeah, maybe just mention one other collaboration across this table. The Rights and Risk Forum and the work we’re doing on the Digital Services Act, and also trying to think about how we continue to ensure that not just the risk assessments under the DSA, but those under the Online Safety Act and other digital regulations, remain consistent with the UN Guiding Principles and broader international human rights frameworks. But also we’ve been working recently with GPD to empower civil society voices from the global majority to be more engaged in the WSIS process, precisely so that we can support the kinds of initiatives that it sounds like the EU is eager to put forward and make sure that these are not just seen as sort of Western approaches that don’t resonate and have support across the world. So just today, I think we’ll be publishing a series of reports from the partners in nine different countries. We’ve done workshops at lightning pace over the last two months around the world with civil society actors in these different countries to help inform a wider audience and involve a wider group of stakeholders in the input processes to WSIS. Obviously that work will continue over the next several months until the end of this year, when the WSIS process concludes. But I think it’s really important to emphasize WSIS, being here at the IGF, as such a critical moment for this community, given that all of these new technologies are creating opportunities for governance to go in different directions. That direction could learn from, build on, and incorporate the sort of multi-stakeholder, human rights based values that we have successfully collectively pioneered as a community, or it could go in a different direction. And so it’s really a fork in the road.
Not a phrase that I like to use anymore, given the way it’s been misappropriated, but I think it’s a critical time for us to be here together at the IGF, and I really appreciate all of the panelists here speaking about how we can continue to work towards that WSIS outcome that will reinvigorate the multi-stakeholder approach.


Peggy Hicks: And you jumped ahead again, which I think is really good. It shows we’re on the same track, because of the next thing I wanted to ask, and I don’t see anybody lined up at the mic yet. Maybe somebody back there. Please come over and do it. I’ll throw out my question, too, and I’ll let you all choose. A number of you have focused on the difficulty sometimes of making sure that both the resources and the engagement are happening as effectively outside of Europe and a global north context, and figuring out how more can be done, both to reap the benefits of digital technology and to make sure that the tools and resources needed to have the types of conversations and engagement that we need are available in places without as many resources. So I wanted to get your thoughts on that, but turning to our colleague here first. Please.


Audience: Thank you. Alejandro from Access Now, and I think very related to that comment is: what are the accountability mechanisms for these types of partnerships, especially when you’re working in the global south and it’s very easy for global north actors to disengage when these types of partnerships are happening? In your experiences, what are those accountability mechanisms that we can create?


Peggy Hicks: Great question. Thank you very much. So maybe, Jason, do you want to start on that one, since you’re doing quite a bit in this area?


Jason Pielemeier: Sure. So I think accountability can take a lot of different forms, to Alejandro’s question. In GNI, for instance, we have an accountability mechanism that is built in to hold companies to the commitments that they make. And that’s a process that involves very detailed review of internal company systems and policies by independent assessors. And as I mentioned at the beginning, we’ve been working really hard to build more opportunities for a wider group of GNI members to be a part of those conversations. At the sort of multilateral level, the question of accountability has always been a somewhat vexing one. The Office of the High Commissioner for Human Rights plays a very important role in calling out where states fall short of their commitments. But more tangible legal processes are lacking in many contexts. We do have, obviously, committees related to different treaty bodies that can produce reviews. We have the universal periodic review. We have the special mandates. So it’s not a barren field, but it is also one that could still be sowed with, I think, more seeds. I don’t know. I’ll stop trying to torture that analogy. And I think for some of these other spaces, whether it’s the IGF itself as a venue for collaboration or the WSIS process, the Global Digital Compact, yeah, it’s an open question, right? How do we ensure that not just the states that are producing the final text, but the other stakeholders who are committing themselves and involving themselves in those processes, continue to carry them out? Part of it involves being at places like the IGF, where we can continue to stand on stages and have to answer to audiences about what we’ve done since we’ve made these commitments. Part of it involves, I think, funding and being able to have support for watchdogs like Access Now and others in civil society.
So it’s going to take a lot of different tools, but I think at least in this space, you know, we have forums and venues like this, which we sometimes take for granted, but I think we need to double down and reinvest in.


Peggy Hicks: Great. Esteve, do you want to say a few words on the accountability side?


Esteve Sanz: If you don’t get journalists, civil society activists, etc. to call out those abuses, it’s going to get very difficult at the global level to trace that. Because, again, we are having this legitimacy gap between what is written, the safeguards, etc., and what we see in practice. And there is a fundamental problem of complexity and transparency: either you engage the multi-stakeholder community to tackle that, or we will simply not know.


Peggy Hicks: And I think that’s a lead-in for you, Ian, to both look at the accountability question from a civil society side and the role that it plays.


Ian Barber: Yeah, I mean, civil society can play a key role, as was noted, serving as kind of a watchdog or an observer, even. And one that can then bring the issues or problems to light to the broader community is, I think, a central component and one that’s kind of overlooked, in a way. And I think that when you’re speaking about accountability in general, a lot of this comes down to transparency, openness, and decision-making in the processes and in what’s being done moving forward. And this should not just be a one-off event, as has been alluded to. It should be done in an iterative and ongoing way and in different manners. So I’ll keep it short and sweet.


Peggy Hicks: And, Alex, on your side, the company side?


Alex Walden: Yeah, I mean, I think, at least for GNI companies, Jason hit on a key piece for us, which is the independent assessment that we have as members of GNI. And so that’s a key way in which we are looking to ensure that we have accountability for our commitment to principles, the GNI principles in particular. And then, obviously, being transparent about our commitment to the UNGPs and how that manifests across our products. That looks like qualitative transparency about what our policies are, and quantitative transparency about how we’re implementing them, enforcement measures, et cetera. And that’s not just for the global majority; that’s the entire world and how we’re enforcing that. Obviously, we have the Digital Services Act in Europe. And so that is a sort of first entrée into what a risk assessment report that becomes public can look like. And so I think we’re all learning about what the value of something like that is for the purposes of accountability in a regulatory setting as well.


Peggy Hicks: Yeah, no, I think that’s a really good point. And thanks so much for the question, because I think it’s one where we really are learning now. And I think that’s an important thing to say: how useful are some of these tools going to be? Do they provide the value that we need? I think Ian’s point about the transparency piece is absolutely crucial, that without transparency, we don’t get to accountability very easily. But I’m sure there’s more we can do, and I’m sure Access Now will help us to figure it out. So thanks so much for the comment. And I’m getting the signal that we’re going to have to draw the session to a close. In doing so, I really want to thank those that are responsible for organizing it, which was not my office, but Christina Herrera from Google and Erlinson from ADAPT, who brought us all together today. We’re very glad to have had a chance to talk through these issues with you. I hope you come away from it with some good ideas on potential collaboration, comments that you want to follow up on in the course of IGF going forward. And obviously, feel free to reach out to any of the panelists to get more information on some of those good practices that we’ve discussed. Thanks so much for joining us today.


I

Ian Barber

Speech speed

210 words per minute

Speech length

1384 words

Speech time

393 seconds

Narrative crisis with funding shifting toward national security and economic impact rather than human rights approaches

Explanation

Civil society organizations are facing a fundamental challenge where funding priorities are moving away from human rights-based approaches toward national security and economic impact considerations. This shift represents a crisis in how human rights work is valued and supported, requiring a rethink of advocacy strategies.


Evidence

This particularly impacts civil society in the global majority which are already less well resourced and able to make an impact


Major discussion point

Challenges Facing Civil Society in Digital Rights Advocacy


Topics

Human rights | Development


Disagreed with

– Esteve Sanz

Disagreed on

Approach to addressing funding crisis in civil society


Capacity issues due to lack of funding leading to layoffs, burnout, and insufficient expertise to participate effectively in forums

Explanation

The funding crisis has direct operational consequences for civil society organizations, resulting in reduced staff, exhausted workers, and inadequate technical expertise. This creates a vicious cycle where organizations cannot effectively participate in important policy forums and advocacy spaces.


Evidence

With lack of funding there’s less of an ability for civil society across the globe to be able to make an impact, resulting in layoffs, burnout and not having the expertise to come into these forums and spaces


Major discussion point

Challenges Facing Civil Society in Digital Rights Advocacy


Topics

Human rights | Development


Erosion of multi-stakeholder approach with closing mechanisms for inclusive and transparent civil society engagement

Explanation

There is a concerning trend toward state-led processes that exclude civil society input, undermining the multi-stakeholder governance model. This erosion occurs at national, regional, and global levels, preventing civil society from meaningfully contributing their expertise and advocacy perspectives.


Evidence

CSOs are not able to meaningfully engage and be a part of the decision-making process with a lack or closing of mechanisms that are inclusive and transparent, leading to state-led processes that don’t include the expertise and advocacy points of civil society


Major discussion point

Challenges Facing Civil Society in Digital Rights Advocacy


Topics

Human rights | Legal and regulatory


Agreed with

– Esteve Sanz
– Jason Pielemeier

Agreed on

There is a concerning gap between international commitments on digital rights and actual implementation


Proliferation of forums and processes making it difficult for under-resourced organizations to keep up and participate meaningfully

Explanation

The rapid expansion of policy forums and governance processes creates an overwhelming landscape for civil society organizations to navigate. With limited resources, organizations struggle to maintain effective participation across multiple venues, from traditional UN processes to new AI governance efforts.


Evidence

There’s been a proliferation of forums and processes – Geneva-based, UPR focused, treaty based, UN Cybercrime Convention, AHC, WSIS, AI governance efforts – making it quite difficult to keep up and have them well resourced


Major discussion point

Challenges Facing Civil Society in Digital Rights Advocacy


Topics

Human rights | Legal and regulatory


Disagreed with

– Jason Pielemeier

Disagreed on

Scale of multi-stakeholder engagement challenges


Need for civil society to be co-leaders rather than token participants, with structural support for effective engagement

Explanation

Effective collaboration requires moving beyond symbolic inclusion to genuine partnership where civil society organizations have leadership roles in policy development and governance frameworks. This necessitates structural changes that provide the resources and mechanisms needed for meaningful participation.


Evidence

Champion equity in partnerships, bringing in voices from the global majority as co-leaders rather than tokenism, facilitating access to knowledge and sharing so engagement can be effective and realized


Major discussion point

Multi-Stakeholder Collaboration and Partnership Strategies


Topics

Human rights | Development


Agreed with

– Jason Pielemeier
– Alex Walden

Agreed on

Multi-stakeholder engagement requires intentional effort and resources to be truly global and inclusive


Human rights approaches and security outcomes can be mutually reinforcing rather than opposing concepts

Explanation

Rather than viewing human rights and security as competing priorities, they should be understood as complementary and mutually supportive. This reframing challenges the false dichotomy often presented in policy discussions and provides a strategic approach for advocacy.


Evidence

I challenge them to say that human rights approaches and outcomes and security are not opposing things but can be very much mutually reinforcing concepts that can support one another


Major discussion point

Balancing Competing Pressures in Digital Rights Work


Topics

Human rights | Cybersecurity


Civil society’s watchdog role in bringing issues to light through transparency and ongoing iterative processes

Explanation

Civil society organizations serve a crucial accountability function by monitoring and exposing problems in digital rights protection. This role requires transparency, openness in decision-making processes, and continuous rather than one-off engagement to be effective.


Evidence

Civil society can play a key role serving as a watchdog or observer that can bring issues or problems to light to the broader community, requiring transparency, openness, and decision-making in an iterative and ongoing way


Major discussion point

Accountability Mechanisms and Transparency


Topics

Human rights | Legal and regulatory


Agreed with

– Alex Walden
– Jason Pielemeier
– Peggy Hicks

Agreed on

Transparency is fundamental to accountability in digital rights protection


Coordination of Global Digital Rights Coalition for WSIS Review working with CSOs in Global North and South

Explanation

Global Partners Digital is coordinating a coalition that brings together civil society organizations from both developed and developing regions to participate in the WSIS review process. This represents a concrete example of inclusive global engagement in digital governance.


Evidence

GPD is now working for the WSIS Review, coordinating the Global Digital Rights Coalition, working with CSOs in the Global North and Global South


Major discussion point

Global Engagement and Resource Distribution


Topics

Human rights | Development


J

Jason Pielemeier

Speech speed

153 words per minute

Speech length

1784 words

Speech time

696 seconds

GNI’s intentional growth from North American/European focus to over 100 global members across four constituencies

Explanation

The Global Network Initiative has deliberately expanded from its original limited geographic scope to become a truly global organization with diverse membership. This transformation involved conscious efforts to reach out to organizations worldwide and demonstrate commitment to global dialogue rather than Western-dominated discourse.


Evidence

When GNI started 17 years ago, it was a relatively small set of mostly North American and European organizations. Today, we have over 100 members from every populated continent, working hard over the last decade to reach out to organizations of all types in different regions


Major discussion point

Multi-Stakeholder Collaboration and Partnership Strategies


Topics

Human rights | Development


Agreed with

– Ian Barber
– Alex Walden

Agreed on

Multi-stakeholder engagement requires intentional effort and resources to be truly global and inclusive


Disagreed with

– Ian Barber

Disagreed on

Scale of multi-stakeholder engagement challenges


Success story of MTN’s journey in developing human rights approach through multi-stakeholder engagement and GNI assessment process

Explanation

MTN, a South African telecommunications company, exemplifies how companies can successfully integrate human rights into their operations through multi-stakeholder collaboration. Their progression from initial engagement to developing comprehensive policies demonstrates the value of sustained partnership and accountability mechanisms.


Evidence

MTN developed a robust human rights statement, joined GNI in 2022, their transparency report has gotten much deeper and detailed, and they are now going through their first GNI assessment, creating opportunity to look inward at their systems and get feedback from stakeholders


Major discussion point

Multi-Stakeholder Collaboration and Partnership Strategies


Topics

Human rights | Economic


GNI’s independent assessment process for companies with detailed review of internal systems and policies

Explanation

GNI operates a comprehensive accountability mechanism that involves thorough examination of member companies’ internal human rights systems and policies. This process includes independent assessors and has been expanded to include broader member participation from around the world.


Evidence

We have a bespoke accountability process for our companies involving detailed review of internal company systems and policies with independent assessors, and we’ve made efforts to expand opportunities for members from across the world to participate in those assessments


Major discussion point

Accountability Mechanisms and Transparency


Topics

Human rights | Legal and regulatory


Agreed with

– Ian Barber
– Alex Walden
– Peggy Hicks

Agreed on

Transparency is fundamental to accountability in digital rights protection


Role of OHCHR and treaty bodies in calling out state failures, though more tangible legal processes are needed

Explanation

While existing international human rights mechanisms like the Office of the High Commissioner for Human Rights provide important oversight functions, the current accountability landscape remains insufficient. More concrete legal processes and enforcement mechanisms are needed to address gaps in state compliance with digital rights obligations.


Evidence

The Office of the High Commissioner for Human Rights plays a very important role in calling out where states fall short. We have committees related to treaty bodies, universal periodic review, special mandates, but it’s not a barren field though still could be sowed with more seeds


Major discussion point

Accountability Mechanisms and Transparency


Topics

Human rights | Legal and regulatory


Agreed with

– Esteve Sanz
– Ian Barber

Agreed on

There is a concerning gap between international commitments on digital rights and actual implementation


Internet remains vibrant space for freedom compared to offline mediums, especially in repressed contexts like Iran

Explanation

Despite concerning trends in digital repression, the internet continues to provide crucial spaces for freedom of expression and association that often exceed offline opportunities. This is particularly evident in authoritarian contexts where people rely on social media and circumvention technologies to access information and organize.


Evidence

If you compare offline and online realities for people in even the most repressed places on earth, there’s a real reason why they cling to social media spaces and the open Internet, using anti-censorship technologies. We don’t have to look far – just look at Iran today


Major discussion point

Balancing Competing Pressures in Digital Rights Work


Topics

Human rights | Freedom of expression


Series of workshops in nine countries to involve wider stakeholders in WSIS input processes

Explanation

GNI has conducted rapid-pace workshops across nine countries to expand participation in the WSIS review process beyond traditional Western voices. This initiative aims to ensure that global perspectives, particularly from the Global South, inform international digital governance discussions.


Evidence

We’ve done workshops at lightning pace over the last two months around the world with civil society actors in nine different countries to help inform a wider audience and involve a wider group of stakeholders in the input processes to WSIS, publishing a series of reports from partners


Major discussion point

Global Engagement and Resource Distribution


Topics

Human rights | Development


A

Alex Walden

Speech speed

180 words per minute

Speech length

1257 words

Speech time

418 seconds

Challenge of preventing online harms while respecting human rights, particularly freedom of expression, privacy, and non-discrimination

Explanation

The core operational challenge for tech companies is balancing harm prevention with human rights protection, requiring nuanced approaches rather than simple censorship. This involves developing tailored content moderation that removes genuinely harmful content while preserving fundamental rights to expression, privacy, and equal treatment.


Evidence

Just to censor is not what’s difficult. What’s difficult is to ensure that you’re respecting rights while you are trying to take a tailored approach to removing content that is harmful, specifically freedom of expression, privacy, and non-discrimination


Major discussion point

Technical and Operational Challenges for Tech Companies


Topics

Human rights | Content policy


Speed and scale issues requiring AI assistance for content moderation while maintaining human oversight for context-sensitive content

Explanation

The massive volume of content uploaded daily creates operational challenges that necessitate AI-assisted moderation systems. However, human moderators remain essential for content requiring contextual understanding, creating a hybrid approach that balances efficiency with accuracy.


Evidence

The amount of content being uploaded to our products every day means the volume is high and we need to address that at scale. We use AI and we’re increasingly using AI to help us do that faster, but there are human moderators that participate, especially for content that requires human context to understand


Major discussion point

Technical and Operational Challenges for Tech Companies


Topics

Human rights | Content policy


Complex regulatory environment requiring safe harbors for effective content moderation and policy iteration

Explanation

Companies need legal protections to implement effective content moderation practices and continuously improve their policies. The complex and varied regulatory landscape across jurisdictions makes it challenging to develop consistent approaches while meeting different legal requirements.


Evidence

We need safe harbors in order to do this work effectively to make sure that we are able to implement content moderation practices that are effective and iterate on our policies


Major discussion point

Technical and Operational Challenges for Tech Companies


Topics

Legal and regulatory | Human rights


Importance of showing up at venues where stakeholders are present and being part of curated conversations through organizations like GNI

Explanation

Effective stakeholder engagement requires companies to actively participate in forums where civil society and other stakeholders gather, rather than expecting stakeholders to come to them. This includes both large public venues and more focused organizational settings that facilitate deeper dialogue.


Evidence

It’s important for companies to show up to venues where our stakeholders are – things like IGF, venues like RightsCon – and being part of organizations where more curated versions of that conversation is taking place, like GNI


Major discussion point

Multi-Stakeholder Collaboration and Partnership Strategies


Topics

Human rights | Sociocultural


Agreed with

– Jason Pielemeier
– Ian Barber

Agreed on

Multi-stakeholder engagement requires intentional effort and resources to be truly global and inclusive


Need for regional stakeholder meetings to ensure feedback reaches policy-drafting and product-building teams

Explanation

Companies must establish systematic processes for gathering regional stakeholder input and ensuring this feedback directly influences policy development and product design. This requires structured programs that connect external expertise with internal decision-making processes.


Evidence

We have programs in place ensuring that we are doing regional stakeholder meetings with our global colleagues to make sure we’re hearing directly from experts about what’s happening in their region, their experience with our products, and ensuring that feedback goes directly to teams drafting our policies, enforcing our policies and building our products


Major discussion point

Multi-Stakeholder Collaboration and Partnership Strategies


Topics

Human rights | Development


Need to focus on human rights, national security, and innovation simultaneously rather than treating them as competing priorities

Explanation

Rather than viewing human rights, security, and innovation as zero-sum trade-offs, companies and governments must develop integrated approaches that advance all three objectives. This requires states to uphold their human rights obligations while pursuing security goals, and companies to maintain their human rights duties across all business activities.


Evidence

We have to figure out how to focus on all these things at the same time. In order to achieve national security interests and focus on innovation and competition, we have to ensure that human rights is integrated across those conversations. States have a duty to uphold their obligations to human rights in regulation and AI use, and companies have a duty too


Major discussion point

Balancing Competing Pressures in Digital Rights Work


Topics

Human rights | Cybersecurity


Digital Services Act as emerging model for public risk assessments providing accountability in regulatory settings

Explanation

The European Union’s Digital Services Act represents an emerging model for regulatory accountability through public risk assessments that companies must produce. This transparency mechanism is still being evaluated for its effectiveness in providing meaningful accountability while serving regulatory compliance purposes.


Evidence

We have the Digital Services Act in Europe as a beginning entree of what a risk assessment report that becomes public can look like. We’re all learning about what the value of something like that is for the purposes of accountability in a regulatory setting


Major discussion point

Accountability Mechanisms and Transparency


Topics

Legal and regulatory | Human rights


Agreed with

– Ian Barber
– Jason Pielemeier
– Peggy Hicks

Agreed on

Transparency is fundamental to accountability in digital rights protection


Rights and Risk Forum in Brussels as example of transparent conversation between stakeholders using concrete regulatory artifacts

Explanation

The Rights and Risk Forum convened by GNI and DTSP provided a model for productive stakeholder dialogue by focusing on concrete, publicly available risk assessments rather than abstract discussions. This approach enabled more substantive conversations about what works and what needs improvement in company practices.


Evidence

GNI and DTSP convened a risks and rights forum in Brussels for companies who are VLOPs and VLOSSs under the DSA to come together and have conversations about the assessments that are now public, having open, transparent conversation between civil society and companies about what’s working, what’s not, and how we can improve


Major discussion point

Global Engagement and Resource Distribution


Topics

Human rights | Legal and regulatory


E

Esteve Sanz

Speech speed

165 words per minute

Speech length

1564 words

Speech time

567 seconds

EU’s focus on global agreements like Global Digital Compact and Declaration for Future of Internet to commit states to respect digital rights

Explanation

The European Union has prioritized securing international commitments through multilateral agreements that establish binding obligations for states to protect digital human rights. These diplomatic efforts aim to create global standards that prevent internet censorship and shutdowns while promoting fundamental freedoms online.


Evidence

We have focused on getting agreements at the global level including the global digital compact, the declaration for the future of the internet that commit states and critical actors to respect digital human rights, not censor the internet, not doing internet shutdowns – a very important achievement in the Global Digital Compact that commits states in the UN not to shut down the Internet


Major discussion point

International Cooperation and Digital Human Rights Protection


Topics

Human rights | Legal and regulatory


Gap between diplomatic achievements in securing commitments and reality of digital repression on the ground

Explanation

Despite successful international negotiations that produce strong commitments to digital rights, there remains a troubling disconnect with the actual experiences of people facing digital repression worldwide. This gap represents a fundamental challenge where formal agreements fail to translate into meaningful protection for individuals and communities.


Evidence

There is this gap that is very puzzling between the diplomatic achievements that we have managed to do in committing global actors to respect fundamental freedoms online and what’s going on in reality. We are in a new stage where the Internet is not only controlled, but it’s used for control, and we see a very depressing trajectory


Major discussion point

International Cooperation and Digital Human Rights Protection


Topics

Human rights | Cybersecurity


Agreed with

– Jason Pielemeier
– Ian Barber

Agreed on

There is a concerning gap between international commitments on digital rights and actual implementation


EU’s public diplomacy efforts calling out internet shutdowns and funding projects like protectdefenders.eu for urgent support

Explanation

The European Union actively engages in public diplomacy to condemn internet shutdowns and digital repression while providing concrete financial support for at-risk individuals. This dual approach combines political pressure with practical assistance for journalists and civil society actors facing immediate threats.


Evidence

When there is a big event, an Internet shutdown, we engage in public diplomacy in Iran, in Jordan, so we have callouts for Internet shutdowns. We have projects like protectdefenders.eu which provides funding in case of urgent need for journalists and other civil society actors


Major discussion point

International Cooperation and Digital Human Rights Protection


Topics

Human rights | Freedom of the press


Disagreed with

– Ian Barber

Disagreed on

Approach to addressing funding crisis in civil society


WSIS Plus 20 review as opportunity for unprecedented UN language on digital human rights acknowledging rise of digital authoritarianism

Explanation

The World Summit on the Information Society review process presents a critical opportunity to establish stronger international language on digital rights that explicitly recognizes and addresses digital authoritarianism. The EU aims to achieve more concrete protections for journalists and civil society than have been included in previous UN resolutions.


Evidence

EU member states will take stock of the rise of digital authoritarianism and propose what we hope will be unprecedented language at the UN level in the WSIS plus 20 resolution on digital human rights, going much more concretely into statements that protect journalists, civil society, etc. from digital repression


Major discussion point

International Cooperation and Digital Human Rights Protection


Topics

Human rights | Legal and regulatory


EU’s legislative process through Digital Services Act demonstrates successful balance using Charter of Fundamental Rights as framework

Explanation

The European Union’s approach to digital regulation, exemplified by the Digital Services Act, shows how fundamental rights can be successfully integrated into complex legislative processes. The EU Charter of Fundamental Rights serves as an overarching framework that ensures all digital legislation complies with human rights standards.


Evidence

The Digital Services Act is the cornerstone of our digital regulation with a multi-stakeholder approach involving parliament, civil society, consultations, council, and commission. Whatever legislation we put on the table needs to comply with the Charter of Fundamental Rights, which frames everything we do in the EU on digital issues and shows us a path towards finding that balance correctly


Major discussion point

Balancing Competing Pressures in Digital Rights Work


Topics

Legal and regulatory | Human rights


EU’s Internet Accountability Compass project to analyze gap between commitments and digital repression reality

Explanation

The European Union has initiated a specific research and analysis project to systematically examine the disconnect between international commitments on digital rights and the actual practice of digital repression by states. This project aims to provide evidence-based understanding of how governments use internet technologies for control rather than just restricting access.


Evidence

We have the Global Initiative for the Future of the Internet that has a project called Internet Accountability Compass that will help us analyze this gap between what we are committing to and what’s really going on in terms of digital repression


Major discussion point

Global Engagement and Resource Distribution


Topics

Human rights | Cybersecurity


P

Peggy Hicks

Speech speed

176 words per minute

Speech length

2927 words

Speech time

992 seconds

OHCHR’s multi-faceted approach to digital rights through judicial engagement, regional studies, and cross-stakeholder projects

Explanation

The Office of the High Commissioner for Human Rights is actively working across multiple dimensions in the digital rights space, including collaborating with judiciary systems, conducting regional research, and facilitating multi-stakeholder engagement. This comprehensive approach aims to develop a ‘smart mix’ of mandatory measures and policy incentives that help states meet their human rights obligations while creating an environment where companies also contribute to rights protection.


Evidence

We had a recent event in Brazil working with the judiciary on social media regulation. We’ve done a study within the MENA region. We’re looking for a smart mix of mandatory measures and policy incentives that states can put in place


Major discussion point

Multi-Stakeholder Collaboration and Partnership Strategies


Topics

Human rights | Legal and regulatory


B-Tech project as model for cross-sector engagement with tech companies on AI and content moderation challenges

Explanation

The B-Tech project represents an innovative approach to multi-stakeholder engagement that brings together companies to address complex technical and policy challenges, particularly around AI and content moderation. The project has demonstrated value in strengthening how companies work together while providing OHCHR with insights that can be shared more broadly, though challenges remain in making the experience more global and engaging smaller enterprises.


Evidence

We have a project called the BTEC project that encourages cross-stakeholder, cross-sector multi-stakeholder engagement, focused on trying to work with companies to answer tough questions including around AI and content moderation. We have found that the work together with them has strengthened the way they work amongst each other


Major discussion point

Multi-Stakeholder Collaboration and Partnership Strategies


Topics

Human rights | Sociocultural


Importance of moving beyond high-level discussions to evidence-based case studies for meaningful progress

Explanation

Effective collaboration and improvement in digital rights protection requires moving from abstract, general conversations to concrete analysis of specific situations and failures. This approach enables more frank and useful discussions that can drive actual improvements in policies and practices through peer review and detailed examination of what went wrong.


Evidence

That evidence base, that idea of going beyond the general conversation to really talk about some specific case studies, something went wrong, putting what went wrong on the table sometimes and unpacking it and figuring out how to do better is really important. You can’t do that if you stay at the 10,000 feet level


Major discussion point

Accountability Mechanisms and Transparency


Topics

Human rights | Legal and regulatory


Agreed with

– Ian Barber
– Alex Walden
– Jason Pielemeier

Agreed on

Transparency is fundamental to accountability in digital rights protection


Civil society exclusion weakens policy processes themselves, not just disadvantages civil society

Explanation

When civil society organizations are unable to provide input into policy processes, it represents a loss not only for those organizations seeking to have their voices heard, but fundamentally weakens the quality and effectiveness of the processes themselves. The expertise and real-world experience that civil society brings is essential for developing sound policies and frameworks.


Evidence

When civil society isn’t able to put their input, that’s not just a disadvantage to civil society who wants to have their voice heard, but to the process itself and it itself is weakened by the lack of the expertise that civil society, real experience that civil society can bring in


Major discussion point

Challenges Facing Civil Society in Digital Rights Advocacy


Topics

Human rights | Development


A

Audience

Speech speed

139 words per minute

Speech length

62 words

Speech time

26 seconds

Need for accountability mechanisms in Global North-South partnerships to prevent disengagement

Explanation

There is a critical need to establish concrete accountability mechanisms when partnerships are formed between Global North and Global South actors in digital rights work. The concern is that without proper accountability structures, Global North actors can easily disengage from these partnerships, leaving Global South partners without support or follow-through on commitments.


Evidence

What are the accountability mechanisms for these type of partnerships, especially when you’re working in the global south and it’s very easy for global north actors to disengage when these type of partnerships are happening?


Major discussion point

Accountability Mechanisms and Transparency


Topics

Human rights | Development


Agreements

Agreement points

Multi-stakeholder engagement requires intentional effort and resources to be truly global and inclusive

Speakers

– Jason Pielemeier
– Ian Barber
– Alex Walden

Arguments

GNI’s intentional growth from North American/European focus to over 100 global members across four constituencies


Need for civil society to be co-leaders rather than token participants, with structural support for effective engagement


Importance of showing up at venues where stakeholders are present and being part of curated conversations through organizations like GNI


Summary

All speakers agree that meaningful multi-stakeholder collaboration cannot happen by accident – it requires deliberate investment of time, resources, and structural changes to move beyond tokenism to genuine partnership, particularly in engaging voices from the Global South.


Topics

Human rights | Development


Transparency is fundamental to accountability in digital rights protection

Speakers

– Ian Barber
– Alex Walden
– Jason Pielemeier
– Peggy Hicks

Arguments

Civil society’s watchdog role in bringing issues to light through transparency and ongoing iterative processes


Digital Services Act as beginning model for public risk assessments providing accountability in regulatory settings


GNI’s independent assessment process for companies with detailed review of internal systems and policies


Importance of moving beyond high-level discussions to evidence-based case studies for meaningful progress


Summary

All speakers emphasize that transparency – whether through public reporting, independent assessments, or open dialogue – is essential for holding both companies and governments accountable for their digital rights commitments.


Topics

Human rights | Legal and regulatory


There is a concerning gap between international commitments on digital rights and actual implementation

Speakers

– Esteve Sanz
– Jason Pielemeier
– Ian Barber

Arguments

Gap between diplomatic achievements in securing commitments and reality of digital repression on the ground


Role of OHCHR and treaty bodies in calling out state failures, though more tangible legal processes are needed


Erosion of multi-stakeholder approach with closing mechanisms for inclusive and transparent civil society engagement


Summary

Speakers acknowledge a troubling disconnect between formal international agreements and diplomatic commitments on digital rights versus the reality of increasing digital repression and exclusion of civil society from governance processes.


Topics

Human rights | Legal and regulatory


Similar viewpoints

Both speakers reject the false dichotomy between human rights and security/innovation, arguing instead that these objectives can and should be pursued simultaneously as mutually reinforcing rather than competing priorities.

Speakers

– Alex Walden
– Ian Barber

Arguments

Need to focus on human rights, national security, and innovation simultaneously rather than treating them as competing priorities


Human rights approaches and security outcomes can be mutually reinforcing rather than opposing concepts


Topics

Human rights | Cybersecurity


Both speakers emphasize the value of creating concrete forums and processes that bring stakeholders together around specific, tangible issues rather than abstract discussions, whether through regulatory compliance or global governance processes.

Speakers

– Jason Pielemeier
– Alex Walden

Arguments

Rights and Risk Forum in Brussels as example of transparent conversation between stakeholders using concrete regulatory artifacts


Series of workshops in nine countries to involve wider stakeholders in WSIS input processes


Topics

Human rights | Legal and regulatory


Both speakers argue that excluding civil society from policy processes is not just unfair to civil society organizations, but fundamentally weakens the quality and effectiveness of the policy-making process itself by removing essential expertise and perspectives.

Speakers

– Ian Barber
– Peggy Hicks

Arguments

Erosion of multi-stakeholder approach with closing mechanisms for inclusive and transparent civil society engagement


Civil society exclusion weakens policy processes themselves, not just disadvantages civil society


Topics

Human rights | Development


Unexpected consensus

Optimism about internet’s continued value despite digital repression trends

Speakers

– Jason Pielemeier
– Esteve Sanz

Arguments

Internet remains vibrant space for freedom compared to offline mediums, especially in repressed contexts like Iran


EU’s legislative process through Digital Services Act demonstrates successful balance using Charter of Fundamental Rights as framework


Explanation

Despite acknowledging serious challenges with digital repression and the gap between commitments and reality, both speakers maintain optimism about the internet’s fundamental value and the possibility of achieving proper balance through appropriate governance frameworks. This is unexpected given the generally pessimistic tone about current trends.


Topics

Human rights | Freedom of expression


Companies and civil society agreeing on need for regulatory safe harbors

Speakers

– Alex Walden
– Ian Barber

Arguments

Complex regulatory environment requiring safe harbors for effective content moderation and policy iteration


Narrative crisis with funding shifting toward national security and economic impact rather than human rights approaches


Explanation

It’s somewhat unexpected that both a company representative and civil society advocate would implicitly agree on the need for regulatory safe harbors, as civil society often pushes for stronger regulation while companies typically seek regulatory flexibility. Their shared concern about the current regulatory environment suggests common ground on the need for balanced approaches.


Topics

Legal and regulatory | Human rights


Overall assessment

Summary

The speakers demonstrate strong consensus on several key issues: the need for genuine (not tokenistic) multi-stakeholder engagement, the fundamental importance of transparency for accountability, and the concerning gap between international commitments and actual protection of digital rights. They also share concerns about the erosion of inclusive governance processes and the challenges facing civil society organizations.


Consensus level

High level of consensus on core principles and challenges, with speakers from different sectors (government, civil society, private sector, international organization) largely agreeing on both problems and solutions. This suggests a mature understanding of digital rights issues across stakeholder groups, though the consensus also highlights the urgency of addressing systemic challenges in funding, inclusion, and accountability mechanisms. The agreement across diverse perspectives strengthens the legitimacy of calls for more resources and structural changes to support effective digital rights protection.


Differences

Different viewpoints

Approach to addressing funding crisis in civil society

Speakers

– Ian Barber
– Esteve Sanz

Arguments

Narrative crisis with funding shifting toward national security and economic impact rather than human rights approaches


EU’s public diplomacy efforts calling out internet shutdowns and funding projects like protectdefenders.eu for urgent support


Summary

Ian Barber identifies a fundamental narrative crisis where funding is shifting away from human rights approaches, while Esteve Sanz presents the EU’s approach of maintaining funding for human rights work alongside security concerns, suggesting different perspectives on whether the shift is inevitable or can be countered.


Topics

Human rights | Development


Scale of multi-stakeholder engagement challenges

Speakers

– Jason Pielemeier
– Ian Barber

Arguments

GNI’s intentional growth from North American/European focus to over 100 global members across four constituencies


Proliferation of forums and processes making it difficult for under-resourced organizations to keep up and participate meaningfully


Summary

Jason presents GNI’s expansion as a success story of inclusive growth, while Ian emphasizes how the proliferation of forums creates overwhelming burdens for under-resourced organizations, representing different views on whether expanding engagement opportunities helps or hinders effective participation.


Topics

Human rights | Development


Unexpected differences

Optimism vs. pessimism about digital rights trajectory

Speakers

– Jason Pielemeier
– Esteve Sanz

Arguments

Internet remains vibrant space for freedom compared to offline mediums, especially in repressed contexts like Iran


Gap between diplomatic achievements in securing commitments and reality of digital repression on the ground


Explanation

This represents an unexpected philosophical divide where Jason emphasizes reasons for optimism about the internet’s continued value for freedom, while Esteve presents a more pessimistic assessment of digital repression trends, despite both working toward similar goals.


Topics

Human rights | Freedom of expression


Overall assessment

Summary

The discussion revealed relatively low levels of direct disagreement among speakers, with most conflicts being subtle differences in emphasis, approach, or perspective rather than fundamental opposition. The main areas of disagreement centered on funding approaches, engagement strategies, and assessment of current trends.


Disagreement level

Low to moderate disagreement level. The speakers largely shared common goals and values around digital rights protection, but differed on tactical approaches, resource allocation strategies, and assessment of progress. These disagreements are constructive and reflect different organizational perspectives and experiences rather than fundamental ideological divisions. The implications are positive – the disagreements suggest a healthy diversity of approaches within a shared framework, which could lead to more comprehensive and effective strategies if properly coordinated.



Takeaways

Key takeaways

Civil society faces a narrative crisis with funding shifting from human rights approaches to national security and economic impact priorities, leading to capacity issues and reduced ability to participate effectively in digital rights advocacy


Multi-stakeholder collaboration requires intentional effort and resources to be truly global and inclusive, moving beyond tokenism to meaningful co-leadership roles for civil society organizations


Tech companies face significant challenges balancing online harm prevention with human rights protection, particularly around speed/scale issues and complex regulatory environments


There is a concerning gap between international diplomatic achievements in securing digital rights commitments and the reality of increasing digital repression on the ground


Human rights, national security, and innovation can be mutually reinforcing rather than competing priorities when properly integrated into policy frameworks


Transparency is fundamental to accountability, with new models like the Digital Services Act providing examples of public risk assessments and stakeholder engagement


The internet remains a vital space for freedom of expression compared to offline alternatives, particularly in repressive contexts, making continued protection efforts essential


WSIS Plus 20 represents a critical fork in the road for determining whether future internet governance will build on multi-stakeholder human rights values or move in a different direction


Resolutions and action items

Global Partners Digital is coordinating the Global Digital Rights Coalition for the WSIS Review, working with civil society organizations globally


GNI and partners published reports from workshops in nine countries to inform wider stakeholder input into the WSIS process


EU will propose unprecedented language on digital human rights in the WSIS Plus 20 resolution, acknowledging the rise of digital authoritarianism


Continued Rights and Risk Forums will be held to discuss Digital Services Act implementation and other regulatory frameworks with concrete examples


EU’s Internet Accountability Compass project will analyze the gap between digital rights commitments and actual digital repression practices


Unresolved issues

How to adequately fund civil society organizations globally to maintain their capacity for digital rights advocacy


How to make multi-stakeholder engagement more effective and truly global, particularly including voices from the Global South


How to bridge the gap between international commitments on digital rights and actual state practices of digital repression


What specific accountability mechanisms can be developed for partnerships working in the Global South to prevent disengagement by Global North actors


How to scale successful collaboration models like GNI to include more small and medium enterprises


How to effectively integrate investor engagement in tech governance and human rights protection


How to maintain the open internet’s role as a space for freedom while addressing legitimate security and innovation concerns


Suggested compromises

Using human rights approaches as a way to ‘Trojan horse’ funding by demonstrating how human rights and security outcomes can be mutually reinforcing


Developing a ‘smart mix’ of mandatory measures and policy incentives that allows states to meet human rights obligations while enabling appropriate regulation


Creating iterative, ongoing engagement processes rather than one-off events to build trust and ensure sustained collaboration


Establishing safe harbors for companies to enable effective content moderation while maintaining human rights protections


Using existing successful process modalities from forums like the AHC negotiations as templates for more inclusive multi-stakeholder approaches in other venues


Thought provoking comments

I think that we are in a new stage where the Internet is not only controlled, but it’s used for control, and what we see is a very depressing trajectory. So there is this gap that is very puzzling between the diplomatic achievements that we have managed to do in committing global actors, very powerful global actors, to respect fundamental freedoms online and what’s going on in reality.

Speaker

Esteve Sanz


Reason

This comment reframes the entire discussion by distinguishing between the internet being ‘controlled’ and being ‘used for control’ – a subtle but profound distinction that highlights how digital infrastructure has become a tool of oppression rather than merely a target of restriction. It also identifies the core paradox of digital rights work: the gap between international commitments and ground reality.


Impact

This observation became a recurring theme throughout the discussion, with multiple panelists referencing this ‘gap’ between commitments and reality. It shifted the conversation from focusing solely on policy solutions to acknowledging the fundamental disconnect between diplomatic achievements and actual implementation, adding a layer of realism and urgency to the discussion.


So we’re dealing with both digital repression and digital depression. But I think it’s really important to remind ourselves… the Internet is still an incredibly vibrant and critical space, especially when you compare it to offline mediums for free expression and freedom of association and assembly.

Speaker

Jason Pielemeier


Reason

This comment is particularly insightful because it acknowledges the emotional toll of working in digital rights (‘digital depression’ – a play on Esteve’s ‘digital repression’) while providing crucial perspective. It challenges the prevailing pessimism by recontextualizing online spaces relative to offline alternatives, especially in repressive contexts.


Impact

This comment served as a pivotal moment that injected much-needed optimism into what had become a rather somber discussion about funding cuts, capacity issues, and rising authoritarianism. It reframed the conversation from one of defeat to one of continued purpose, reminding participants why their work matters and providing emotional grounding for the remainder of the discussion.


At the end of the day, the most impactful forms [of collaboration] are going to be those that truly shift power and resources back to civil society and allow them to engage… it’s not always structural support then to address them… it’s again this symbolic means of doing things.

Speaker

Ian Barber


Reason

This comment cuts through the diplomatic language often used in multi-stakeholder discussions to identify the core issue: the difference between symbolic inclusion and actual power-sharing. It challenges the other panelists to move beyond tokenistic engagement toward meaningful structural change.


Impact

This observation forced other panelists to be more specific about their collaboration efforts and accountability mechanisms. It elevated the discussion from general statements about ‘multi-stakeholder engagement’ to concrete questions about power dynamics, resource allocation, and genuine partnership, leading to more substantive responses about actual practices and challenges.


In order to achieve national security interests, in order to focus on ongoing innovation and have competition in the market, we have to ensure that human rights is integrated across those conversations and remains a priority… we have to do all of them at the same time.

Speaker

Alex Walden


Reason

This comment directly addresses one of the session’s central tensions by rejecting the false choice between human rights and other priorities. Instead of accepting trade-offs, it argues for integration – a more sophisticated approach that acknowledges complexity while maintaining principles.


Impact

This response helped shift the framing away from human rights as an obstacle to innovation/security toward human rights as an integral component of sustainable solutions. It influenced subsequent speakers to also reject the either/or framing and think more holistically about how different priorities can be mutually reinforcing rather than competing.


What are the accountability mechanisms for these type of partnerships, especially when you’re working in the global south and it’s very easy for global north actors to disengage when these type of partnerships are happening?

Speaker

Alejandro (Access Now)


Reason

This question from the audience cuts to the heart of power imbalances in international digital rights work. It challenges the panel’s discussion of partnerships by highlighting the structural inequalities that make such partnerships fragile and potentially exploitative.


Impact

This question forced all panelists to grapple with concrete accountability mechanisms rather than staying at the level of aspirational statements. It brought the discussion full circle to Ian Barber’s earlier points about power and resources, and prompted more specific responses about transparency, ongoing engagement, and structural supports for meaningful partnership.


Overall assessment

These key comments fundamentally shaped the discussion by introducing critical tensions and reframes that prevented the conversation from remaining at a superficial level. Esteve’s observation about the gap between commitments and reality established a sobering foundation that ran throughout the session. Jason’s ‘digital depression’ comment provided crucial emotional and strategic reframing that prevented despair from overwhelming the discussion. Ian’s focus on power dynamics challenged other participants to move beyond tokenistic approaches, while Alex’s integration argument offered a path forward that doesn’t sacrifice principles. Finally, Alejandro’s accountability question from the audience brought concrete urgency to abstract discussions of partnership.

Together, these comments created a discussion that was both realistic about challenges and constructive about solutions, balancing acknowledgment of systemic problems with practical approaches for moving forward. The interplay between these perspectives created a more nuanced and actionable conversation than would have emerged from purely optimistic or pessimistic framings alone.


Follow-up questions

How to make cross-stakeholder engagement more global and better engage with small and medium enterprises

Speaker

Peggy Hicks


Explanation

This addresses the challenge of expanding beyond large companies to include smaller tech enterprises in human rights discussions and ensuring global representation rather than just North American/European perspectives


How to deal with investors within the tech space for human rights protection

Speaker

Peggy Hicks


Explanation

There’s a need to understand how to engage financial stakeholders who influence tech companies to prioritize human rights considerations in their investment decisions


How to assess the risks faced by human rights defenders through digital technology

Speaker

Peggy Hicks


Explanation

This was mentioned as part of a UN Human Rights Council resolution calling for specific work to understand and address threats to human rights defenders in digital spaces


How to bridge the gap between diplomatic achievements in human rights commitments and reality on the ground

Speaker

Esteve Sanz


Explanation

There’s a puzzling disconnect between global actors committing to respect fundamental freedoms online and the actual rise in digital repression that needs to be analyzed and addressed


How to ensure regulatory frameworks provide adequate safe harbors for effective content moderation

Speaker

Alex Walden


Explanation

Companies need clear legal protections to implement responsible content moderation practices while respecting human rights, but the complex regulatory environment makes this challenging


How to maintain human rights focus amid competing pressures from national security and economic competition narratives

Speaker

Peggy Hicks


Explanation

There’s a concerning trend where human rights considerations are being deprioritized in favor of security concerns and economic competitiveness, requiring strategies to maintain their importance


What accountability mechanisms can be created for partnerships working in the Global South to prevent disengagement by Global North actors

Speaker

Alejandro (Access Now)


Explanation

This addresses the need for structural safeguards to ensure sustained commitment and prevent abandonment of collaborative efforts in resource-constrained regions


How to better support civil society capacity building given funding challenges and proliferation of forums

Speaker

Ian Barber


Explanation

Civil society organizations face resource constraints while needing to engage across an increasing number of policy processes, requiring strategic approaches to capacity building and engagement


How to evaluate the effectiveness of new transparency and risk assessment tools like those under the Digital Services Act

Speaker

Alex Walden


Explanation

As new regulatory frameworks create public accountability mechanisms, there’s a need to assess whether these tools provide meaningful value for human rights protection


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.