Open Forum #47: Demystifying WSIS+20
Session at a glance

Summary

This discussion focused on the World Summit on the Information Society (WSIS) Plus 20 review process, examining progress made over the past two decades and identifying priorities for the upcoming December negotiations. The panel, hosted by Finland and featuring representatives from ICANN, UNDP, government, civil society, and Smart Africa, explored how WSIS commitments have been implemented and what gaps remain.


Panelists emphasized that WSIS has successfully established multi-stakeholder participation in internet governance discussions, a significant achievement compared to traditional UN processes. Technical initiatives like DNSSEC and internationalized domain names were highlighted as concrete successes that emerged from WSIS frameworks. However, significant challenges persist, particularly the digital divide affecting 2.6 billion people globally and substantial gender gaps in digital access.


A live poll identified digital capacity building in under-resourced regions as the most pressing need requiring greater support. Smart Africa’s representative outlined four critical gaps for Africa: meaningful connectivity, regulatory harmonization, capacity building including AI literacy, and digital sovereignty. The technical community stressed the importance of preserving the global internet standards and interoperability that have enabled the internet’s success.


Participants noted a concerning disconnect between New York-based UN diplomatic processes and the technical communities that have been working on these issues for decades. They emphasized the need for stakeholders to actively participate in the WSIS Plus 20 process, provide concrete evidence of what works, and demand meaningful inclusion in negotiations. The discussion concluded with calls for continued multi-stakeholder engagement to ensure the December outcome reflects practical realities of how the internet functions and serves global development needs.


Key points

## Major Discussion Points:


– **WSIS Plus 20 Review Process and Timeline**: The panel discussed the upcoming 20-year review of the World Summit on the Information Society (WSIS), including its mandate, deliverables, and timeline leading to the December 2025 negotiations. This includes examining whether existing WSIS action lines and institutions like the Internet Governance Forum (IGF) remain fit for purpose.


– **Multi-stakeholder Engagement and Inclusivity**: A central theme was ensuring meaningful participation from all stakeholders – government, civil society, technical community, and underrepresented regions – in the WSIS Plus 20 process. Panelists emphasized the need to maintain and strengthen the multi-stakeholder model that has been a hallmark of WSIS success.


– **Digital Divide and Capacity Building**: The discussion highlighted persistent gaps in global digital access, with particular focus on meaningful connectivity, affordability, digital skills, and infrastructure needs in underserved regions, especially Africa. A live poll showed capacity building as the top priority needing greater support.


– **Technical Infrastructure Successes and Ongoing Needs**: Panelists reviewed concrete achievements from WSIS initiatives, including DNSSEC implementation, internationalized domain names, and Internet exchange points (IXPs) deployment, while identifying areas still requiring attention like universal acceptance and interoperability.


– **Practical Steps for Stakeholder Engagement**: The conversation concluded with specific recommendations for how different communities can contribute to shaping the WSIS Plus 20 outcome, including showing up to consultations, providing evidence-based input, and demanding seats at decision-making tables.


## Overall Purpose:


The discussion aimed to prepare stakeholders for meaningful participation in the WSIS Plus 20 review process by explaining the scope and timeline, identifying successful outcomes from the past 20 years, highlighting current gaps and priorities, and providing concrete steps for engagement before the December 2025 negotiations.


## Overall Tone:


The tone was collaborative and constructive throughout, with panelists demonstrating shared commitment to the multi-stakeholder approach despite representing different sectors. There was a sense of cautious optimism about progress made while acknowledging significant work remains. The discussion maintained a practical, action-oriented focus rather than being purely theoretical, with panelists offering specific examples and concrete recommendations. The tone remained consistently professional and forward-looking, emphasizing the importance of continued engagement and not taking past achievements for granted.


Speakers

**Speakers from the provided list:**


– **Theresa Swinehart** – Works with ICANN, session moderator


– **Yu Ping Chan** – Leads digital engagements and partnerships at the United Nations Development Programme (UNDP)


– **Jarno Suruela** – Undersecretary of State for International Trade at the Finnish Foreign Ministry


– **Fiona Alexander** – Professor at American University in Washington, D.C., former government official with experience in internet governance


– **Kurtis Lindqvist** – President and CEO of ICANN


– **Lacina Kone** – CEO and Director General of Smart Africa, a Pan-African organization based in Kigali


– **UNKNOWN** – Role/title not specified in transcript


**Additional speakers:**


None identified – all speakers in the transcript were included in the provided speaker list.


Full session report

# WSIS Plus 20 Review: Assessing Two Decades of Progress and Charting the Path Forward


## Executive Summary


This comprehensive discussion, moderated by Theresa Swinehart from ICANN and hosted by Finland, brought together key stakeholders to examine the World Summit on the Information Society (WSIS) Plus 20 review process. The panel featured Yu Ping Chan from the United Nations Development Programme (UNDP), Jarno Suruela from the Finnish Foreign Ministry, Fiona Alexander from American University, Kurtis Lindqvist from ICANN, and Lacina Kone from Smart Africa.


The discussion revealed both significant achievements and persistent challenges in global digital development. Panelists emphasized that WSIS has successfully established multi-stakeholder participation as a cornerstone of internet governance, with concrete technical successes including DNSSEC implementation and internationalized domain names. However, substantial challenges remain, particularly the digital divide affecting 2.6 billion people globally and significant gender gaps in digital access.


A live poll conducted during the session identified digital capacity building in under-resourced regions as the most pressing priority. The December negotiations will determine critical outcomes including the future of the Internet Governance Forum and potential updates to WSIS action lines.


## The WSIS Plus 20 Process and Timeline


Yu Ping Chan provided essential context for the WSIS Plus 20 review, explaining that it represents the second comprehensive review by the UN General Assembly of the 2003-2005 WSIS outcomes. The process examines whether the existing action lines remain sufficient for addressing current digital developments and involves multiple UN agencies drafting reports and conducting stakeholder consultations.


The timeline includes several critical milestones leading to December negotiations. Chan emphasized that the multi-stakeholder approach remains central to WSIS and must be maintained throughout the review process, though she acknowledged significant challenges in ensuring meaningful participation within traditional UN frameworks.


Jarno Suruela reinforced that the Global Digital Compact and WSIS should be implemented in synchronization rather than as competing initiatives. The December resolution will determine whether the Internet Governance Forum continues and potentially update WSIS action lines to reflect contemporary digital realities.


## Multi-Stakeholder Engagement: Achievements and Structural Challenges


All panelists demonstrated strong consensus on the fundamental importance of multi-stakeholder engagement while acknowledging implementation challenges. The multi-stakeholder model has been central to WSIS achievements and must be protected in the Plus 20 process.


Fiona Alexander highlighted critical structural challenges, noting that New York-based UN systems are not as open as expert agencies, creating barriers for stakeholder participation. She emphasized that co-facilitators have made positive efforts to allow stakeholder input, but continued pressure is needed to maintain access.


Yu Ping Chan identified a significant gap between the New York diplomatic community and the technical communities that have worked on these issues for decades. She observed that General Assembly processes tend to oversimplify complex technical issues and insert compromise language without understanding its implications, creating risks of applying inappropriate political context to accepted technical terms.


Lacina Kone provided a positive counterpoint, demonstrating how multi-stakeholder cooperation works effectively when rooted in regional needs. Smart Africa’s experience shows that continental ownership creates leverage, with African heads of state making digital development a political imperative.


## Technical Infrastructure: Concrete Achievements


Kurtis Lindqvist provided a comprehensive overview of technical achievements over the past 20 years. DNSSEC implementation represents a significant accomplishment in addressing DNS security weaknesses through global cooperation. The evolution of Internationalized Domain Names (IDNs) to support non-Latin scripts has improved linguistic accessibility, though universal acceptance remains an ongoing challenge.


Africa has experienced phenomenal growth in Internet Exchange Points (IXPs) over the past two decades. The global deployment of root server instances, with almost 2,000 instances improving internet stability and performance, represents another concrete achievement of multi-stakeholder cooperation.


Lindqvist made a thought-provoking observation about the paradox of success, suggesting that the internet’s achievements may have made people take global standards for granted. The technical community must continue providing evidence and implementation data showing what works, particularly as the success of global interoperability standards may make their importance less visible to policymakers.


## Regional Development and Persistent Digital Divides


The discussion highlighted persistent gaps in global digital access, with particular focus on meaningful connectivity, affordability, digital skills, and infrastructure needs. Suruela presented sobering statistics: 2.6 billion people still lack internet access, with significant gender gaps and urban-rural disparities. Addressing the digital divide is crucial for getting back on track with Agenda 2030 and Sustainable Development Goal targets.


Lacina Kone provided detailed insights into African digital development challenges, identifying four persistent gaps: meaningful connectivity, regulatory harmonization, skills development, and digital sovereignty. Despite having over 50 digital laws across the continent, Africa suffers from little interoperability, requiring continental legal frameworks to address fragmentation.


The skills gap remains particularly acute, with less than 10% of adults in several African countries possessing basic digital skills. New challenges are emerging with artificial intelligence literacy gaps. Smart Africa, as a Pan-African organization based in Kigali representing over 40 countries and 1.1 billion people, demonstrates how continental ownership can create leverage and deliver concrete results.


## Language and Communication Challenges


A significant theme was the critical importance of language clarity in UN processes. Chan and Lindqvist both identified terminology as a major challenge from different perspectives. Chan emphasized the gap between diplomatic and technical communities, recommending clear, simple, actionable language that diplomats already understand to avoid misinterpretation.


Lindqvist warned that language matters significantly in UN processes, with terms like “sovereignty” and “control” having different meanings to different communities. He expressed concern that political context can be inappropriately applied to technical terms, creating negotiation complications that could undermine the global interoperability that has enabled internet success.


This reflects a broader structural problem: the disconnect between technical experts who understand practical implications of policy decisions and the diplomatic community that ultimately makes those decisions.


## The Internet Governance Forum’s Future


The sustainability of the Internet Governance Forum emerged as a critical concern requiring resolution in December negotiations. Suruela highlighted that the IGF serves as the primary multi-stakeholder forum for international digital policy, with over 160 national and regional initiatives demonstrating its global reach.


However, the IGF requires a more sustainable financial basis from the regular UN budget to ensure continued operations and meaningful participation from underserved regions. The December resolution will determine whether the IGF continues and potentially update WSIS action lines to reflect contemporary digital governance needs.


## Priority Setting and Stakeholder Input


The live poll identified digital capacity building in under-resourced regions as the top priority, reinforcing the development-focused origins of WSIS while highlighting the continued relevance of its foundational principles. Interestingly, open technical standards and cross-border interoperability received minimal support, which Lindqvist interpreted as potentially indicating success rather than lack of importance.


The poll results validated the development orientation that several panelists emphasized throughout the session, suggesting that capacity building remains as critical today as at the summit’s inception.


## Philosophical Framework and Global Context


Lacina Kone provided a sophisticated framework for understanding current global dynamics, distinguishing between multipolarity as a geopolitical fact and multilateralism as a conscious choice to cooperate. This distinction elevated the discussion from technical implementation to fundamental questions about how nations choose to engage in digital governance.


This framework helped contextualize WSIS Plus 20 challenges within broader global trends towards fragmentation while emphasizing that cooperation remains viable despite geopolitical tensions.


## Concrete Action Steps and Recommendations


The discussion concluded with specific recommendations for different stakeholder communities:


**Technical communities** should continue providing concrete evidence and implementation data showing what works, highlighting fundamental principles that have enabled internet success.


**All stakeholder groups** were encouraged to engage with the informal multi-stakeholder feedback group being established by co-facilitators and participate in upcoming consultations.


**Regional organizations** should share concrete success stories and operational examples, following Smart Africa’s model of demonstrating how multi-stakeholder cooperation can deliver tangible results.


**National engagement** was emphasized, with stakeholders urged to engage with their governments to inform WSIS Plus 20 positions and participate in upcoming events, including the high-level WSIS event in Geneva.


## Path Forward and Critical Decisions


The WSIS Plus 20 process represents a critical juncture for internet governance, with December decisions potentially affecting the next decade of digital policy. The strong consensus among diverse stakeholders on fundamental principles provides a solid foundation for negotiations, though maintaining multi-stakeholder openness will require continued vigilance and active participation.


Success will depend on balancing celebration of concrete achievements with honest acknowledgment of persistent gaps, particularly in capacity building and digital inclusion. The ability of different communities to communicate effectively across diplomatic and technical divides while maintaining collaborative spirit will be essential.


The discussion reinforced that the multi-stakeholder model is not just a procedural preference but a practical necessity for addressing complex, interconnected challenges of global digital governance. As the international community prepares for December negotiations, the insights from this discussion provide valuable guidance for ensuring WSIS Plus 20 builds on past successes while addressing digital governance challenges of the next two decades.


Session transcript

Theresa Swinehart: Okay, I think that’s the sign that we get to start. Everybody have their headsets on, ready to go? Yes. Good, good, fantastic. Good, good, excellent. First of all, I’d like to thank the Government of Finland for joining us for this. This is fantastic to be able to do this session. My name is Theresa Swinehart. I work with ICANN. And we are very much looking forward to this panel session, which will focus in on the WSIS process and where we are with regards to that. This does build on our discussion that we held at the last IGF in December 2024 and what has been and can be done between now and the upcoming December WSIS Plus 20-related negotiations and what matters to us leading up to that. Clearly, the decisions that are still to come could affect the Internet governance aspects for the next decade. And we really need to look at what practical steps different communities can take between now and December and what their observations are from the past. So with that, I’d like to ask the panelists to briefly introduce themselves. You’ll see that we have a panel from the technical community, from government, civil society and intergovernmental initiatives. And this will be an important observation from each of these sectors for the discussions. So if I could start with the panel to my right for the brief introductions.


Yu Ping Chan: Thank you so much. So my name is Yu-Ping Chan. I lead digital engagements and partnerships at the United Nations Development Programme.


Jarno Suruela: Thank you. Good afternoon, everybody. My name is Jarno Suruela. I’m Undersecretary of State for International Trade at the Finnish Foreign Ministry.


Fiona Alexander: Hi. Nice to see everyone again. Fiona Alexander. I’m a professor at American University in Washington, D.C.


Kurtis Lindqvist: I’m Kris Lindqvist. I’m the President and CEO of ICANN.


Lacina Kone: I am Lacina Koné. I am the CEO and Director General of Smart Africa, a Pan-African organization based in Kigali.


Theresa Swinehart: Fantastic. Very good. Thank you, everybody. Now, we have four sections to today’s session in a limited amount of time. So we will go through each of them, and we do have a poll as well. So that will add to the excitement. So on the first part that we really want to talk about is understanding WSIS Plus 20 and what it is. It’s in the 20-year review process at this point and the commitments made between 2003 and 2005 and building on a more inclusive development-oriented information society. What would be very good to hear from the panelists is to start discussing the scope of that process, who’s involved, what’s at stake, and how different communities can still contribute. So with that, question one, I’ll go over to Yuping. UNDP is one of the lead co-facilitators of the Geneva Action Plan. If you could just walk us through the mandate for WSIS Plus 20, what deliverables are expected, timelines that you have in place, and the broad range of stakeholders engaged meaningfully.


Yu Ping Chan: That would be great. Thank you so much Theresa. So I think a lot of us have been following this WSIS process for a number of years and for those who are not as up to speed on some of the intricacies of UN processes, which I agree are very long and very complicated, in essence what will be happening this year at the end of the year is that there will be the adoption of the WSIS plus 20 review. So this will be the second review that’s conducted by the member states of the United Nations General Assembly of the outcome documents that Theresa had mentioned, the 2003-2005 Tunis and Geneva outcomes of the WSIS summits themselves. So the question here is how will member states shape this eventual review to reflect some of the ongoing discussions that are happening around digital, both at the United Nations as well as other international forums? Will the WSIS action lines that have actually been established to implement the original WSIS outcomes still suffice to cover the breadth and the multitude of the global digital discussions and developments since then? And should the institutions and processes that were set up through the WSIS, such as the Internet Governance Forum that brings us all together today, still be maintained, updated, refined? You know, are they still fit for purpose? And how do the member states reflect on all the conversations that have taken place, the developments so far in the last 20 years? The process has been actually, as I mentioned, quite complex. There have been a number of UN agencies that have been involved in drafting Secretary-General’s reports. The ITU, the UNESCO colleagues have also had consultation processes that culminated in a number of submissions to the Secretariat itself. The Secretary-General will be putting forward a report, the timing of which is a little bit unclear, that will summarize some of these ongoing conversations as an input for the consideration of the member states.
And, as some of you have already been involved in, there are ongoing consultations that have been held in forums such as the IGF, such as the Paris Summit, the Paris Conference that was convened by UNESCO just a couple of weeks ago. And then in two weeks, at the high-level WSIS event that is convened by ITU, UNDP, UNESCO and UNCTAD, which is an annual forum where, again, we gather stakeholders to really talk about what we see as the progress made through WSIS and then the future going forward as well. There will be a number of occasions where it will be important to hear the stakeholder point of view because part of the WSIS and the outcomes of the WSIS and the reason why the process endured so long and is something that has really been able to carry through some of these outcome documents in very concrete ways is the commitment of the multi-stakeholder community and the network that has developed. It is important that as stakeholders we continue to be engaged in this process and by saying this I also include the UN system because for us we as UNDP see the WSIS action lines and the WSIS process as very important in translating guidance that is given by the Member States into actionable outcomes that are focused on delivery to countries and to communities that we serve. So in the context of ongoing negotiations at the United Nations, the conclusion of the Global Digital Compact just last year, how then do we reflect these sort of developments into the WSIS action lines, into the WSIS review, reflecting on the fact that the principles that were agreed 20 years ago in Geneva and Tunis remain as relevant today as they were 20 years ago. So that’s sort of the opening context where I really hope that stakeholders will continue to have conversations such as in the IGF and in other institutional forums to maintain the importance of the multi-stakeholder approach as embodied at the heart of WSIS itself. Thank you.


Theresa Swinehart: You really highlight the subject stakeholder and the multi-stakeholder dimension of all of this in the different subject areas. Thank you. Jarno, first of all, Finland, thank you so much for co-hosting this and by example also from government engagement in the process, what steps can governments take to ensure that really all regions, not just traditional actors, are meaningfully included in this WSIS plus 20 process?


Jarno Suruela: I think governments like the one of Finland are doing a lot together with different actors, organizations and mechanisms. And I think through the recent Global Digital Compact, the UN is better placed than ever to foster multi-stakeholder cooperation on digital matters and to leverage digital technologies for sustainable development. We believe that the GDC and WSIS are highly complementary and should be implemented in sync with each other. This is also to guarantee that everybody will then be on board. For us, the IGF is the primary multi-stakeholder forum for shaping international digital policy and Internet governance at the UN level. A clear indication of its success is the over 160 national, regional and youth initiatives of the IGF. It has also become an important platform for discussing emerging digital issues such as AI. And Finland, so we are of course strongly supporting the IGF. We have given financial support to the IGF throughout its existence, being one of the all-time top contributors, and we encourage of course other actors to step up their support to the IGF. In these geopolitical circumstances of today, I think it’s even more important to support and enhance the multi-stakeholder model on internet governance, which in essence empowers the various stakeholders and enhances the resilience of our common internet structure. And I think fragmentation of the internet poses a danger to the actualization of universal human rights, international trade, and global geopolitical stability. But I think this thing, so how we can reach everybody and do good for everybody, is that we have to of course address the digital divide. And the majority of the world’s population do not yet have meaningful and safe access to the internet, which requires urgent action. And bridging this digital divide is not only about affordable connectivity, it requires also investments in skills and competencies and respect for human rights and fundamental freedoms online.
And we want to develop new technologies and the internet by respecting democratic values and principles. Still I think the WSIS 20-year review should highlight the need to focus on trusted connectivity and the free, open, global, interoperable and secure internet. And the review is also an important opportunity to renew and strengthen the IGF mandate, including by ensuring a more sustainable financial basis from the regular UN budget that such a global, inclusive effort deserves and needs. So to conclude, I think we are doing a lot.


Theresa Swinehart: Yeah, I would agree. And thank you for your observations. For an IGF, one needs a sustainable budget. One needs to be able to bridge all stakeholders around the globe and be as inclusive. So I think those are important elements coming into the WSIS Plus 20 aspect. Fiona, from your perspective, you’ve had quite a bit of experience in this as well. What areas and where is the input most needed before the WSIS Plus 20 outcome is finalized? And how can we make sure that those inputs reflect a diverse global perspective?


Fiona Alexander: Thank you, Theresa, and thanks for the invitation and for the question. And I think it’s important to keep in mind, you kind of laid out a lot of the stuff that’s going to be happening across the UN system this year, and it’s a lot of activity. But at the end of the year in December, member states are going to adopt a resolution, and the resolution is going to decide whether or not the IGF continues, and it’s potentially going to update the WSIS action lines, and it’s going to talk about the GDC. And a process that’s been kicked off, there’s been co-facilitators appointed to kind of work with stakeholders and governments to kind of put the base documents together, and those co-facilitators have had some stakeholder consultations, they had a government consultation, which is broadcast on Web TV, or UN Web TV if you wanted to watch it. And then they also have issued an elements paper this past Friday that came out. And I think, you know, there’s a litany of issues, there are substantive issues that are going to be covered that are important to a wide range of the civil society and non-government stakeholders, whether it’s getting people connected, whether it’s ensuring human rights online, whether it’s dealing with AI and digital governance issues. 
But I think there’s one issue set that I think people are sort of united around from the civil society perspective, and that’s making sure that, you know, what I at least personally think is one of the biggest hallmark achievements of the WSIS process is opening up the conversation so that all stakeholders are considered. And you know, we saw last year, unfortunately, in the Global Digital Compact process, the GDC process, that the systems in New York are not nearly as open as UNDP or ITU or other expert agencies of the UN have become. They originally weren’t 20 years ago either, right, and they’ve made a lot of effort to do that. And so I think the challenge that we have before us this year is to make sure that the process that’s going to unfold this year has a real voice and gives real space for people to provide input into that process. And the two co-facilitators and the agenda they’ve laid out and the timeline and the schedule seems to be allowing that, which I think is a great outcome and a great improvement. There was a group of stakeholders originating in civil society and others that signed on that have submitted a couple of letters giving some very specific recommendations and suggestions for how to allow for that engagement. And so far, we’re seeing positive movement in that, so I think that that’s good. But we shouldn’t take that for granted, and I think people should always be pushing to make sure that everyone’s in the room and everyone has a say. I think conversations here at IGF are helpful, conversations that are going to be happening at UNDP, that are going to be happening at ITU in a few weeks in Geneva. These are all going to be great things. But these conversations that the co-facilitators are going to lead and actually having conversations that are not just people giving empty statements and actually seeing what they say and provide reflected in the documents and discussed are going to be an important next step. 
And I think that’s what we want to make sure happens going forward for the rest of this year until we get to that final decision point where governments adopt a resolution in December.


Theresa Swinehart: And I can only echo that we’ve come a long way from 20 years ago. These kinds of panels, these kinds of discussions, were not normal conversations that we would usually have. And so I think these opportunities of inclusivity of all stakeholders and subject areas have led to some really core results. Which brings us to the second section of this session about where WSIS outputs have shown an impact. We’ve talked about how WSIS has produced different frameworks and dialogues and some tangible changes, including stakeholders at the table discussing subject areas and providing factual subject expertise into different conversations. Which outputs, though, have been effective and which ones still need support is an important part of this conversation. So with that I’m going to turn it over to Kurtis, actually, about where WSIS-related initiatives such as DNSSEC or internationalized domain names have demonstrated lasting value, particularly for the underserved regions, and where are there still gaps? Where can we still do some work?


Kurtis Lindqvist: Thank you, Theresa. I think you’re right: during the WSIS and IGF processes there has been a lot of talk about where the internet has been, where there was development, and which gaps needed to be filled, such as security in the domain name system and universal access. DNSSEC is a very concrete success story. Weaknesses in the domain name system were identified globally, and they were addressed through work in the IETF to create the technical standards, through capacity building and community building, and through multi-stakeholder awareness raising and adoption via the IGF. This wasn’t developed by top-down mandates; it came from the realization in the technical community, from the work here and elsewhere, that this was a problem that needed to be addressed to create reliability and trust in the system, and it was solved through persistent cooperation and engagement. Accessibility also includes linguistic accessibility, and that includes the DNS. We have seen this evolve over the years, with standardization done through the IETF, and today we have a technical system in IDNs that extends the domain name system to non-Latin scripts such as Arabic, Cyrillic, and Chinese, as well as extended Latin scripts like Swedish, my native language. We also see a lot of work being pushed into universal acceptance, so that we go beyond the domain name system and start talking about applications supporting these scripts in a way we haven’t had before. The technical standards are done; we have them. The universal acceptance part is to get them adopted and really used in all the world’s applications, again something that has been discussed and pushed through awareness raising in the IGF sessions and in the WSIS context. We as ICANN have done a lot of work and outreach around this as well.
I think this showcases the last point a little, about where there are still gaps. The gap is not necessarily about capability, because we have the technical capability; it’s about ongoing alignment and work, so that we build on these capabilities, make them accessible everywhere, and continue the coordination of rolling out DNSSEC among operators around the world. We also see that where this cooperative effort fails or stagnates, that’s when we start seeing fragmentation, which is a bad thing, and a lack of progress. WSIS Plus 20 really needs to draw a clear distinction here: the technical solutions work when they are supported by the long-term coordination we have seen as part of the IGF and the WSIS process. So the bottom line is this: the WSIS outputs so far have delivered a lot of positive examples of the multi-stakeholder model being put into practice and concrete action, and WSIS Plus 20 should preserve and protect this, because this is what has been delivering. If we start upsetting this, we risk losing that continued delivery. As we heard from the panel in the first section, we should not try to bypass this, but build on what we have to ensure continued deployment.
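The universal acceptance gap described here can be seen in miniature: the DNS itself carries only ASCII labels, so internationalized domain names are encoded with Punycode, and it falls to applications to accept the Unicode form and decode the ASCII form for display. A minimal sketch using Python's built-in `idna` codec (note this codec implements the older IDNA 2003 rules; registries today generally follow IDNA 2008, available via the third-party `idna` package):

```python
# Minimal sketch: how an internationalized domain name (IDN) maps to
# the ASCII form the DNS actually carries, and back again.
# Python's built-in "idna" codec implements IDNA 2003; modern registries
# use IDNA 2008 (third-party `idna` package), but the principle is the same.

unicode_name = "münchen.example"            # what a user types
ascii_name = unicode_name.encode("idna")    # Punycode form seen by the DNS
print(ascii_name)                           # b'xn--mnchen-3ya.example'

# "Universal acceptance" means applications accept the Unicode form
# everywhere and decode the ASCII form back for display:
print(ascii_name.decode("idna"))            # münchen.example
```

The `.example` domain is, of course, only a placeholder; the point is that the standards layer (Punycode, the `xn--` prefix) is solved, while consistent handling in applications is the remaining work.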


Theresa Swinehart: Thank you. And that really ties into the first part, making sure we bridge into the Global South.


Lacina Kone: Thank you very much, Theresa, for the question, and thank you for inviting Smart Africa. Smart Africa is a coalition of over 40 African governments, partners, and civil society organizations dedicated to transforming Africa into a single digital market. Together we represent over 1.1 billion people, united around one vision: an integrated, sovereign, and inclusive digital Africa. Our mission is to convert WSIS commitments into measurable progress. Over the past decade we have drawn three core lessons. Number one: continental ownership creates leverage. With direct endorsement from African heads of state, digital development has become a political imperative for all nations. Flagship priorities like the broadband strategy developed together with Senegal, digital identity with Benin, cybersecurity with Côte d’Ivoire, and cloud infrastructure are championed nationally and regionally, reflecting a shift from fragmented projects to a shared continental ambition. Number two: policy frameworks must deliver real results. Through initiatives like the Smart Africa Trust Alliance, which provides digital ID interoperability among nations, the Smart Africa Digital Academy for capacity building, the Smart Africa Backbone, which calls for every single country in Africa to be interconnected with at least two of its neighbors, and other relevant initiatives, we have moved from policy ideas to operational pilots and regional implementation. These are not just future aspirations; they are working models. Number three, and the last one: multi-stakeholder cooperation works if it is rooted in African needs. Through our digital scholarship fund, created back in 2018, we have trained more than 90 students in a master’s degree in digital transformation at different universities in Africa.
And our data governance framework, which is running in Senegal, Ghana, and elsewhere, and our harmonized regulatory blueprints are developed not just with governments but also with civil society, academia, and the private sector, both African and global. So WSIS gives us the vision; Smart Africa is building the bridge.


Theresa Swinehart: Thank you. That’s so well articulated, I have to say, between the continental ownership, the ambition, but also importantly taking policy and how does one operationalize that in a way that’s rooted in the local community needs, which are very distinct from different communities to each other. So we’re going to run a little experiment here, we’re going to see how this works. Ah, it did work, we have a slide up, fantastic. So much of this work has involved contributions across different sectors, including the private sector. And what we’d like to do is just get a quick read from the room, including the Zoom attendees, which WSIS-related initiatives do you think need greater support today? So from the audience, I’m going to ask for a show of hands, and Becky, I think you’re going to work magic in the Zoom room, is that right? Something to that effect? Yes, the poll’s up in the Zoom room, so I’ll let you know. So for the first question, the IGF, so the question is, does it need greater support? So first one, the IGF is a globally accessible platform for dialogue. Is that a yes? Okay. Sense of the room? Very good, okay, excellent. And online so far that has 8%. 
In the Zoom room that one has eight percent. Oh, wonderful, okay, very good, let’s keep track of this. For the second one: digital capacity building in under-resourced regions. This one just pulled ahead at 41%. Okay, excellent, thank you. Open technical standards and cross-border interoperability. Okay, a little less, maybe because it’s working better than it was, and zero percent online. Zero percent, okay, that’s interesting. I wonder whether 20 years ago that might have been a different result; maybe it’s a demonstration of how things have evolved over time. Universal acceptance, including support for multilingual internet infrastructure. Okay, yes, a bit, very good. And online this is our second runner-up at 35%. Oh my gosh, okay, very good. And for the last one: cybersecurity collaboration through multi-stakeholder approaches. Okay, very good, 12 percent. 12 percent? Oh, 16 percent, okay. That’s very interesting. Some of these areas 20 years ago, or 15 or 10 years ago, might have had a different result, but it certainly does show where greater support is needed today, and also the cross-sectoral and regional aspects we want to take a look at. Fantastic. So going into the next section, we’re going to focus on what can still be done before December, December being when the negotiations happen in New York, conversations that will hopefully be as inclusive as possible. What work do we need to undertake to demonstrate the value of what needs to be done next? How can government, civil society, and the technical community really help shape that? I’m going to turn to you first for this: what are some of the most critical inputs or messages that governments should be prioritizing now, particularly from those regions that have not traditionally been represented in these global internet policy discussions? You touched on that earlier in your remarks, and we’ve certainly heard about it from the others.


Jarno Suruela: Yeah, I think we have to focus on the opportunities there and try to give the positive messages as well. There are so many things that are quite obvious to us, but we should still keep repeating them. For example, as we know, digitalization accelerates progress towards the sustainable development goals. Digitalization is also a means to strengthen the economy, mobilize domestic resources, increase private investment, improve citizens’ welfare, and increase gender equality. Digitalization has also proven to accelerate the clean transition. But we have to talk about the other side as well. Coming back to the digital divide, it remains a wide concern: 2.6 billion people lack access, and there are significant disparities between nations. The gender gap remains a significant concern, hindering women’s position in the digital economy. For Finland, that is one of the top priorities, and we are working with different organizations in this sense. Addressing the current digital divide will help us get back on track on Agenda 2030 and the majority of SDG targets. Globally, of course, we are still quite far from reaching the target of universal connectivity as set out by Agenda 2030. At the national level, especially in developing countries, significant gaps remain between urban and rural areas, and as I said, the gender digital divide is still wide. A strong focus on trusted connectivity and a free, open, global, interoperable, stable, and secure Internet, underpinned by multi-stakeholder Internet governance, is essential in the review, and these are important messages. The action lines’ perspectives on new technologies that are essential for the development of the information society should also be considered more comprehensively than before.
This includes AI, high-performance computing, quantum technology, semiconductor technology, mobile networks, and photonics, for example. Quite a number of items are still on the table.


Theresa Swinehart: A lot of items, and new subject areas as well. Thank you, that’s very helpful. Fiona, from your perspective, and in light of what we’ve heard from a government perspective, what advice would you offer to smaller or under-resourced organizations aiming to participate? And how might they liaise with their governments to help inform those conversations?


Fiona Alexander: Sure, I think it’s a good question. First, I wanted to comment on the poll, because I was struck, if I read the results correctly, that the capacity building option got the strongest vote both online and in the room. I found that really interesting, because in my recollection the original idea for WSIS came from a 1998 ITU Plenipotentiary resolution, and development and connectivity were the basis of that resolution. It was the basis for calling for the summit in the New York process, and it undergirds the entire five years of the WSIS process. I think it’s great that that continues, and it’s important that we keep it in mind. As a former government official, I know we can spend a lot of time arguing about words in a room, but at the end of the day this is about getting people connected and enabling people to do the things you’re all doing. So I found the poll really interesting. To answer your specific question: as I said, in the process unfolding through the rest of the calendar year, the co-facilitators have so far been making great efforts to give people space to have a say. They held a stakeholder session, and they put out an elements paper with comments due July 15th. I cannot believe something has happened so quickly in the UN system, but when I listened to the government stakeholder session, there was a proposal from the EU, originally a Swiss proposal, for the creation of an informal multi-stakeholder feedback group of some kind; I’ll get the exact name wrong. That was just a few weeks ago, and they’ve already announced they’re doing it. I have literally never seen something happen that fast. So there’s going to be a group of stakeholders that the co-facilitators will run things past and get input from.
So once that group is announced, I would encourage people to find the people in their stakeholder group who are on it, because talking to those people can be helpful. Talking to your own government back home, to understand what they’re doing, how they’re going to participate, and how you can inform their process, is equally important. I am more optimistic than I was last year: it looks like in this year’s process there will be opportunities to provide your own input, whether online, through written submissions, or by working with like-minded groups. So I would encourage everyone to take advantage of that and keep pushing for more, because if you don’t push, it doesn’t happen. Don’t accept the status quo: push for more, but you have to actually show up and participate. I would encourage everyone who cares about these issues to do that at all the events unfolding over the course of the year.


Theresa Swinehart: It’s a really good point not to take it for granted. It took a lot of work by a lot of governments and stakeholders to get to this point, so take advantage, don’t miss these opportunities, and let’s keep that going. Lacina, as we approach the final phase of the WSIS Plus 20 review, what are the most pressing infrastructure and policy gaps that should still be addressed, particularly from your perspective on regional development, digital development, and coordination in Africa? You touched on some really core operational aspects in your introductory remarks, so we’d love to hear a little more about that.


Lacina Kone: Thank you very much, Theresa. If I had to identify the most pressing regional infrastructure and policy gaps that WSIS Plus 20 should address: first of all, if I could rewind the tape back to 1999, when WSIS was being conceived, when they talked about connectivity I would have said meaningful connectivity, because connectivity alone led us to an affordability challenge in Africa as well. WSIS Plus 20 really must address four persistent gaps that constrain Africa’s digital future: meaningful connectivity, regulation, skills, and sovereignty. I will start with meaningful connectivity. Too many African countries still depend on external routes for local internet traffic. We must complete the regional backbone, increase IXPs, which is very important, and reinforce initiatives like the Smart Africa Backbone, which requires every single country to be connected to at least two of its neighbors, as well as the One Africa Network. We must also focus on affordability, because the usage gap in Africa today is over 40%, which means the telecommunications infrastructure is available but people are not using it, for four reasons: one, affordability; two, local content; three, capacity building, because they don’t really understand it; four, cyber hygiene. Number two, regulatory harmonization. Africa has over 50 digital laws but little interoperability, and I was very surprised in the poll that interoperability drew only around 10%. Investors and innovators need clarity; no one likes unpredictability. So WSIS Plus 20 should champion continental legal convergence through agile, rights-based frameworks aligned with Africa’s internet governance blueprint. Number three, capacity and inclusion. Less than 10% of adults in several African countries possess basic digital skills.
And now it’s not enough to be digitally savvy; you can still be AI ignorant. That’s the reality. You could have a PhD, but if you have not adapted to AI, we have another gap: the AI gap. So through the Smart Africa Digital Academy and the Smart Africa Scholarship Fund, we are investing in both grassroots literacy and high-level technical training, including AI, cybersecurity, and quantum readiness. Number four, and the last one: sovereignty and institutional coordination. Africa must govern its own data and digital assets. Initiatives like the Smart Africa Trust Alliance and the Council of Africa Internet Governance Authority are essential to asserting regional leadership and ensuring that global governance reflects African realities. WSIS Plus 20 must not only review past gaps but equip regions like Africa to co-lead the next decade of digital governance: inclusive, secure, and sovereign. Thank you.


Theresa Swinehart: Thank you so much. I love your four very concrete areas; that’s very thoughtful and a good way to go into the conversations. Kurtis, what contributions should the technical community bring forward, whether through data, case studies, or coordination examples, to ensure that the WSIS Plus 20 outcome reflects how the internet actually works?


Kurtis Lindqvist: We often talk about the IGF and WSIS Plus 20 and what has been achieved, and hidden in that, I think, we forget quite a bit of what actually has been achieved, all the success stories. Maybe, to what my colleagues just said about the poll and the low vote for global standards, we’ve simply been a little too successful: the internet has been a phenomenal success because it’s built on global standards produced through multi-stakeholder processes, and we take that for granted. That’s a success story we should talk about more, because the reason you can do all this is exactly the globally interoperable standards that enable it. The flip side is that it also makes us forget, to a large extent, what happens when the opposite occurs, when we see fragmentation and the devaluing of access to the internet through silos, something we actually built away from. I’m old enough to have been here before that, when we had the silos; the internet unified them and created a much more valuable network for everyone, one that allowed this whole digital economy to flourish, built upon these standards. I think we also forget what the multi-stakeholder model and the WSIS outcomes have meant operationally, in what has been deployed. We talked about underserved regions, and we have seen a lot of build-out of IXPs, as the Director General mentioned; Africa has seen a phenomenal explosion in IXPs in the last 20 years. That doesn’t mean there isn’t more to be done, but we have come quite a long way, and the rest of the world as well. We have also seen root server instances. One of the things we talked about in the early IGFs was how to build more resilient and better infrastructure in underserved regions.
In doing so, we have deployed literally thousands of root server instances around the world to make sure the Internet is stable, better, and more secure in those regions. ICANN is one of these operators; there are today almost 2,000 of these instances around the world, and we operate 240 of them in 70 countries. We can show how this has improved performance: for example, one of the data points we have is that when we deployed this in Egypt, traffic became much more localized. The same happens as IXPs are deployed: traffic becomes localized, benefiting both the resilience and security of the network and user performance, and again stimulating the local economy. For all countries, G7 or otherwise, this infrastructure really matters, and it has been a showcase of what we have established so far. As we heard, there is more that can be done, but we tend to forget what we have already achieved; this has really improved how the Internet works. As I mentioned before, the multilingual support and universal acceptance work has come a very long way; again, there is much more to do, but it has been delivered. We should celebrate these success stories and highlight what we have done. ICANN, together with the Internet Society, produced a paper on the footprints of 20 years of the IGF, which summarizes the achievements that the technical community and business have delivered over these years. We have maybe not done enough reflecting on our own successes and talking about them. That said, I also think what we should carry forward is that language matters, and clarity in language is very important. We hear talk about sovereignty or control, two words that can mean very different things to different people. If I or someone in the technical community hears them, we probably think we are talking about fragmentation.
To others these words might have other meanings, but we have to be clear on what we are trying to achieve, what the ultimate goal is, and what the risks are if we break this. The technical community has been engaged since the early WSIS process, before the IGF, to provide input and safeguards, to explain the consequences of decisions, and to ensure we prevent breakage, siloing, or fragmentation of exactly the Internet values that created all this economic growth over the last 20 years. We really need to reinforce this success record and show that it is these fundamental principles, which the technical community has highlighted, safeguarded, and explained over 20 years, that are at the core of this and continue to provide value. We need to keep making these points between now and December and really reiterate how important they are for the success of the internet in the future, just as they have been for the past 20 years.


Theresa Swinehart: Thank you. I think those are some really concrete examples of where we can actually share those stories. Yuping, I know you also wanted to offer your thoughts on this, so please go ahead.


Yu Ping Chan: I asked Theresa to give me the floor because I’m going to take off my UNDP hat and say this with my former-UN-diplomat hat on. There’s a big gap, and forgive me, all the other MFA diplomats in the room who are from New York, between the New York community and those of us gathered in this room who have been working on these issues for a long time. When you put resolutions to the General Assembly, there is a tendency to oversimplify, to stick in compromise language without really understanding its implications, and to put a political context on terms that are otherwise accepted and understood by the technical community. For instance, Kurtis mentioned sovereignty and control. In the UN context those have very loaded meanings; even the use of those terms would be debated, and shadows would be seen in that terminology that could then have implications for the negotiations. So my appeal to all the stakeholders engaging in this conversation, and I really applaud the fact that there have been so many good, concrete ideas that we are united behind in bringing to the WSIS review, is to keep it clear, simple, and actionable when it comes to the UN processes, and to word it in language that diplomats already use, so that they understand. The reason capacity building drew such a high vote, for instance, is that it is language now pervasive in the UN around sustainable development and equipping countries of the global majority with that kind of ability.
So the way we’re starting to frame this conversation, as something that appeals to the countries of the world in a united, collective effort around digital, rather than an approach that might be seen as a bit more divisive, is very important. I would also say there seems to be a tendency to write off WSIS as dusty and out of date, as though new developments must now supersede it. But precisely as Fiona said, the fact that we’re still talking about capacity building 20 years on is a testament to how enduring these principles are, and what we need to make clear is that for the next 20 years these foundations remain as valid as they have been in the past. I would also say, and here I put my UN agency hat back on, that many of you are aware of the conversations around the UN system; it’s a very difficult time for all of us, and in this moment of difficulty, the greater the guidance you can give the system to double down on what has worked and to focus on delivery and impact, particularly for the communities we serve, the better. So that would be my ask of you, as a UN agency. For instance, very concretely, and following my own recommendation to be clear and simple: in the elements paper the WSIS co-facilitators have put forward, there is no reference to the high-level segment of the WSIS Forum occurring in a couple of weeks, which to us UN agencies, ITU, UNDP, UNESCO, and UNCTAD, has been a cornerstone of the success of the WSIS action lines. We would ask that this be reflected in the WSIS review and in the elements paper as well. Very concrete recommendations on how the UN system should continue implementing these areas of work, and guidance from member states, will be particularly important.


Theresa Swinehart: Thank you so much; that almost led into the closing part, but I don’t want to skip the opportunity to open it briefly if there are any questions from the audience. Becky, I don’t know if you have anybody in the Zoom room, or if anybody here wants to ask a question or share their own observations about what would be useful? No? Okay, we can jump to the closing. But we’re not letting our panelists loose yet. Yuping sort of kicked it off already, but I’m going to ask each panelist to offer one specific step their community should take to help shape a globally useful outcome of WSIS Plus 20. By useful, it could be anything practical or pragmatic; we’ve already heard that conversations in New York are often different from conversations outside of New York, so those are some examples. Panelists, if you want the audience to walk away with one action, one thing, what would it be? I’ll start with you, Jarno, from a government perspective.


Jarno Suruela: Well, obviously I have to focus on the political aspects then. I think a major challenge in this process is how to keep digital development inclusive, from the regional level to the global level, and how to maintain the development orientation of technology and its focus on human rights. This is a key challenge which all stakeholders need to address.


Theresa Swinehart: Thank you. I hope everybody heard that and wrote it down. Fiona, over to you. What would you take as an action for everybody?


Fiona Alexander: Sure. There are lots of different things we could all point to, but if I’m looking for one very specific thing, I would say to people in the different stakeholder groups: continue to demand your seat at the table, and then actually show up when you get it. We’ve seen some progress this year, probably still not enough, but some small steps. So we should acknowledge that, show up, take advantage of it, and continue to push for the truly multi-stakeholder environment we want to see.


Theresa Swinehart: Thank you. Kurtis, from the technical side.


Kurtis Lindqvist: I think the technical community really must continue to contribute evidence: tangible implementation and coordination outcomes, and data that shows what has worked. That’s our responsibility to bring to the table. And this can’t happen in isolation, as you just heard: governments and civil society need to bring their inputs forward, rooted in experience and not just declarations, with tangible examples of what works. This will only succeed if the outcomes reflect the systems that are already making the Internet work. We don’t need new structures; we need continued collaboration and a clear commitment to the model that has actually delivered.


Theresa Swinehart: Thank you. Those are really important points, and we need to show that very clearly. Yuping, you shared some observations, but I suspect you might have some more.


Yu Ping Chan: This is a tricky moment for me, because on behalf of the UN system I have to reiterate that we are guided by the member states. So I look to the member states, but I also ask that the member states listen to the stakeholder community. Because, as I’ve said before in a number of discussions, multistakeholderism is not natural to the UN system itself. It is a multilateral organization, and multistakeholderism, the way it has developed over 20 years and the way we hold discussions here at the IGF, is not how New York does things. Perhaps New York needs to adapt to that, but exactly as Fiona says, showing up, saying this, and demanding that accountability over and over again is how we make the changes. Truly, the fact that we now see changes in the way the WSIS co-facilitators are approaching multistakeholderism is a testament to the fact that many people showed up through the GDC process and are now showing up more than ever before.


Theresa Swinehart: Thank you. And Lacina, from your perspective?


Lacina Kone: From my perspective, multipolarity is a geopolitical fact of our time, undeniable and irreversible, but multilateralism is a choice. It is a conscious decision to cooperate across divides, to build trust among ourselves, and to shape a fairer global order, not despite our differences but because of them.


Theresa Swinehart: That's very well said. Thank you. Not only is the light flashing red, telling me it's time to wrap up, but we've heard a wide range of perspectives and observations on what has been transformative over the past 20 years, and also on where we have gaps and where we need to go. The question we wanted to pose to you is: what would help you and your organization in WSIS Plus 20 before December? I would ask that you walk away from this conversation thinking about that. I encourage you to sign up for resources that can help inform you — for example, we have a WSIS Plus 20 outreach mailing list with updates. Everybody's sharing different information, and there are other dialogues happening. Engage in those. Provide your input and provide your data. As has been reiterated here, the process is open. Take those opportunities, engage and participate. Share your stories and observations — where you've seen pragmatic results, but also where there are gaps and where we can work to improve things. So with that, I thank everybody for joining and participating, and let's make WSIS Plus 20 a successful conversation and outcome. Thank you, everybody.


Yu Ping Chan

Speech speed: 206 words per minute
Speech length: 1572 words
Speech time: 457 seconds

WSIS Plus 20 is the second review by UN General Assembly of 2003-2005 outcomes, examining if action lines still suffice for current digital developments

Explanation

Yu Ping Chan explains that WSIS Plus 20 represents the second comprehensive review conducted by UN member states of the original WSIS summit outcomes from Geneva and Tunis. The review will assess whether the established WSIS action lines remain adequate to address the breadth of current global digital developments and discussions.


Evidence

References to the 2003-2005 Tunis and Geneva outcomes of the WSIS summits and the question of whether WSIS action lines still cover current digital developments


Major discussion point

WSIS Plus 20 Process and Mandate


Topics

Legal and regulatory | Development


The process involves multiple UN agencies drafting reports and stakeholder consultations, with member states shaping the eventual review

Explanation

The WSIS Plus 20 process is complex and involves coordination between various UN agencies including ITU, UNESCO, UNDP, and UNCTAD in drafting Secretary-General reports. The process includes extensive stakeholder consultations through forums like IGF and other conferences, with member states ultimately determining the final review outcomes.


Evidence

Mentions of ITU, UNESCO colleagues having consultation processes, ongoing consultations at IGF, Paris Summit, and high-level WSIS event convened by ITU, UNDP, UNESCO and UNCTAD


Major discussion point

WSIS Plus 20 Process and Mandate


Topics

Legal and regulatory | Development


Multi-stakeholder approach remains at the heart of WSIS and must be maintained in the review process

Explanation

Yu Ping Chan emphasizes that the multi-stakeholder community commitment and network development has been crucial to WSIS’s enduring success. She stresses the importance of continued stakeholder engagement to maintain the multi-stakeholder approach that has been fundamental to WSIS from its inception.


Evidence

References to the commitment of the multi-stakeholder community and network that has developed, and the importance of stakeholders continuing to be engaged in the process


Major discussion point

Multi-stakeholder Engagement and Inclusivity


Topics

Legal and regulatory | Development


Agreed with

– Jarno Suruela
– Fiona Alexander
– Lacina Kone

Agreed on

Multi-stakeholder approach is fundamental to WSIS success and must be preserved


There’s a significant gap between the New York diplomatic community and technical communities working on these issues

Explanation

Speaking as a former UN diplomat, Yu Ping Chan identifies a major disconnect between the New York UN community and the technical communities that have been working on internet governance issues. She notes that New York tends to oversimplify complex technical issues and apply inappropriate political contexts to technical terms.


Evidence

References to the tendency to oversimplify, to fall back on compromise language, and to put political context on terms that are otherwise accepted by the technical community, with examples of sovereignty and control having loaded meanings in the UN context


Major discussion point

Language and Communication Challenges


Topics

Legal and regulatory


Disagreed with

– Kurtis Lindqvist

Disagreed on

Terminology and language interpretation in UN processes


Stakeholders should use clear, simple, actionable language that diplomats already understand to avoid misinterpretation

Explanation

Yu Ping Chan recommends that stakeholders frame their contributions in language that UN diplomats are already familiar with to prevent misunderstandings. She suggests using terminology that is already pervasive in UN sustainable development discussions to ensure better comprehension and acceptance.


Evidence

Example of capacity building drawing high votes because it’s language pervasive in UN sustainable development contexts, and recommendation to word things in language diplomats are already using


Major discussion point

Language and Communication Challenges


Topics

Legal and regulatory | Development


Agreed with

– Kurtis Lindqvist

Agreed on

Language and communication clarity are crucial for effective UN processes


Member states must listen to stakeholder communities while stakeholders demand accountability

Explanation

Yu Ping Chan emphasizes the dual responsibility in the WSIS process: the UN system is guided by member states, so member states need to listen to stakeholder communities, while stakeholders must continuously demand accountability and show up to participate. She notes that multistakeholderism is not natural to the UN system but can be achieved through persistent engagement.


Evidence

Reference to multistakeholderism not being natural to the UN multilateral system, and the fact that changes in WSIS co-facilitators’ approach to multistakeholderism resulted from people showing up through the GDC process


Major discussion point

Action Steps and Recommendations


Topics

Legal and regulatory | Development


Agreed with

– Fiona Alexander

Agreed on

Stakeholders must actively participate and demand their seat at the table


Jarno Suruela

Speech speed: 113 words per minute
Speech length: 767 words
Speech time: 406 seconds

The Global Digital Compact and WSIS are highly complementary and should be implemented in sync

Explanation

Jarno Suruela argues that the recently concluded Global Digital Compact and WSIS frameworks work well together and should be coordinated in their implementation. He believes this synchronization will better position the UN to foster multi-stakeholder cooperation on digital matters and leverage digital technologies for sustainable development.


Evidence

Reference to the recent Global Digital Compact and belief that GDC and WSIS should be implemented in sync to guarantee everybody will be on board


Major discussion point

WSIS Plus 20 Process and Mandate


Topics

Legal and regulatory | Development


Governments must ensure all regions, not just traditional actors, are meaningfully included in WSIS Plus 20

Explanation

Suruela emphasizes the need for governments to work with diverse actors and organizations to ensure broad regional representation in the WSIS Plus 20 process. He stresses that meaningful inclusion goes beyond traditional participants to encompass underrepresented regions and communities.


Evidence

References to Finland doing a lot together with different actors, organizations and mechanisms, and supporting IGF as one of the all-time top contributors


Major discussion point

Multi-stakeholder Engagement and Inclusivity


Topics

Development | Legal and regulatory


2.6 billion people still lack internet access, with significant gender gaps and urban-rural disparities

Explanation

Suruela highlights the persistent digital divide as a major concern, noting that a large share of the world's population lacks meaningful and safe internet access. He specifically points to gender disparities and differences between urban and rural areas as critical issues requiring urgent attention.


Evidence

Specific figure of 2.6 billion people lacking access, mention of significant disparities between nations, gender gap as significant concern, and gaps between urban and rural areas in developing countries


Major discussion point

Regional Development and Digital Divide


Topics

Development | Human rights


Agreed with

– Lacina Kone

Agreed on

Digital divide remains a critical challenge requiring urgent attention


Digital divide addressing is crucial for getting back on track with Agenda 2030 and SDG targets

Explanation

Suruela connects digital inclusion directly to broader sustainable development goals, arguing that addressing the digital divide is essential for achieving the UN’s Agenda 2030 targets. He emphasizes that digitalization accelerates progress toward SDGs and strengthens economies while improving citizen welfare.


Evidence

References to digitalization accelerating progress towards SDGs, strengthening economy, mobilizing domestic resources, increasing private investments, improving citizens’ welfare and gender equality


Major discussion point

Regional Development and Digital Divide


Topics

Development | Economic


IGF is the primary multi-stakeholder forum for international digital policy, with over 160 national and regional initiatives

Explanation

Suruela positions the Internet Governance Forum as the central platform for multi-stakeholder digital policy discussions at the UN level. He cites the proliferation of national, regional, and youth IGF initiatives as evidence of its success and importance for discussing emerging issues like AI.


Evidence

Specific mention of over 160 national, regional and youth initiatives of the IGF, and reference to it becoming important platform for discussing emerging digital issues such as AI


Major discussion point

IGF Sustainability and Future


Topics

Legal and regulatory | Development


Agreed with

– Yu Ping Chan
– Fiona Alexander
– Lacina Kone

Agreed on

Multi-stakeholder approach is fundamental to WSIS success and must be preserved


IGF needs a more sustainable financial basis from the regular UN budget for its global inclusive efforts

Explanation

Suruela advocates for securing more stable funding for the IGF through the regular UN budget rather than relying on voluntary contributions. He argues that such a global, inclusive effort deserves and needs sustainable financial support to continue its important work.


Evidence

Reference to Finland being one of the all-time top contributors to IGF and encouragement for other actors to step up their support


Major discussion point

IGF Sustainability and Future


Topics

Legal and regulatory | Development


The challenge is maintaining digital development inclusivity from regional to global levels while focusing on human rights

Explanation

Suruela identifies the key challenge as ensuring that digital development remains inclusive across all levels from regional to global while maintaining a focus on human rights principles. He emphasizes the need to keep technology development oriented toward human rights and democratic values.


Evidence

Reference to developing new technologies and the internet by respecting democratic values and principles, and maintaining development orientation of technology with focus on human rights


Major discussion point

Action Steps and Recommendations


Topics

Development | Human rights


Fiona Alexander

Speech speed: 208 words per minute
Speech length: 1376 words
Speech time: 396 seconds

The New York UN systems are not as open as expert agencies, creating challenges for stakeholder participation

Explanation

Alexander points out that the UN systems in New York are significantly less open to multi-stakeholder participation compared to expert agencies like UNDP or ITU. She notes that while these expert agencies have made efforts to become more inclusive over the past 20 years, the New York systems have not evolved similarly, as evidenced in the Global Digital Compact process.


Evidence

Reference to the Global Digital Compact process showing that New York systems are not nearly as open as UNDP or ITU or other expert agencies, and that expert agencies originally weren’t open 20 years ago either but made effort to change


Major discussion point

Multi-stakeholder Engagement and Inclusivity


Topics

Legal and regulatory


Agreed with

– Yu Ping Chan
– Jarno Suruela
– Lacina Kone

Agreed on

Multi-stakeholder approach is fundamental to WSIS success and must be preserved


Co-facilitators have made positive efforts to allow stakeholder input, but continued pressure is needed to maintain access

Explanation

Alexander acknowledges that the current WSIS Plus 20 co-facilitators have been making good efforts to create space for stakeholder participation through consultations and feedback mechanisms. However, she emphasizes that this progress should not be taken for granted and requires continued advocacy to maintain and expand access.


Evidence

References to co-facilitators having stakeholder sessions, elements paper with July 15th comment deadline, proposal for informal multi-stakeholder feedback group happening quickly, and civil society letters with specific recommendations


Major discussion point

Multi-stakeholder Engagement and Inclusivity


Topics

Legal and regulatory


The December resolution will decide whether IGF continues and potentially update WSIS action lines

Explanation

Alexander explains that the resolution member states will adopt in December will be crucial in determining the future of the Internet Governance Forum and may also update the WSIS action lines. This resolution will also address the relationship with the Global Digital Compact, making it a critical decision point for internet governance.


Evidence

Specific mention that member states will adopt a resolution in December that will decide whether IGF continues, potentially update WSIS action lines, and talk about the GDC


Major discussion point

IGF Sustainability and Future


Topics

Legal and regulatory


Stakeholders must continue to demand their seat at the table and actually show up when given opportunities

Explanation

Alexander emphasizes that stakeholders cannot be passive in expecting inclusion but must actively demand participation opportunities and then follow through by actually participating when given access. She stresses that progress requires both pushing for more opportunities and taking advantage of existing ones.


Evidence

Reference to people needing to push because if you don’t push, it doesn’t happen, and emphasis on not accepting the status quo while actually showing up and participating


Major discussion point

Action Steps and Recommendations


Topics

Legal and regulatory


Agreed with

– Yu Ping Chan

Agreed on

Stakeholders must actively participate and demand their seat at the table


Kurtis Lindqvist

Speech speed: 166 words per minute
Speech length: 1566 words
Speech time: 563 seconds

DNSSEC represents a concrete success story of global cooperation addressing DNS security weaknesses

Explanation

Lindqvist presents DNSSEC as a prime example of successful multi-stakeholder collaboration in addressing identified internet infrastructure vulnerabilities. The solution emerged through technical standards development in the IETF, community building through IGF, and multi-stakeholder awareness raising, rather than top-down mandates.


Evidence

Reference to DNSSEC being globally identified as addressing weaknesses in the domain name system, developed through IETF technical standards, and implemented through persistent cooperation and engagement


Major discussion point

Technical Infrastructure and Standards Success Stories


Topics

Infrastructure | Cybersecurity


Internationalized Domain Names (IDNs) have evolved to support non-Latin scripts, improving linguistic accessibility

Explanation

Lindqvist explains how IDNs have been developed to support domain names beyond basic Latin characters, covering scripts such as Arabic, Cyrillic and Chinese as well as Latin-script languages with extended characters, such as Swedish. This technical advancement addresses linguistic accessibility in the DNS system and is being extended through universal acceptance work to ensure application support.


Evidence

Specific mention of IDNs covering Arabic, Cyrillic, Chinese and other non-Latin scripts, as well as languages like Swedish, and ongoing universal acceptance work to get these adopted in all the world's applications


Major discussion point

Technical Infrastructure and Standards Success Stories


Topics

Infrastructure | Sociocultural
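The mechanics behind IDNs can be illustrated with a short sketch (not from the session): internationalized labels travel through the DNS as ASCII-compatible "punycode" strings carrying the `xn--` prefix. The snippet below uses Python's built-in `idna` codec, which implements the older IDNA 2003 rules; production registries follow IDNA 2008 (typically via the third-party `idna` package), and the label shown is a made-up example.

```python
# Round-trip a non-ASCII label through its ASCII-compatible encoding
# (ACE, prefixed "xn--"), the form actually stored in the DNS.
# Note: Python's built-in "idna" codec implements IDNA 2003; real-world
# registries apply the stricter IDNA 2008 rules, which differ for some
# characters.

label = "münchen"  # hypothetical label containing a non-ASCII character

ace = label.encode("idna")      # ASCII-compatible form sent on the wire
roundtrip = ace.decode("idna")  # back to the Unicode label

print(ace)        # b'xn--mnchen-3ya'
print(roundtrip)  # münchen
```

The same round trip is what "universal acceptance" work asks of applications: treating the `xn--` form and the Unicode form as the same name everywhere a domain is entered or displayed.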


Africa has seen phenomenal growth in Internet Exchange Points (IXPs) over the past 20 years

Explanation

Lindqvist highlights the significant expansion of internet infrastructure in underserved regions, particularly noting the dramatic increase in IXPs across Africa. This development has improved local internet traffic routing and contributed to better network performance and economic benefits.


Evidence

Reference to Africa seeing a phenomenal explosion in IXPs in the last 20 years, though acknowledging there is more to be done


Major discussion point

Technical Infrastructure and Standards Success Stories


Topics

Infrastructure | Development


Root server instances have been deployed globally, with almost 2,000 instances improving internet stability and performance

Explanation

Lindqvist provides specific data on the global deployment of root server instances as an example of successful infrastructure development in underserved regions. ICANN operates 240 instances across 70 countries, with concrete examples of improved performance, such as traffic localization in Egypt.


Evidence

Specific numbers: almost 2,000 root server instances worldwide, ICANN operates 240 of them in 70 countries, example of traffic becoming localized in Egypt when deployed there


Major discussion point

Technical Infrastructure and Standards Success Stories


Topics

Infrastructure | Development


Language matters in UN processes, with terms like “sovereignty” and “control” having different meanings to different communities

Explanation

Lindqvist warns that terminology used in international negotiations can be interpreted very differently by various stakeholders. Words like “sovereignty” or “control” might suggest fragmentation to technical communities while having different meanings for others, requiring careful attention to language clarity.


Evidence

Specific examples of sovereignty and control as words that can mean very different things to different people, with technical community potentially interpreting them as fragmentation


Major discussion point

Language and Communication Challenges


Topics

Legal and regulatory


Technical community must continue providing evidence and tangible implementation data showing what works

Explanation

Lindqvist emphasizes the technical community’s responsibility to contribute concrete evidence, coordination outcomes, and data demonstrating successful implementations. He stresses that this evidence-based approach must be collaborative with governments and civil society, focusing on systems that already make the internet work rather than creating new structures.


Evidence

Reference to bringing tangible examples of what works, outcomes reflecting systems already making the Internet work, and not needing new structures but continued collaboration


Major discussion point

Action Steps and Recommendations


Topics

Infrastructure | Legal and regulatory


Lacina Kone

Speech speed: 132 words per minute
Speech length: 868 words
Speech time: 394 seconds

Continental ownership creates leverage, with African heads of state making digital development a political imperative

Explanation

Kone explains that Smart Africa’s direct endorsement from African heads of state has elevated digital development to a political priority across the continent. This high-level political commitment has transformed digital initiatives from fragmented projects into a unified continental ambition with regional perspective.


Evidence

Reference to direct endorsement from African heads of state making digital development a political imperative for all nations, and shift from fragmented projects to shared continental ambition


Major discussion point

Regional Development and Digital Divide


Topics

Development | Legal and regulatory


Policy frameworks must deliver real results through operational pilots and regional implementation, not just aspirations

Explanation

Kone emphasizes that Smart Africa has moved beyond policy discussions to create working operational models and regional implementations. Through initiatives like the Smart Africa Trust Alliance for digital ID interoperability and the Smart Africa Backbone for regional connectivity, they have demonstrated practical policy implementation.


Evidence

Specific examples of Smart Africa Trust Alliance for digital ID interoperability, Smart Africa Digital Academy for capacity building, Smart Africa Backbone requiring every country to connect to at least two neighbors


Major discussion point

Regional Development and Digital Divide


Topics

Development | Infrastructure


Multi-stakeholder cooperation works when rooted in African needs, demonstrated through digital scholarship programs and governance frameworks

Explanation

Kone argues that effective multi-stakeholder collaboration must be grounded in local African requirements rather than external models. Smart Africa’s success in training over 90 students through digital scholarship programs and implementing data governance frameworks in multiple countries demonstrates this approach.


Evidence

Digital scholarship funds training more than 90 students in master’s degrees in digital transformation, data governance framework running in Senegal and Ghana, harmonized regulatory blueprints developed with government, civil society, academia, and private sector


Major discussion point

Regional Development and Digital Divide


Topics

Development | Sociocultural


Agreed with

– Yu Ping Chan
– Jarno Suruela
– Fiona Alexander

Agreed on

Multi-stakeholder approach is fundamental to WSIS success and must be preserved


Four persistent gaps constrain Africa’s digital future: meaningful connectivity, regulatory harmonization, skills development, and digital sovereignty

Explanation

Kone identifies four critical areas that WSIS Plus 20 must address for Africa’s digital advancement. These include moving beyond basic connectivity to meaningful access, harmonizing the continent’s fragmented regulatory landscape, addressing massive digital skills gaps, and ensuring African control over digital assets and governance.


Evidence

Specific mention of usage gap over 40% in Africa due to affordability, local context, capacity building, and cyber hygiene issues; less than 10% of adults in several African countries possessing basic digital skills


Major discussion point

Critical Infrastructure and Policy Gaps


Topics

Development | Infrastructure


Agreed with

– Jarno Suruela

Agreed on

Digital divide remains a critical challenge requiring urgent attention


Africa has over 50 digital laws but little interoperability, requiring continental legal frameworks

Explanation

Kone highlights the fragmentation of Africa’s digital legal landscape, where numerous national laws exist without coordination or interoperability. He argues that investors and innovators need regulatory clarity and predictability, which requires continental legal harmonization through rights-based frameworks.


Evidence

Specific figure of over 50 digital laws in Africa with little interoperability, and statement that investors and innovators need clarity and no one likes unpredictability


Major discussion point

Critical Infrastructure and Policy Gaps


Topics

Legal and regulatory | Development


Less than 10% of adults in several African countries possess basic digital skills, with new AI literacy gaps emerging

Explanation

Kone warns of a compounding skills crisis where traditional digital literacy gaps are being overtaken by AI literacy requirements. He notes that even highly educated individuals can become disadvantaged if they lack AI adaptation skills, creating new forms of digital exclusion.


Evidence

Specific statistic of less than 10% of adults in several African countries having basic digital skills, and example that someone could be a PhD but if not adapted to AI, there’s another gap


Major discussion point

Critical Infrastructure and Policy Gaps


Topics

Development | Sociocultural


Africa must govern its own data and digital assets through regional leadership initiatives

Explanation

Kone emphasizes the importance of digital sovereignty for Africa, arguing that the continent must assert control over its data and digital governance rather than being subject to external control. He points to initiatives like the Smart Africa Trust Alliance and Council of Africa Internet Governance Authority as examples of regional leadership.


Evidence

References to Smart Africa Trust Alliance and Council of Africa Internet Governance Authority (CAIG) as essential for asserting regional leadership and ensuring global governance reflects African reality


Major discussion point

Critical Infrastructure and Policy Gaps


Topics

Legal and regulatory | Development


Multipolarity is a geopolitical fact, but multilateralism is a conscious choice to cooperate and build trust

Explanation

Kone makes a philosophical distinction between the inevitable reality of a multipolar world and the deliberate decision to engage in multilateral cooperation. He argues that multilateralism represents a conscious commitment to work across differences to create a fairer global order that embraces rather than despite diversity.


Major discussion point

Action Steps and Recommendations


Topics

Legal and regulatory | Development


Theresa Swinehart

Speech speed: 149 words per minute
Speech length: 1959 words
Speech time: 784 seconds

WSIS decisions could affect Internet governance for the next decade, requiring practical steps from different communities

Explanation

Swinehart emphasizes that the upcoming WSIS Plus 20 negotiations and decisions will have significant long-term implications for internet governance spanning the next ten years. She stresses the need for various stakeholder communities to take concrete, practical actions between now and December to influence these critical outcomes.


Evidence

Reference to decisions that could affect Internet governance aspects for the next decade and need to look at practical steps different communities can take between now and December


Major discussion point

WSIS Plus 20 Process and Mandate


Topics

Legal and regulatory | Development


Multi-stakeholder panels from technical community, government, civil society and intergovernmental initiatives provide important observations for discussions

Explanation

Swinehart highlights the value of having diverse representation across different sectors in the WSIS discussions. She emphasizes that observations from technical community, government, civil society, and intergovernmental perspectives are all crucial for comprehensive and effective policy discussions.


Evidence

Reference to having a panel from the technical community, from government, civil society and intergovernmental initiatives providing important observations from each of these sectors


Major discussion point

Multi-stakeholder Engagement and Inclusivity


Topics

Legal and regulatory | Development


These kinds of inclusive multi-stakeholder panels and discussions were not normal conversations 20 years ago, showing significant progress

Explanation

Swinehart reflects on the evolution of internet governance discussions, noting that the current inclusive format with diverse stakeholders participating in policy conversations represents a major advancement from two decades ago. This demonstrates how the WSIS process has successfully opened up previously closed policy discussions.


Evidence

Statement that these kinds of panels and discussions were not normal conversations 20 years ago, and that the inclusivity of all stakeholders has led to concrete results


Major discussion point

Multi-stakeholder Engagement and Inclusivity


Topics

Legal and regulatory | Development


The process is open and stakeholders should take advantage of opportunities to engage, participate, and share their stories

Explanation

Swinehart encourages active participation from all stakeholders in the WSIS Plus 20 process, emphasizing that opportunities exist for meaningful engagement. She stresses the importance of not only participating but also sharing concrete examples of both successes and gaps to inform the policy discussions.


Evidence

References to encouraging sign-up for different things, WSIS Plus 20 outreach mailing list, engaging in dialogues, providing input and data, and sharing stories and observations


Major discussion point

Action Steps and Recommendations


Topics

Legal and regulatory | Development


UNKNOWN

Speech speed: 161 words per minute
Speech length: 84 words
Speech time: 31 seconds

Focus should be on political aspects and maintaining digital development inclusivity from regional to global levels

Explanation

The unknown speaker emphasizes the importance of political considerations in the WSIS process and highlights the challenge of ensuring digital development remains inclusive across all levels from regional to global. They stress the need to maintain a development orientation that focuses on human rights principles.


Evidence

Reference to focusing on political aspects and maintaining digital development inclusive from regional to global level with focus on human rights


Major discussion point

Action Steps and Recommendations


Topics

Development | Human rights | Legal and regulatory


Agreements

Agreement points

Multi-stakeholder approach is fundamental to WSIS success and must be preserved

Speakers

– Yu Ping Chan
– Jarno Suruela
– Fiona Alexander
– Lacina Kone

Arguments

Multi-stakeholder approach remains at the heart of WSIS and must be maintained in the review process


IGF is the primary multi-stakeholder forum for international digital policy, with over 160 national and regional initiatives


The New York UN systems are not as open as expert agencies, creating challenges for stakeholder participation


Multi-stakeholder cooperation works when rooted in African needs, demonstrated through digital scholarship programs and governance frameworks


Summary

All speakers strongly emphasize that the multi-stakeholder model has been central to WSIS achievements and must be protected and strengthened in the WSIS Plus 20 process, though they acknowledge challenges in implementation


Topics

Legal and regulatory | Development


Digital divide remains a critical challenge requiring urgent attention

Speakers

– Jarno Suruela
– Lacina Kone

Arguments

2.6 billion people still lack internet access, with significant gender gaps and urban-rural disparities


Four persistent gaps constrain Africa’s digital future: meaningful connectivity, regulatory harmonization, skills development, and digital sovereignty


Summary

Both speakers highlight the persistent digital divide as a major concern, with specific focus on connectivity gaps, gender disparities, and the need for meaningful rather than basic access


Topics

Development | Human rights


Stakeholders must actively participate and demand their seat at the table

Speakers

– Fiona Alexander
– Yu Ping Chan

Arguments

Stakeholders must continue to demand their seat at the table and actually show up when given opportunities


Member states must listen to stakeholder communities while stakeholders demand accountability


Summary

Both speakers emphasize that meaningful participation requires active engagement from stakeholders who must both demand access and follow through with actual participation when opportunities arise


Topics

Legal and regulatory


Language and communication clarity are crucial for effective UN processes

Speakers

– Yu Ping Chan
– Kurtis Lindqvist

Arguments

Stakeholders should use clear, simple, actionable language that diplomats already understand to avoid misinterpretation


Language matters in UN processes, with terms like ‘sovereignty’ and ‘control’ having different meanings to different communities


Summary

Both speakers recognize that terminology and communication approaches significantly impact the effectiveness of UN negotiations, with technical and diplomatic communities often interpreting the same terms differently


Topics

Legal and regulatory


Similar viewpoints

Both speakers emphasize the importance of concrete, operational achievements in internet infrastructure development, with Lindqvist highlighting technical infrastructure successes and Kone focusing on policy implementation that delivers tangible results

Speakers

– Kurtis Lindqvist
– Lacina Kone

Arguments

Africa has seen phenomenal growth in Internet Exchange Points (IXPs) over the past 20 years


Policy frameworks must deliver real results through operational pilots and regional implementation, not just aspirations


Topics

Infrastructure | Development


Both speakers connect digital development directly to broader sustainable development goals and emphasize the importance of high-level political commitment to drive digital transformation

Speakers

– Jarno Suruela
– Lacina Kone

Arguments

Digital divide addressing is crucial for getting back on track with Agenda 2030 and SDG targets


Continental ownership creates leverage, with African heads of state making digital development a political imperative


Topics

Development | Legal and regulatory


Both speakers emphasize the complexity and long-term significance of the WSIS Plus 20 process, highlighting the need for coordinated action across multiple stakeholders and agencies

Speakers

– Yu Ping Chan
– Theresa Swinehart

Arguments

The process involves multiple UN agencies drafting reports and stakeholder consultations, with member states shaping the eventual review


WSIS decisions could affect Internet governance for the next decade, requiring practical steps from different communities


Topics

Legal and regulatory | Development


Unexpected consensus

Technical standards and interoperability may be working better than expected

Speakers

– Kurtis Lindqvist
– Audience poll results

Arguments

Internationalized Domain Names (IDNs) have evolved to support non-Latin scripts, improving linguistic accessibility


Poll results showing low priority for open technical standards and cross-border interoperability (0% online, minimal room response)


Explanation

The low poll response for technical standards support suggests that 20 years of WSIS work may have been successful in this area, with Lindqvist noting that success might make people take global standards for granted. This represents unexpected consensus that technical interoperability challenges have been largely addressed


Topics

Infrastructure | Sociocultural


Capacity building emerged as the highest priority across all stakeholder groups

Speakers

– All speakers
– Poll participants

Arguments

Poll results showing capacity building in under-resourced regions as top priority (41% online)


Less than 10% of adults in several African countries possess basic digital skills, with new AI literacy gaps emerging


Digital divide addressing is crucial for getting back on track with Agenda 2030 and SDG targets


Explanation

Despite different backgrounds and perspectives, there was unexpected unanimous agreement that capacity building remains the most critical need, suggesting this foundational principle, which dates back to the 1998 ITU Plenipotentiary resolution that gave rise to WSIS, remains as relevant today as it was then


Topics

Development | Sociocultural


Overall assessment

Summary

The speakers demonstrated strong consensus on core WSIS principles including multi-stakeholder governance, the importance of addressing digital divides, the need for active stakeholder participation, and the critical role of capacity building. There was also agreement on procedural challenges, particularly around language barriers between technical and diplomatic communities.


Consensus level

High level of consensus on fundamental principles with constructive alignment on implementation challenges. The agreement spans across different stakeholder groups (government, technical community, civil society, international organizations) and suggests a mature understanding of both achievements and remaining gaps after 20 years of WSIS implementation. This strong consensus provides a solid foundation for the WSIS Plus 20 negotiations, though speakers acknowledge that maintaining multi-stakeholder openness will require continued vigilance and active participation.


Differences

Different viewpoints

Terminology and language interpretation in UN processes

Speakers

– Yu Ping Chan
– Kurtis Lindqvist

Arguments

There’s a significant gap between the New York diplomatic community and technical communities working on these issues


Language matters in UN processes, with terms like ‘sovereignty’ and ‘control’ having different meanings to different communities


Summary

Both speakers identify language as a challenge but from different perspectives – Yu Ping Chan focuses on the gap between diplomatic and technical communities and recommends using diplomatic language, while Lindqvist warns that technical communities may interpret certain terms (like sovereignty/control) as suggesting fragmentation


Topics

Legal and regulatory


Unexpected differences

Limited disagreement on technical standards priority

Speakers

– Kurtis Lindqvist
– Poll results

Arguments

Technical community must continue providing evidence and tangible implementation data showing what works


Explanation

Lindqvist expressed surprise that the poll showed low interest in ‘open technical standards and cross-border interoperability’ (0% online, minimal in-room), suggesting this might indicate success rather than lack of importance. This represents an unexpected disconnect between technical community priorities and audience perception


Topics

Infrastructure | Legal and regulatory


Overall assessment

Summary

The discussion showed remarkable consensus on major issues including the importance of multi-stakeholder engagement, need for inclusive processes, digital divide challenges, and IGF sustainability. The primary areas of difference were tactical rather than strategic – focusing on how to communicate effectively with different audiences and how to work within existing UN systems.


Disagreement level

Very low level of substantive disagreement. The speakers demonstrated strong alignment on fundamental principles and goals, with differences mainly in emphasis, approach, and tactical considerations. This high level of consensus suggests a mature, collaborative stakeholder community but may also indicate potential groupthink or lack of diverse perspectives that could strengthen the WSIS Plus 20 process.




Takeaways

Key takeaways

WSIS Plus 20 represents a critical juncture for internet governance, with decisions potentially affecting the next decade of digital policy and the continuation of the IGF


Multi-stakeholder engagement remains fundamental to WSIS success, but there’s a significant gap between New York UN diplomatic processes and technical communities that needs bridging


Technical infrastructure has seen remarkable success over 20 years (DNSSEC, IDNs, IXPs, root servers), demonstrating the effectiveness of multi-stakeholder cooperation


Digital capacity building in under-resourced regions emerged as the highest priority need, reflecting the persistent digital divide affecting 2.6 billion people


Regional ownership and coordination, particularly demonstrated by Smart Africa’s continental approach, proves effective when rooted in local needs


Language and communication clarity are crucial in UN processes, as technical terms can be misinterpreted in political contexts


The Global Digital Compact and WSIS are complementary frameworks that should be implemented together


Success stories from the past 20 years should be celebrated and used as evidence for continued multi-stakeholder approaches


Resolutions and action items

Stakeholders should provide input to the WSIS Plus 20 elements paper by the July 15th deadline


Technical community must continue providing concrete evidence and implementation data showing what works


All stakeholder groups should engage with the informal multi-stakeholder feedback group being established by co-facilitators


Organizations should sign up for WSIS Plus 20 outreach mailing lists and participate in upcoming consultations


Stakeholders should engage with their national governments to inform their WSIS Plus 20 positions


Communities should participate in upcoming events including the high-level WSIS event in Geneva


Stakeholders must continue to demand and show up for their seat at the table in UN processes


Regional organizations should share concrete success stories and operational examples


Technical community should highlight fundamental principles that have enabled internet success over 20 years


Unresolved issues

Sustainable funding for the IGF from the regular UN budget remains uncertain pending December negotiations


The scope and effectiveness of the informal multi-stakeholder feedback group has not been fully defined


Timing and content of the Secretary-General’s report to member states remains unclear


How to effectively bridge the communication gap between New York diplomatic processes and technical communities


Whether WSIS action lines will be updated to reflect current digital developments and the Global Digital Compact


How to ensure meaningful participation from underrepresented regions in the final negotiations


The specific language and commitments that will be included in the December resolution


How to operationalize the complementary relationship between WSIS and the Global Digital Compact


Suggested compromises

Use clear, simple, actionable language that diplomats already understand to avoid technical terms being misinterpreted in political contexts


Frame digital governance conversations around capacity building and sustainable development language that appeals to the global majority of countries


Build on existing successful frameworks rather than creating entirely new structures


Maintain the multi-stakeholder model while adapting it to work better within traditional UN multilateral processes


Focus on concrete, evidence-based examples of what works rather than abstract declarations


Emphasize the complementary nature of WSIS and Global Digital Compact rather than viewing them as competing frameworks


Balance global standards and interoperability with regional sovereignty and local needs


Thought provoking comments

I think there’s a big gap and forgive me all the other MFA diplomats in the room that are from New York. There’s a big gap between the New York community and those of us gathered in the room that have been working on these issues for a long time. There is a tendency when you put resolutions to the General Assembly to oversimplify, to stick in compromised language where you don’t really understand the implications of what the language means and then to really put a political context on terms that are otherwise accepted or actually understood by the technical community.

Speaker

Yu Ping Chan


Reason

This comment exposed a critical structural problem in global internet governance – the disconnect between technical experts who understand the practical implications of policy decisions and the diplomatic community in New York that ultimately makes those decisions. It highlighted how technical terms get politicized and oversimplified in UN processes, potentially undermining effective outcomes.


Impact

This shifted the conversation from discussing what should be done to addressing how to communicate effectively with decision-makers. It provided crucial context for why previous processes may have failed and influenced the final recommendations about showing up and demanding seats at the table. It also validated concerns about language precision that Kurtis had raised earlier.


We often talk about the IGF and the WSIS 20-plus and what has been achieved. Hidden in that I think we forget quite a bit of what actually has been achieved, all the success stories… maybe we’ve just been a little bit too successful because the internet has been a phenomenal success because it’s built on the existing global standards that have been produced through multi-stakeholder processes and we’re taking it for granted.

Speaker

Kurtis Lindqvist


Reason

This reframed the entire discussion by suggesting that the technical community’s success in creating seamless global internet infrastructure has made people forget why those achievements matter. It introduced the paradox that success can lead to complacency and undervaluation of the systems that created that success.


Impact

This comment shifted the tone from focusing on gaps and problems to celebrating achievements and understanding why certain poll results showed low concern for technical standards. It influenced how other panelists framed their closing recommendations, emphasizing the need to showcase concrete successes rather than just identify problems.


Multipolarity is a geopolitical fact of our time, undeniable and irreversible, but multilateralism is a choice. It is a conscious decision to cooperate across divide, to build trust among ourselves and to shape a fairer global order, not despite our differences, but because of them.

Speaker

Lacina Kone


Reason

This philosophical distinction between multipolarity (the reality of multiple power centers) and multilateralism (the choice to cooperate) provided a sophisticated framework for understanding current global dynamics. It elevated the discussion from technical implementation to fundamental questions about how nations choose to engage with each other in digital governance.


Impact

This served as a powerful closing statement that synthesized the entire discussion’s themes about inclusion, cooperation, and the challenges of global coordination. It provided a conceptual framework that tied together the technical, political, and developmental aspects discussed throughout the session.


I found the poll really interesting actually… I was struck actually, if I read the results correctly, that the capacity building one might have been the highest… I found that really interesting because in my recollection, the original idea for WSIS came from a 1998 ITU Plenipot resolution and development and connectivity was the basis of that resolution.

Speaker

Fiona Alexander


Reason

This observation connected current priorities back to WSIS’s original development-focused mandate, suggesting that despite 20 years of progress, the fundamental challenge of digital inclusion remains paramount. It provided historical context that validated current priorities while highlighting persistent challenges.


Impact

This comment helped interpret the poll results and reinforced the development orientation that several panelists emphasized. It influenced the discussion by showing continuity between original WSIS goals and current needs, strengthening arguments for maintaining development focus in WSIS Plus 20.


Continental ownership creates leverage. With a direct endorsement from African head of the state, digital development has become a political imperative for all nations… We have moved from policy ideas to operational pilots and regional implementation. These are not just a future aspiration, they are a working model.

Speaker

Lacina Kone


Reason

This challenged the typical narrative of developing regions as passive recipients of digital development by presenting Africa as an active coordinator of its own digital transformation. It demonstrated how regional coordination can create leverage and move beyond aspirational policies to concrete implementation.


Impact

This shifted the conversation from discussing gaps in underserved regions to showcasing successful regional coordination models. It influenced how other panelists discussed regional development and provided concrete examples of multi-stakeholder cooperation working at scale.


Overall assessment

These key comments fundamentally shaped the discussion by introducing critical tensions and reframings that elevated the conversation beyond routine policy discussions. Yu Ping Chan’s observation about the New York-technical community gap introduced a meta-level analysis of why internet governance processes struggle, influencing how other panelists approached their recommendations. Kurtis Lindqvist’s ‘victim of our own success’ insight reframed technical achievements from being taken for granted to being celebrated and protected. Lacina Kone’s contributions consistently elevated the discussion from technical implementation to strategic vision, culminating in the multipolarity/multilateralism distinction that provided a philosophical framework for understanding current challenges. Fiona Alexander’s historical contextualization of poll results reinforced the development focus throughout. Together, these comments created a more sophisticated understanding of WSIS Plus 20 challenges – not just as technical or policy problems, but as communication, recognition, and cooperation challenges requiring strategic thinking about how different communities engage with global governance processes.


Follow-up questions

How can the WSIS action lines be updated to reflect developments since the Global Digital Compact and other recent digital governance initiatives?

Speaker

Yu Ping Chan


Explanation

This is important to ensure WSIS remains relevant and complementary to newer frameworks like the GDC, avoiding duplication while maintaining effectiveness


How can the IGF secure more sustainable financial basis from the regular UN budget?

Speaker

Jarno Suruela


Explanation

Critical for ensuring the long-term viability and global inclusiveness of the IGF as the primary multi-stakeholder forum for internet governance


What specific mechanisms can ensure meaningful multi-stakeholder participation in the New York-based UN processes, which traditionally operate differently from Geneva-based agencies?

Speaker

Yu Ping Chan and Fiona Alexander


Explanation

Essential for bridging the gap between traditional multilateral diplomacy and the multi-stakeholder model that has proven successful in internet governance


How can universal acceptance of internationalized domain names be accelerated beyond technical standards to actual implementation in applications worldwide?

Speaker

Kurtis Lindqvist


Explanation

Important for achieving true linguistic accessibility and inclusion in the global internet infrastructure


What are the most effective approaches to address the 40% usage gap in Africa where telecommunications infrastructure exists but people aren’t using it due to affordability, capacity, and other barriers?

Speaker

Lacina Kone


Explanation

Critical for achieving meaningful connectivity and digital inclusion across the African continent


How can the high-level segment of the WSIS forum be better reflected and integrated into the WSIS+20 review process?

Speaker

Yu Ping Chan


Explanation

Important for ensuring continuity and recognition of successful WSIS implementation mechanisms


What specific data and case studies should the technical community compile to demonstrate internet governance successes over the past 20 years?

Speaker

Kurtis Lindqvist


Explanation

Necessary to provide evidence-based input for WSIS+20 negotiations and prevent fragmentation of successful internet governance models


How can smaller or under-resourced organizations effectively engage with their governments to influence WSIS+20 positions?

Speaker

Fiona Alexander


Explanation

Important for ensuring diverse global perspectives are represented in government positions during the December negotiations


What language and framing strategies work best when communicating technical internet governance concepts to UN diplomats unfamiliar with these issues?

Speaker

Yu Ping Chan


Explanation

Critical for effective communication between technical communities and diplomatic processes to avoid misunderstandings that could lead to problematic policy outcomes


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

WS #219 Generative AI LLMs in Content Moderation Rights Risks


Session at a glance

Summary

This discussion focused on the human rights implications of using Large Language Models (LLMs) for content moderation on social media platforms and digital services. The panel, featuring experts from the European Center for Nonprofit Law, Digital Trust and Safety Partnership, Center for Democracy and Technology, and Access Now, examined both the potential benefits and significant risks of deploying LLMs in automated content moderation systems.


The conversation highlighted how LLMs represent a concentration of power, with a handful of companies developing foundation models that are then deployed by smaller platforms, creating a cascading effect where moderation decisions made at the foundation level impact content across multiple platforms. While LLMs offer some advantages over traditional automated systems, including better contextual understanding and improved accuracy, they also pose serious risks to human rights, particularly freedom of expression, privacy, and non-discrimination.


A critical issue discussed was the disparity between high-resource languages like English and low-resource languages, with multilingual LLMs performing poorly for languages with limited training data. This creates significant inequities in content moderation, as demonstrated by examples from the Middle East and North Africa region, where Arabic content faced over-moderation while hate speech in Hebrew was under-moderated due to the lack of appropriate classifiers.


The panelists shared concerning real-world examples, including the misclassification of Al-Aqsa Mosque as a terrorist organization and the wrongful detention of a Palestinian construction worker based on Facebook’s mistranslation. These cases illustrate how LLM errors can have severe consequences for marginalized communities, particularly during times of crisis when platforms tend to rely more heavily on automation.


The discussion emphasized the need for greater community involvement in LLM development, mandatory human rights impact assessments, and more transparency from platforms about their use of these technologies.


Keypoints

## Major Discussion Points:


– **Concentration of Power in LLM Development**: The discussion highlighted how a handful of companies (like those behind ChatGPT, Claude, Gemini, Llama) develop foundation models that are then used by smaller platforms, creating a concerning concentration of power where decisions made at the foundation level (such as defining Palestinian content as terrorist content) automatically trickle down to all deploying platforms.


– **Language Inequities and Low-Resource Languages**: A significant focus was placed on how LLMs perform poorly for “low-resource languages” (languages with limited textual data available for training), creating disparities where content moderation works well for English and other high-resource languages but fails for languages like Swahili, Tamil, Quechua, and various Arabic dialects, despite these being spoken by millions of people.


– **Real-World Harms in Content Moderation**: The panel extensively discussed concrete examples of LLM failures, particularly in the Middle East and North Africa region, including cases where Arabic content was over-moderated while Hebrew hate speech was under-moderated, mistranslations leading to false terrorism accusations, and aggressive automated removal of Palestinian content during crises.


– **Technical Limitations and Trade-offs**: The discussion covered inherent technical challenges including the precision vs. recall trade-off (accuracy vs. comprehensive coverage), hallucinations where LLMs confidently provide wrong information, and the particular vulnerability of LLMs when dealing with novel situations not well-represented in their training data.


– **Community Involvement and Alternative Approaches**: The conversation emphasized the need for meaningful community engagement throughout the AI development lifecycle, highlighting emerging community-led initiatives in the Global South that focus on culturally-informed, decentralized models as alternatives to the current concentrated approach.
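The precision vs. recall trade-off mentioned in the points above can be made concrete with a small numerical sketch. The counts below are purely illustrative (not figures from the session): they show why a moderation classifier tuned for accuracy misses violations, while one tuned for coverage removes legitimate speech, which is the over-moderation risk the panel described.

```python
# Illustrative sketch of the precision vs. recall trade-off in automated
# content moderation. All counts are hypothetical, chosen only to show
# the shape of the trade-off, not drawn from the session.

def precision_recall(true_positives: int, false_positives: int, false_negatives: int):
    """Precision: share of flagged posts that truly violate policy.
    Recall: share of violating posts that the system actually catches."""
    precision = true_positives / (true_positives + false_positives)
    recall = true_positives / (true_positives + false_negatives)
    return precision, recall

# A conservative classifier: flags little, so what it flags is usually
# correct, but it misses many violations (high precision, low recall).
p1, r1 = precision_recall(true_positives=80, false_positives=5, false_negatives=120)

# An aggressive classifier: catches most violations but also removes
# legitimate speech (low precision, high recall) -- the over-moderation risk.
p2, r2 = precision_recall(true_positives=180, false_positives=90, false_negatives=20)

print(f"conservative: precision={p1:.2f}, recall={r1:.2f}")
print(f"aggressive:   precision={p2:.2f}, recall={r2:.2f}")
```

No single threshold optimises both numbers at once; where a platform sets it is a policy choice with human rights consequences, which is why the panelists treated it as more than a technical tuning detail.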


## Overall Purpose:


The discussion aimed to provide a comprehensive analysis of the human rights implications of using Large Language Models (LLMs) for content moderation on social media platforms. The panel sought to bridge technical understanding with real-world societal impacts, moving beyond AI hype to document actual harms while also exploring potential solutions and alternative approaches that could better respect human rights and community needs.


## Overall Tone:


The discussion maintained a consistently serious and concerned tone throughout, with speakers demonstrating deep expertise while expressing genuine alarm about current practices. The tone was analytical rather than alarmist, with panelists providing concrete evidence and examples to support their concerns. While the conversation acknowledged some potential benefits of LLMs, the overall sentiment was cautionary, emphasizing the urgent need for better oversight, community involvement, and human rights protections. The tone remained constructive, with speakers offering specific recommendations and highlighting promising alternative approaches, suggesting a path forward despite the significant challenges identified.


Speakers

– **Marlene Owizniak**: Leads the technology team at the European Center for Nonprofit Law (ECNL), a human rights and civic space organization; based in San Francisco


– **David Sullivan**: Executive Director of the Digital Trust and Safety Partnership (DTSP), which brings together companies providing digital products and services around trust and safety best practices


– **Dhanaraj Thakur**: Research Director at the Center for Democracy and Technology (CDT), a nonprofit tech policy advocacy group based in Washington, D.C. and Brussels; expertise in content moderation and multilingual AI systems


– **Panelist 1**: Marwa Fatafta, MENA (Middle East and North Africa) policy and advocacy director at Access Now; expertise in regional content moderation issues and human rights impacts


– **Audience**: Multiple audience members including:


– Balthazar from University College London


– Someone from the Internet Architecture Board in the IETF (Internet Engineering Task Force)


– Julia Hornley, Professor of Internet Law at Queen Mary University of London


**Additional speakers:**


None identified beyond those in the speakers list.


Full session report

# Human Rights Implications of Large Language Models in Content Moderation: Panel Discussion Report


## Introduction and Context


This panel discussion examined the intersection of artificial intelligence and human rights in digital content moderation. The conversation featured Marlene Owizniak from the European Center for Nonprofit Law (ECNL), David Sullivan from the Digital Trust and Safety Partnership (DTSP), Dhanaraj Thakur from the Center for Democracy and Technology (CDT), and Marwa Fatafta from Access Now.


The discussion addressed how the deployment of Large Language Models (LLMs) for automated content moderation affects fundamental human rights, particularly freedom of expression, privacy, and non-discrimination. The panelists grounded their analysis in documented real-world harms and systemic inequities already emerging from current LLM implementations.


## The Concentration of Power Problem


### Foundation Models and Cascading Effects


Marlene Owizniak highlighted the unprecedented concentration of power in LLM development, explaining how “a handful of companies” developing foundation models like ChatGPT, Claude, Gemini, and Llama create a cascading effect where “any kind of decision made at the foundation level, let’s say, defining Palestinian content as terrorist content will then also trickle down to the deployer level unless it’s explicitly fine-tuned.”


This structural issue creates what Owizniak described as “even more homogeneity of speech” than previous systems. Unlike traditional platforms where individual companies made independent moderation decisions, the current LLM ecosystem concentrates unprecedented power in foundation model developers.


### Demystifying AI Technology


Owizniak provided crucial context by demystifying LLM technology, noting that “AI is neither artificial nor intelligent. It uses a lot of infrastructure, a lot of hardware, and it’s mostly guesstimates.” She characterised LLMs as “basically statistics on steroids” rather than divine intelligence, connecting their technical limitations to their concentrated ownership structure.


## Language Inequities and Systematic Discrimination


### The Low-Resource Language Crisis


Dhanaraj Thakur provided extensive analysis of how language inequities create systematic discrimination in LLM-based content moderation. He explained the concept of “low-resource languages” – languages with limited textual data available for training – and how this creates severe disparities in system performance. Despite languages like Swahili, Tamil, and Quechua being spoken by millions of people, they receive inadequate representation in training datasets compared to “high-resource languages” like English.


The consequences are severe: speakers of low-resource languages experience “longer moderation times, unjust content removal, and shadow banning” compared to English-language users. These represent systematic digital discrimination that mirrors and amplifies existing global inequalities.


### Complex Linguistic Challenges


Thakur introduced the concept of “diglossia” – situations where communities use two languages with different social functions, often reflecting colonial power structures. He explained how “in many of these languages, particularly in those that have gone through the colonial experience, there’s a combined use of two languages” where one represents power whilst the other serves mundane functions.


This analysis raised questions about whether LLM development might “replicate or exacerbate this kind of power dynamics between these two languages,” connecting historical colonialism to contemporary AI systems. Additional challenges include code-switching (mixing languages within conversations) and agglutinative language structures that don’t conform to English-based training assumptions.
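A toy example makes these failure modes concrete. The blocklist and the non-English tokens below are invented for illustration; the point is only that exact-match filtering built on one language's vocabulary misses both code-switched and suffixed (agglutinated) forms.

```python
# Hypothetical sketch: an exact-match keyword filter tuned on one
# high-resource language fails on code-switched and agglutinative text.

BLOCKLIST = {"slur"}  # placeholder term; filter was built for English only

def naive_flag(text):
    """Flag a post if any whitespace-separated token is on the blocklist."""
    return any(token in BLOCKLIST for token in text.lower().split())

# Monolingual text in the filter's own language is caught:
assert naive_flag("that was a slur")
# Code-switching: the same concept in another language slips through
# ("matusi" here stands in for an unlisted non-English equivalent):
assert not naive_flag("that was a matusi")
# Agglutination: a root plus suffixes no longer matches the exact keyword:
assert not naive_flag("that was slurkuna")
```

Real systems use learned classifiers rather than literal blocklists, but the underlying gap is the same: whatever the training data does not represent — other languages, mixed languages, morphological variants — the system cannot reliably recognise.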


### Regional Case Studies from MENA


Marwa Fatafta provided concrete examples from the Middle East and North Africa region, documenting systematic disparities where “there were no Hebrew classifiers to moderate hate speech in Hebrew language, but they were such for Arabic.” This had severe consequences during periods of heightened conflict, with Arabic content facing over-moderation whilst Hebrew hate speech remained undetected.


Fatafta described how “removing terrorist content in Arabic got it wrongly 77% of the time,” whilst the Al-Aqsa Mosque was mislabelled as a terrorist organisation during sensitive periods. She also recounted cases where Facebook’s mistranslation led to false terrorism accusations, including a Palestinian construction worker who was wrongfully detained after the platform mistranslated his Arabic post about “attacking” his work (meaning going to work) as a terrorist threat.



## Crisis Response and Over-Moderation


Fatafta observed that “companies tend to over rely on automation around times of crises” and “are willing to sacrifice accuracy in the decisions, as long as we try to catch as large amounts of content as possible.” She noted that Meta lowered confidence thresholds from 85% to 25% for Arabic content during crisis periods, leading to what she described as “mass censorship of legitimate content.”


This approach is particularly harmful because crises are precisely when marginalised communities most need access to communication platforms for safety, coordination, and documentation of human rights violations.


## Technical Capabilities and Limitations


### Potential Benefits and Applications


David Sullivan provided the industry perspective, identifying areas where LLMs might improve current systems: enhancing risk assessments, improving policy development consultation, and augmenting rather than replacing human review. He noted the potential for “generative AI to improve explainability of content moderation decisions and provide better context to users.”


Sullivan emphasised that effective deployment requires understanding LLMs as tools that “augment human review rather than replace it” and referenced the ROOST (Robust Open Online Safety Tooling) project as an example of collaborative safety initiatives.


### Fundamental Technical Constraints


However, Sullivan was candid about technical limitations, explaining that “models struggle with novel challenges not adequately represented in training data” and highlighting the persistent issue of hallucinations where LLMs “confidently provide wrong information.”


He explained the precision versus recall trade-off that remains central to all content moderation systems: emphasising precision (accuracy) means missing harmful content, whilst emphasising recall (comprehensive coverage) means removing legitimate content. LLMs don’t eliminate this trade-off but may shift where the balance point lies.
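The trade-off can be made concrete with a minimal sketch on invented labels: precision asks how much of what was removed was truly violating, recall asks how much of the truly violating content was removed.

```python
# Minimal sketch of the precision/recall trade-off on invented toy labels.

def precision_recall(flags, truths):
    """flags: did the system remove it; truths: was it actually violating."""
    tp = sum(f and t for f, t in zip(flags, truths))       # correct removals
    fp = sum(f and not t for f, t in zip(flags, truths))   # wrongful removals
    fn = sum(not f and t for f, t in zip(flags, truths))   # missed harm
    precision = tp / (tp + fp) if (tp + fp) else 1.0
    recall = tp / (tp + fn) if (tp + fn) else 1.0
    return precision, recall

truths = [True, True, True, False, False, False]  # 3 violating, 3 legitimate

# Cautious system: removes only what it is sure about.
assert precision_recall([True, False, False, False, False, False], truths) \
    == (1.0, 1/3)   # no legitimate content removed, but most harm missed

# Aggressive system: removes almost everything.
assert precision_recall([True, True, True, True, True, False], truths) \
    == (0.6, 1.0)   # all harm caught, but legitimate content removed
```

Neither setting is “correct”: LLMs may move the achievable frontier, but any deployment still has to choose a point on it.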


### Structural Incompatibilities with Democratic Rights


Owizniak highlighted structural incompatibilities with democratic rights, explaining that “even the best intentioned platforms will make errors just because this is content that falls outside of data sets and the bell curve. It is, by definition, exceptional or contrarian.”


She noted that organisations working on “protests, civic space, assembly and association” deal with content that is “by default, contrarian, minority, anti-power, protest.” Statistical systems trained on mainstream content will systematically struggle with such material.


## Community Engagement and Alternative Approaches


### Current Inadequacies


Thakur noted that “social media companies lack awareness of local LLM developers and researchers,” missing opportunities for partnerships that could improve system performance for underserved languages and communities.


Owizniak outlined ECNL’s work developing “a framework for meaningful engagement” that involves stakeholders “from AI design stage through deployment,” including their current Discord partnership pilot project. She emphasised the need for involvement in reinforcement learning and human feedback processes rather than relying solely on what she termed “Silicon Valley experts.”


### Community-Led Alternatives


Thakur described efforts to develop “culturally-informed, decentralised models as alternatives to the current concentrated approach.” These initiatives focus on community ownership of both data and classification systems, representing a fundamental alternative to the centralised foundation model approach.


## ECNL’s Research Methodology


Owizniak described ECNL’s comprehensive research approach, which involved “reading 200+ computer science papers” and conducting human rights legal analysis to understand the implications of LLM deployment in content moderation. This methodology combines technical understanding with human rights expertise to provide grounded analysis of current harms.


## Regulatory and Governance Solutions


### Human Rights Impact Assessments


Fatafta advocated for “mandatory human rights impact assessments throughout the AI development lifecycle,” moving beyond voluntary corporate initiatives to regulatory requirements. She referenced BSR human rights due diligence findings regarding Meta’s content moderation practices as evidence of the need for systematic evaluation.


### Transparency and Accountability


Owizniak emphasised the need for greater transparency about “when and how LLMs are used in content moderation.” Current opacity makes it impossible for researchers, civil society, and affected communities to assess system performance or advocate for improvements.


### Alternative Moderation Models


Thakur highlighted “different moderation models beyond centralized approaches, including community-based solutions” that could provide more culturally appropriate and contextually sensitive content governance.


## Audience Engagement


The discussion included questions from academic participants, including Professor Julia Hörnle from Queen Mary University of London, who raised concerns about whether community-based approaches could handle the sophisticated legal analysis that content moderation often requires. An audience member from University College London asked about government and civil society influence on technical design decisions.


## Conclusion


This panel discussion revealed the complexity and urgency of human rights challenges posed by LLM-based content moderation. While the technology offers some potential benefits over traditional automated systems, it also creates new forms of systematic discrimination and concentrates unprecedented power in foundation model developers.


The conversation moved beyond both uncritical AI optimism and complete technological pessimism to provide nuanced analysis grounded in documented harms. The speakers demonstrated how seemingly technical decisions about training data, confidence thresholds, and language support embed political choices with severe consequences for marginalised communities.


The discussion highlighted the need for meaningful community engagement, mandatory human rights assessments, and greater transparency in how LLMs are deployed for content moderation. All speakers agreed that human involvement remains essential and that current voluntary approaches to addressing these challenges are inadequate.


The path forward requires sustained collaboration between technologists, human rights advocates, affected communities, and policymakers to ensure that content moderation systems serve human rights rather than undermining them.


Session transcript

Marlene Owizniak: It’s great to see you all here in person and welcome to the folks joining us online. My name is Marlene Owizniak. I lead the technology team at the European Center for Nonprofit Law, a human rights and civic space organization, mostly based in Europe, but we operate worldwide and I myself am based in San Francisco. I’m really thrilled to be here today with my esteemed panelists, from right to left: David Sullivan, executive director of the DTSP. What does it stand for again? Digital Trust and Safety Partnership. Yes. It’s a lengthy acronym, and David can share 30 seconds of what it is later. Dhanaraj Thakur, research director at the Center for Democracy and Technology. And to my left, Marwa Fatafta, MENA policy and advocacy director at Access Now. And today we’ll talk about a topic that is really emerging, with a lot of acronyms, apologies in advance: GenAI, LLMs. Being based in San Francisco, it’s something that people talk about daily, and it may seem far-fetched, but we already see it in the world today. So the way that our session will be structured is I’ll share a few key takeaways from our emerging research at ECNL, as well as some human rights impacts. And then we’ll hear from folks on the panel about different use cases and different risks. We’ll hear a regional perspective from Marwa and really try to bridge both the technical and societal aspects of LLMs, placing them in everything that’s happening in the world today, including geopolitical developments. So this topic is really relevant for ECNL. We’ve been working on automated content moderation for the past five, six years, and LLMs have become really an interesting development, and interesting from a human rights perspective means both potential good as well as alarming use cases.
And I’d say one of the biggest issues for LLMs, which are a subset of generative AI trained on vast textual data, is that while they promise efficiency and adaptability, they also pose serious risks. Automated content moderation, as many folks in this room probably know, already poses a lot of human rights risks and causes violations, and LLMs have, at least in Silicon Valley, increasingly been presented as a silver bullet for solving these issues. However, our research has shown that they can reinforce existing systemic discrimination, censorship, and surveillance. One of the most pressing issues that we found (ECNL has conducted research on this topic for the past year, working with hundreds of different folks across civil society, academia, and industry on mapping the human rights impacts of large language models for content moderation) is the concentration of power. So the way that LLM-based content moderation works today is that there’s a handful of companies that develop foundation models or LLMs. The one that folks are probably most aware of is ChatGPT. There’s also Claude, Gemini, Llama, and a few others. And so this is at the AI developer level. Then you have the deployer level, which are often smaller social media platforms like Discord, Reddit, Slack. They often will not have their own LLMs, but they will use other LLMs like the ones I mentioned and fine-tune them for their own purposes. So what does this mean? Any kind of decision made at the foundation level, let’s say, defining Palestinian content as terrorist content, will then also trickle down to the deployer level unless it’s explicitly fine-tuned. What this means for freedom of expression globally is that content moderation defined at the foundation level will also be replicated at the deployment level, and there’s really even more homogeneity of speech than before.
However, alternative approaches are emerging, and we’ve seen throughout our research that there are community-led initiatives, especially in the global majority, that focus on public interest AI that is culturally informed and really decentralized. These models, though smaller in scale, demonstrate comparable performance with broader LLMs and highlight the potential for more rights-based moderation. Our report, which our friends at IGF can maybe post online, and which I encourage you to check out, looks at the various human rights impacts: privacy, freedom of expression and information, assembly and association, non-discrimination, participation, and remedy. We’ll talk about some of these later, but I encourage you to read it. It’s a thorough analysis of each of these rights, and the last part is recommendations, which we’ll also dive into this session. And just to note that we’ll have ample time for questions. I really want to hear from folks in the room as well as online. I already see a few experts on this topic, and I also encourage everyone to participate even though it seems like it’s brand new. A lot of the questions around large language models have been around for a long time, obviously with automated content moderation, but even with offline or human-led moderation, you know, these questions are often the same, and they’re just exacerbated and accelerated due to the scale and speed of AI. With that said, I wanted to turn it to David to share a few use cases of LLMs for content moderation. DTSP, together with BSR, Business for Social Responsibility, has led research on the topic. So David, if you could share some of those findings and introduce your work.


David Sullivan: Thank you. Thanks, Marlena. And I’m going to take these off for a moment. It’s great to be here with everyone. I’m David Sullivan. I lead the Digital Trust and Safety Partnership, which brings together companies providing all different types of digital products and services, including some of those companies that are frontier model developers and deployers, like a Google or a Meta, as well as smaller players, such as Discord and Reddit. And our companies come together around a framework of best practices for trust and safety, a framework that is content and technology agnostic. The idea is basically that companies can come together around the practices that they use to develop their products, develop the governance and rules for those products, enforce those rules, improve over time, and be transparent with their users and with the public. So it’s about the practices of safety, as opposed to agreeing on what types of content should be favored or disfavored on these kinds of digital services. So, to start: last year, in 2024, we brought together a working group of our partner companies on best practices for AI and automation in trust and safety. That looked at the full range of technologies, from even the most basic kind of rule-based systems that have been used as part of trust and safety going back 20 years, dealing with things like spam, to possibilities for the use of generative AI as part of trust and safety, of which content moderation is one component. And so we spent the better part of a year looking at what companies were doing and trying to identify some best practices, as well as what we called generative AI possibilities: ways that companies might be experimenting and beginning to use this technology as part of trust and safety, along with the limitations and challenges and ways to overcome those challenges. So I’d encourage folks to go to our website, DTSpartnership.org.
That report is right on the front page. And as Marlena mentioned, we worked closely with a team at BSR who helped us do that research. Hannah from that team is here and is also really an expert in this space. So I want to just briefly mention a few things. First, as I think I already said, the use of AI and automation in trust and safety has always been a blended process of human and technology. That’s always been the case, and it continues to be the case, even as what that blend looks like may change quite substantially as LLMs and these generative AI technologies get incorporated into trust and safety. The second thing is that perfection when it comes to content moderation is nearly an impossibility. And so we’re always thinking about the potential for over-action or under-action when it comes to how companies are enforcing their policies. And we can talk a little bit more about some of the trade-offs there, and I’m sure we’ll talk a lot about that with this group here. So with that in mind, I wanted to use our framework of five overarching commitments to talk about five examples of possibilities for the use of generative AI as part of trust and safety. And then hopefully those will help kick off some discussion. So the first of our commitments that all our company members make is around product development. One example of a generative AI possibility in product development is the use of generative AI to enhance and inform the kinds of risk assessments that companies do when they are developing and rolling out new products or new features within products. So, for example, generative AI could help to analyze emerging patterns of content-related abuse. It could identify edge cases that could then become more mainstream.
It could connect data points between different types of risk factors, and it could potentially be used as part of red teaming exercises by trust and safety teams, brainstorming attack scenarios and things like that. So that’s on the product development side. The second commitment that all our companies make is to product governance. There, as part of that commitment, some of the best practices we’ve identified are around external consultation: incorporating user perspectives into company policies and consulting with civil society organizations and other external groups as part of developing and iterating those policies. So there again, I think LLMs could potentially be leveraged to gather much more data. You have companies currently using surveys and focus groups and getting information from outside experts, and all of that may not always be as coherently brought together as it could be. So LLMs could help with that. And one other thing they could potentially do is help to create the kind of feedback loop where the organizations that spend a lot of time telling companies what they should and shouldn’t be doing with their policies would be able to hear: this is how your input was used in the development of this new content policy around whatever issue. On enforcement, one of the things where I am cautiously optimistic is the potential for generative AI to augment human review as opposed to replacing it. We hear a lot these days about AI replacing humans when it comes to content review. But I think the area where there’s the most potential is both in shielding humans from having to review the worst of the worst type of content, and in being able to provide context to human reviewers that maybe will help them with their decision-making. And also being able to route the things that are easily determined to be violating away from humans, to make their own work more efficient.
And we can talk, of course, about a lot of the challenges there as well. Just quickly on improvement: I think one thing that gen AI can do is enhance the automated evaluation of context around violations. Basically, the idea is being able to have more information at your disposal in order to figure out how your policies are actually being implemented in practice, and to incorporate that context into automated actions as well as into the information given to human reviewers. And then lastly, on transparency, I think there’s also potential for generative AI to improve the explainability of the decisions that companies are taking. I think maybe all of us have at one point or another had the experience of posting something, finding that it violates some service’s guidelines one way or another, and, when appealing, getting very little information in return. And so there is, I think, potential for these types of technologies to provide a little bit more information. So an example would be, you know, if you have posted a video that’s an hour long, it could tell you: here’s the two minutes that we found to be violative, and you can have a chance to correct that. So those are, I think, some possibilities, some positive use cases. We’re going to talk a lot more about the limitations and challenges. I just wanted to mention a couple of them. The first is, and this is where I think the stakes of all of this get very high, that we don’t know what tomorrow’s content crises are going to look like. And we know that these models are not good when they’re dealing with novel challenges that are not adequately represented in their training data. That’s when they really go off the deep end. So we need to be aware of that. The second thing is that all companies with trust and safety operations have their content policies.
These need to exist in three different forms. There has to be a public-facing version, for users to understand what’s allowed and not allowed. There has to be the internal, detailed version, the specifics of how we enforce this policy, which you don’t want to make completely public because bad actors can use it to game the system. And then you need a version that’s machine-readable, that can be used by LLMs. So that’s a complicated balancing act, and one that also complicates these challenges. Lastly, I think there are real trade-offs when it comes to the metrics that companies use here. You can optimize for precision, which is really the metric for how correct your decisions are, or you can optimize for recall, which is about covering as much content as possible. These are the terms that, you know, AI folks will throw around, and they have real consequences when it comes to the kinds of harms that occur through digital services and the impact of those harms. And so you’re constantly having to balance: in some situations, we need to make sure we catch as much of the really harmful content as possible, whereas in other situations, you want to worry about false positives. Those are real trade-offs. You can’t just wish them away. So hopefully that helps kick things off, and I’ll stop there to give others time.


Marlene Owizniak: Thanks. Thanks, David. And can everybody hear us? Yeah. OK, great. Because the mic situation is a little bit off, but you have to wear the earphones to hear. Next we’ll hear from Dhanaraj about multilingual models in particular. So a lot of this conversation and research is often on English content and some colonial languages. But perhaps, or hopefully unsurprisingly to folks in this room, that is not the case across languages. There are a lot of inequities. And CDT really over the past few years has done groundbreaking research on this topic. It has informed our own research as well. So I’m thrilled to have you, Dhanaraj, here. And also a shout-out to Aliya Bhatia, who is not here with us today, but who has done some of that research.


Dhanaraj Thakur: Yeah, great. Thank you, Marlena. And thanks for the invitation to join this conversation. So I’m Dhanaraj Thakur. I’m research director at the Center for Democracy and Technology, based in Washington, D.C. and in Brussels. We’re a nonprofit tech policy advocacy group. We focus on a range of issues, one of which is online content moderation. Great. So, to follow up on what David discussed about the application of large language models in content moderation, content analysis, and trust and safety systems, I can talk a bit more specifically about how those systems and technologies are applied in what we’ll further discuss and explain as low-resource languages. And this is based on some research that, as Marlena mentioned, CDT has done. For example, one report called Large Language Models in Non-English Content Analysis, led by Gabriel Nicholas and Aliya Bhatia, and a forthcoming report based on a series of case studies we’ve been doing on content moderation in the Global South, looking at different low-resource languages, specifically Maghrebi Arabic, Swahili, Tamil, and Quechua. So when we talk about multilingual large language models, what we’re essentially focused on is large language models that are trained on text data from several different languages at once. And the logic, or the claim that researchers often make with these kinds of models, is that they can extend the various capabilities and benefits that, for example, David highlighted to languages other than English, and even to languages for which there’s little or no text data available. And so you can then see how you could apply some of these kinds of benefits to user-generated content in various languages around the world. That said, there are several issues and challenges that come up, and that’s what I’ll spend a bit more time talking about.
Skipping over the potential benefits, which I think David has covered quite well: a lot of studies show that multilingual language models struggle to deal with the wide disparity between languages in how much text data is available. Researchers describe this using the categories of high-resource and low-resource languages. English has, by multiple orders of magnitude, much more text data available than any other language, and there are a lot of reasons behind that. You can think of the legacy of British colonialism, American neocolonialism, and the subsequent erasure of regional and indigenous languages. Most of the companies that we are discussing now when we talk about frontier model companies are based in the U.S., as are the social media companies, so English becomes a dominant language there. So what we call high-resource languages effectively refers to languages for which there are significant amounts of text data available, such as English and many other European and other languages: Chinese and Arabic, for example. On the other side of the spectrum, you have low-resource languages with very little textual data available, but these can still be major languages in terms of number of speakers. This can include, for example, Swahili, Tamil, or dialects of Arabic, such as the Maghrebi Arabic languages I mentioned earlier. There’s also Bahasa Indonesia, which has literally hundreds of millions of speakers. What this leads to, then, is a disparity or inequity in the potential applicability of these technologies to content analysis, particularly in social media, but in other use cases as well. We could have a separate discussion on the terminology of high- and low-resource as it applies to these kinds of languages, but we can leave that for another time.
So one of the questions that comes up in a lot of our work is not so much the technical capability of these technologies, but how they’re incorporated into existing trust and safety systems. And here we come across several different kinds of problems. With low-resource languages, for example, we have this problem of a lack of training data, but often that lack is not just for content in general; it can be for specific domains. In our research, for example, we spoke to LLM and NLP (natural language processing) researchers working on Quechua. There is the problem of having enough data available generally for developing these models, but also in specific domains such as hate speech. Because if hate speech is a concern, for example, for a particular trust and safety application, then you need particular data for that as well. And that’s also part of the lack-of-data, or low-resource, problem, so to speak. Many users, and this is, I think, a well-known fact for many of you, engage in what’s called code-switching when it comes to user-generated content. They alternate between two languages, for example, because many people employ multiple languages in daily conversation as well. There are other challenges, such as the agglutinative nature of some of these languages. By that, I mean that some languages, such as Quechua or Tamil, will build words from lexical roots and then add suffixes to create complex meanings that in other languages often require entire sentences, or multiple sentences, to convey. How LLMs handle that process is quite different, but it adds challenges to the analysis of text or content in those languages. There’s also the issue of diglossia, which is a linguistic concept.
Often in many of these languages, particularly those that have gone through the colonial experience, there’s a combined use of two languages, which I mentioned, but it’s done in such a way that there’s an asymmetrical relationship between the two. If I use the example of Quechua and its relationship to Spanish, which is a colonial language: Spanish would represent power, status, and issues of importance when people use it in the same sentence or paragraph with Quechua, whereas Quechua would be used for, for example, more mundane functions. So there’s this concept of words from one language being used to represent more power while the other is used to represent less power, and they’re used together. And the issue that comes up here, or a question that comes to mind, is to what extent the development of models built upon this kind of dynamic will replicate or exacerbate these power dynamics between the two languages. This is combined with a problem that you see a lot, for example, in indigenous languages, where there is a history of erasure of languages. And so a concern that often comes up when you talk to researchers and people in these communities is to what extent these technologies contribute to, or at least push back against, that kind of erasure. In our research, I’ll just highlight some of the problems we found, but these have direct impacts on people, the social media users who post in these languages. For example, what people observed is that it would often take a longer time for the social media companies to moderate content in these languages versus content uploaded in high-resource or colonial languages. And because of the challenges around developing models for these languages, and/or the lack of native speakers on trust and safety teams who can handle them, it could lead to a longer time to moderate content.
There were also reports of unjust content removal, perceived shadow banning, and so on. People highlighted different ways of recognizing these problems and different tactics of what we call resistance. For example, I mentioned code switching. There's also algospeak: using random letters in a word, or using different emojis, for example the watermelon emoji, which is used to refer to Palestine. People are aware of the failures or potential weaknesses of these kinds of automated systems, so they use various tactics to get around them. Yeah, I can stop there for now. Thanks.
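As an illustration of the code-switching problem described above, here is a minimal sketch of how mixed-language text might be flagged. The wordlists are tiny invented samples, not real lexicons, and production systems use statistical language identification rather than lookup tables; this is only meant to make the concept concrete.

```python
# Toy sketch: spotting code-switching in user-generated text.
# The wordlists below are tiny illustrative samples, not real lexicons.

SPANISH = {"el", "gobierno", "no", "escucha", "la", "gente"}
QUECHUA = {"llaqta", "runakuna", "allin", "kawsay"}

def tag_tokens(text):
    """Label each token by the (toy) lexicon it appears in."""
    tags = []
    for tok in text.lower().split():
        if tok in SPANISH:
            tags.append((tok, "es"))
        elif tok in QUECHUA:
            tags.append((tok, "qu"))
        else:
            tags.append((tok, "unk"))
    return tags

def is_code_switched(text):
    """True if tokens from more than one known language appear."""
    langs = {lang for _, lang in tag_tokens(text) if lang != "unk"}
    return len(langs) > 1

# A sentence mixing Spanish and Quechua trips the detector:
print(is_code_switched("el gobierno no escucha runakuna"))  # True
```

A classifier trained only on monolingual Spanish or monolingual Quechua data would see such mixed input as out of distribution, which is one reason moderation quality drops for users who code-switch.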


Marlene Owizniak: Thanks so much. And this is a perfect segue to talk about some real-world regional harms, turning to the Middle East and North Africa region, which is, especially today, very topical. But the issues around content moderation, censorship more broadly, and surveillance are not new to the region. So really, really grateful to have Marwa here, and we'd love to hear your regional perspective.


Panelist 1: Yeah, thank you, Marlene. And yeah, unfortunately, the MENA region is quite rife with examples. But I do want to first thank my co-panelists for laying the ground pretty well for me to provide some specific examples. I want to organize my comments around three issues. The first, which has already been alluded to, is the question of where you invest in these systems, and in which languages. Arabic is, I don't want to call it a minority language; millions of people speak it, and it's an official UN language. Yet, unfortunately, the AI systems used by tech companies, and social media platforms more specifically, tend to be poorly trained in it. I'll mention some specific examples. The issue here is also that, where there is a minority language, companies sometimes think there's no market, no incentive, to prioritize that language. In the context of Palestine and Israel in 2021, for example, when there was a surge of violence on the ground and also a surge of online content protesting or documenting abuses, we noticed over-moderation of Arabic-language content and under-moderation of Hebrew-language content on Meta's platforms specifically. After Business for Social Responsibility conducted human rights due diligence into Meta's content moderation during that period, one of the reasons identified behind this dynamic was that there were no Hebrew classifiers to moderate hate speech in Hebrew, though there were such classifiers for Arabic. One would ask why: the context was very clear, with high incitement, a high volume of content inciting genocide and violence, pretty direct hate speech that was pretty much black and white. Nevertheless, the company did not think it was a priority at the time to roll out classifiers that could automatically detect and remove such harmful and potentially violative content.
When we pushed back, of course, and after the due diligence findings were out, Meta did introduce Hebrew classifiers. But we found out in the new round of violence, unfortunately, after October 7th, that those classifiers were not well trained enough to capture the much larger volume of hate speech, incitement to violence and genocide, and dehumanizing rhetoric. Which leads me to the second issue: the under- and over-enforcement that comes as a direct result of company decisions about where and when to invest in and deploy such systems. Let's zoom in on the concrete ways these systems are far from perfect, and how the risks and direct impacts can be very, very harmful. One example of over-enforcement: if you talk to Syrians, for example, Syria has been one of the most sanctioned countries on the planet. Thankfully, many of the sanctions are being removed. But the result of those sanctions and counterterrorism laws has been aggressive moderation: automated removal of terrorist content in Arabic got it wrong 77% of the time. That's quite huge. And when we talk about a region that is pretty much at the receiving end of these aggressive counterterrorism measures, the result is mass-scale censorship of activists, human rights defenders, and journalists, particularly around peaks of violence and escalation, when people come to online platforms to share their stories and realities and to document abuses, and when journalists, of course, come to cover what's happening on the ground. There are other examples I could mention where these systems got things terribly wrong at extremely sensitive, critical moments. One example is from 2021, when Instagram falsely flagged Al-Aqsa Mosque, the third holiest mosque in Islam, as a terrorist organization, and as a result all the associated hashtags were affected.
And that particular time is also quite interesting, because it was when the Israeli army stormed Al-Aqsa Mosque and people were reporting and sharing photos with the hashtag Al-Aqsa. This is when Instagram, or Meta, decided that now was the time to mislabel this mosque as a terrorist organization, and as a result all that content was banned and forcibly removed. During the unfolding genocide in Gaza we also had examples. One famous example was a person whose Instagram bio was mistranslated: he wrote, you know, "Praise be to God, I'm Palestinian," but the system translated it as "Praise be to God, Palestinian terrorists are fighting for their freedom." Many years ago there was also the case of a Palestinian construction worker in Jerusalem who was arrested by the Israeli police because he was flagged to them as being about to conduct a terrorist attack. They relied on Facebook's automated translation, which mistakenly rendered the man saying "good morning," posted with a picture of himself smoking a cigarette and leaning on a Caterpillar bulldozer, as "good morning, I'm going to attack them." The man was detained for a few hours and interrogated, and then released after the Israeli police realized, oh, Facebook made a mistake in translation. I do remember the man shut down his accounts afterwards, meaning that those types of actions and their consequences can be quite detrimental not only to people's ability to express themselves, but can also instill a sense of fear that they might be subject to similar detrimental consequences. David mentioned an interesting point which I would like to elaborate on: the tension between precision and recall. From my observation, having worked on multiple crises in the MENA region over the past few years, companies tend to over-rely on automation in times of crisis.
Particularly when there are attacks, or when they feel under pressure to remove large amounts of content as fast as possible in order to avoid being liable. One example is, of course, the October 7th attack: immediately afterwards, tens of thousands, in fact hundreds of thousands, of pieces of content were removed using automation. That balancing act, which is not really possible to achieve, means companies are willing to sacrifice accuracy, to say, okay, it's fine if we get content moderation decisions wrong, as long as we catch as large an amount of content as possible. One specific example I can mention here is Meta's decision to lower the threshold for its hate speech classifiers directly in the aftermath of the October 7th attack, so that these classifiers would detect and act on comments in Arabic, specifically those coming from Palestine. The confidence threshold was lowered from, I think, around 85% all the way down to 25%, meaning that at that very low confidence level the classifier could remove and hide people's comments. Again, the emphasis was on removal rather than on precision or accuracy in the decisions. Now, what does that mean for users, for people and their ability to use those platforms freely and safely to express themselves? We've had situations where people were banned from commenting for days.
We've had people with really, extremely innocuous content, I mean just Palestinian flags or watermelon emojis, being removed. We've even had people receiving warnings before following particular accounts. For instance, if you were a journalist known for covering the events in Palestine, or in Gaza more specifically, you would get a notification saying, are you sure you want to follow this person, because they're known for spreading disinformation. I'm talking about credible, professional journalists, not influencers or content creators. So we've had many examples where hundreds if not thousands of people had their content removed as a result of this tension, in which companies tend to tilt towards over-moderation or aggressive moderation rather than accuracy. Lastly, what I want to say is this: okay, I'm not an expert on LLMs, but what concerns me most is that we are at the cusp of yet another era of new technologies, or a new iteration of technologies, in which there is a lot of promise, but for which there have yet to be proper human rights impact assessments. And that's something you capture excellently in your report: we still don't have access to these systems, it's hard to independently audit them, and therefore hard to understand, and to work with the companies on, what the risks are and how they can be mitigated before the systems are rolled out at scale. And then we as civil society find ourselves in the position of having to document the harm, try to connect the dots, and understand: okay, why is a certain population at a certain time being subject to censorship, and what could be the underlying reasons? And then provide that as evidence for platforms so they can correct course and adjust the systems and the policies behind them. I'll stop here. Thanks so much.
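The threshold change Marwa describes can be made concrete with a small sketch of the precision-versus-recall tradeoff. All scores and labels below are invented for illustration; only the 85% and 25% threshold figures come from the account above.

```python
# Toy sketch: lowering a hate-speech classifier's confidence threshold
# trades precision for recall. Scores and labels are invented; only
# the 85% and 25% thresholds come from the discussion above.

# (classifier score, true label: 1 = actually violating, 0 = benign)
predictions = [
    (0.95, 1), (0.90, 1), (0.80, 0), (0.60, 1),
    (0.45, 0), (0.35, 0), (0.30, 1), (0.20, 0),
]

def precision_recall(threshold):
    """Precision and recall if everything scoring >= threshold is removed."""
    flagged = [(s, y) for s, y in predictions if s >= threshold]
    tp = sum(y for _, y in flagged)                       # violating, removed
    fp = len(flagged) - tp                                # benign, removed anyway
    fn = sum(y for s, y in predictions if s < threshold)  # violating, missed
    precision = tp / (tp + fp) if flagged else 1.0
    recall = tp / (tp + fn) if (tp + fn) else 1.0
    return precision, recall

print(precision_recall(0.85))  # (1.0, 0.5): precise, but misses half the violations
print(precision_recall(0.25))  # (~0.57, 1.0): catches everything, removes benign posts too
```

At the low threshold, every benign post scoring above 0.25 is also swept up, which is exactly the over-enforcement pattern users reported.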


Marlene Owizniak: And before I open it up to the floor, I just wanted to highlight a few of the key risks we found, following up on the speakers' points. I really do encourage you to read our report. We distilled it down to 70 pages, which is still quite long. We read over 200 computer science papers, brought a human rights legal analysis to them, and tried to make it more digestible by having different chapters: every right is its own chapter, and there's also a technical primer. Some of the concepts David shared, which are very common in the AI and computer science world, are less so in policy, and vice versa. And then Marwa said we need more human rights impact assessments. BSR was brought up several times; there's a handful of orgs and people doing that work, but it's really concerning that there are so few human rights impact assessments for this big an impact. Among the key LLM impacts we found, on the benefits side, because there can be some potential use cases: LLMs are typically better at assessing context. So if we are going to use automated content moderation, they typically perform better; the accuracy level is higher than with traditional machine learning. They can also be better for personalized content moderation. We talk a lot about user empowerment and agency: if folks want to adjust their own moderation settings, if someone is comfortable with sensitive content, for example gore or nudity, they can choose that, while others can filter it out. And LLMs can be better at informing users in real time why their content was removed, for example, and what steps they can take to remedy it. There's such a big gap today in explaining to users why their content was removed and what they can do to appeal. That said, a few key risks specific to LLMs. One: there's so much content.
And I often say, I've been working in AI for a long time, for those who know me, that AI is neither artificial nor intelligent. It uses a lot of infrastructure, a lot of hardware, and it's mostly guesstimates. LLM, and I should have begun with that, stands for large language model. It's basically statistics on steroids. It's not divine intelligence; it's just a lot of data with a lot of computing power, which is also one of the reasons the field is so concentrated. What happens when systems have so much data, often from web scraping, is that they can infer sensitive attributes, much more than traditional ML systems can. And when we think about the relationships between governments and companies today, that really puts minorities at risk of being targeted and increasingly surveilled. Marwa and folks here already talked about over- and under-enforcement. Unfortunately, marginalized groups are impacted by both false positives and false negatives. That means, to use Marwa's example, Palestinian content is overly censored while, at the same time, genocidal, hateful content is not removed from the platform. Hallucination is a very typical generative AI and LLM problem. When companies rely on LLM-driven content moderation to moderate misinformation, for example, the LLMs can put out really confident-sounding statements that are just wrong. Using that to inform human content moderators or automated removal often leads to errors and inaccuracy. One last thing I will mention: our organization, ECNL, works a lot on protests, civic space, assembly, and association. These are often actions and content that are, by default, contrarian, minority, anti-power. You usually protest something: a powerful institution. And if you think about AI, both traditional machine learning and LLMs, they are statistical bell curves, and minority content falls outside the dataset. Marwa and David, I think, hinted at that when we think about crises.
And, quote-unquote, exceptional content. So even the best-intentioned platforms will make errors, just because this is content that falls outside the datasets and the bell curve; it is, by definition, exceptional or contrarian. That's really something to consider when you think about assembly and protests. I'll leave it at that. We also have a lot of work on participation, so I encourage you all to check that out and reach out. And I would love to open it up now. One of the things we've been thinking a lot about at ECNL after doing this human rights impact assessment is: now what? What kind of recommendations can we make to AI developers and deployers? What are we still missing in the academic and civil society community? What gaps are there? If anybody has thoughts on that, I would love to hear from you. Otherwise, any other questions? Folks online, please either write your question in the chat and I'll bring it to the floor, or you can raise your hand. I'll take a couple of questions because we have limited time. Please raise your hand if you have a question. For now there's only one, so please, yes. And if you could introduce yourself, name and affiliation, that would be great. Oh, yeah, the mic is over there, I'm sorry. So please line up and go to the mic if you'd like to ask a question.
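The personalized moderation settings mentioned above can be sketched as a simple per-user filter. The categories, posts, and default blocklist here are all hypothetical; the point is that the same content stream renders differently depending on what each user has opted in to see.

```python
# Toy sketch of personalized moderation settings: the same stream is
# filtered differently per user preference. Categories, posts, and
# defaults are hypothetical.

from dataclasses import dataclass, field

@dataclass
class Post:
    text: str
    labels: set = field(default_factory=set)  # categories a classifier assigned

DEFAULT_BLOCKED = {"gore", "nudity", "hate"}

def visible_posts(posts, user_allows=frozenset()):
    """Hide posts whose labels intersect the blocked set, minus any
    categories this user has explicitly opted in to see."""
    blocked = DEFAULT_BLOCKED - set(user_allows)
    return [p for p in posts if not (p.labels & blocked)]

feed = [
    Post("news report"),
    Post("graphic war footage", {"gore"}),
]

print(len(visible_posts(feed)))                        # 1: default hides gore
print(len(visible_posts(feed, user_allows={"gore"})))  # 2: this user opted in
```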


Audience: Yes, so this is Balthazar from University College London. I'm just wondering if there are any avenues for other actors, in this case government and civil society, to influence the technical design of the LLMs used for content moderation within digital platforms. Or is that largely proprietary to these social media companies, with no way to influence the technical life cycle, so to speak, so that we just rely on tech platforms to decide what the next iteration of the LLM is going to be? Or is there some kind of external human-in-the-loop mechanism, so that civil society or government can exert influence in a more technical sense, to complement legal and programmatic interventions? Thank you.


Marlene Owizniak: Thanks so much. I have a few thoughts, but I'll hand it over to the panel first.


Dhanaraj Thakur: Sure, thanks, great question. One piece of feedback that came up a lot in our research was to engage with greater community leadership and participation in the building of LLMs. That can come, for example, in the form of building datasets, and ownership of datasets, for the specific kinds of content that go into LLMs. There are many examples around the world of this happening, with local researchers and communities coming together to build LLMs, models, and language technologies for specific purposes outside of social media. The problem that came up a lot, and I don't know if others have thoughts on this, was that the social media companies that do engage are often not aware of these communities of local LLM developers and researchers, or of these kinds of efforts, which they could benefit a lot from. There's a gap there, a disconnect. I think that's one area as well. There are also significant opportunities for governments, and even industry, to invest in these kinds of partnerships and to support those kinds of efforts.


David Sullivan: Just building on that, I do think there is an opportunity coming up, where there is a lot of enthusiasm and interest within the trust and safety community for open source. In particular, there's a new project called ROOST, Robust Open Online Safety Tooling, where a lot of companies are coming together to open source some of the technologies and tools in this space. That's been a sticky area when it comes to really challenging online safety issues, but I think there is an opportunity there, and there are ways for people to get involved. So I think that's one positive to look at.


Panelist 1: Plus one to involving communities from the get-go, from the start. And yes, I can confirm that I don't think social media companies are connected to local developers or local LLM experts; I certainly don't see that happening in the region. I would also say that, in addition to these voluntary multi-stakeholder mechanisms or fora, maybe there should be space for mandatory human rights impact assessments, throughout the development cycle: starting from the very beginning, during the launch and rollout of those systems, and, of course, following any adjustments or modifications of such systems and their use.


Marlene Owizniak: Yeah, and I'll just add briefly on the AI lifecycle. ECNL has been working with Discord on piloting what we call a framework for meaningful engagement, where from the first stage of the lifecycle, AI design, they're developing machine learning and LLM-driven interventions to moderate content online. We've been partnering with them, and with stakeholders around the world, some of you in this room, on helping them do that. It's a very specific case study, and I can't share more about it. Another place where I think folks can be involved is after the deployment stage. The way LLMs work is that they're trained, and then there's a whole validation and evaluation phase, often done through reinforcement learning from human feedback. I won't go into details, but it basically requires people to go through the outputs and retrain the model on them. One thing we've been advocating for at ECNL is to involve communities at that stage as well. What typically happens during this reinforcement learning phase is that it's mostly Silicon Valley folks, or experts like, probably, us in this room, who do that, not the communities affected. And it's very, very homogeneous: people from elite academic institutions, big-name NGOs, and those based in Silicon Valley. That's a problem, because this phase is supposed to, quote-unquote, fix or improve the LLM, but it just ends up perpetuating even more bias. So there are many ways to involve folks, and that is definitely step one, I think, to actually making these systems better. Next question, please.
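The homogeneity problem in the human-feedback phase can be shown with a toy majority vote. The post, the annotator pools, and their labels are all invented; the point is only that the make-up of the rater pool determines which label the model is nudged towards during retraining.

```python
# Toy sketch: the composition of an RLHF-style rater pool decides
# which label the model is retrained towards. All data is invented.

from collections import Counter

def majority_label(votes):
    """The label the model would be nudged towards after feedback."""
    return Counter(votes).most_common(1)[0][0]

post = "protest slogan in a low-resource language"

# A homogeneous pool unfamiliar with the context:
homogeneous_pool = ["remove", "remove", "allow"]
# A pool that includes reviewers from the affected community:
diverse_pool = ["allow", "allow", "remove"]

print(majority_label(homogeneous_pool))  # "remove"
print(majority_label(diverse_pool))      # "allow"
```

The same piece of content gets a different "ground truth" depending on who rated it, which is why involving affected communities at this stage changes what the model learns.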


Audience: I'm part of the Internet Architecture Board in the IETF, the Internet Engineering Task Force. I have a very straightforward, or maybe blunt, question. I understood that LLMs will always make mistakes; I think that seems obvious. But do you think there is an area where LLMs can be involved and do a good job here? Or do you think there will always be a need for other mechanisms that keep humans involved? Or do you think LLMs are not the right technology at all, and we need other ways to empower users and give them a decision about which content they want to engage with and see? So, you know, what's the way forward?


Marlene Owizniak: Want to briefly say?


Dhanaraj Thakur: Sure, I can take a quick stab. I think there are two dimensions to this. One is how companies address content moderation, and there are actually many different models, not just the kind of centralized model you see with large social media companies. We should keep that in mind. You can imagine, say, a subreddit moderator thinking of ways to use these kinds of tools for their specific use case, and it could be very helpful for community building in that sense. So just keep in mind that there's a range of options available. But when we think of it at a larger scale, human moderators should always be part of the consideration and the calculus of how you address content. I think others, like David and Marwa, mentioned this as well. What's important is flexibility: for some of the particularly low-resource languages, where there's very little data and these kinds of tools may not be as effective, there should be a heavier emphasis on human moderation. And that could evolve over time. But I think there will always be some kind of combination of the two.
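The flexibility Dhanaraj describes, heavier human review where automated tools are weak, can be sketched as a simple routing rule. The language codes and the confidence cut-off below are invented for illustration, not taken from any real platform.

```python
# Toy routing sketch: content in languages where automated tools are
# weak goes straight to human review; elsewhere, only low-confidence
# cases do. Language list and threshold are invented.

LOW_RESOURCE = {"qu", "ta", "sw"}  # e.g. Quechua, Tamil, Swahili
AUTO_THRESHOLD = 0.9

def route(lang, classifier_confidence):
    if lang in LOW_RESOURCE:
        return "human_review"      # weak models: humans first
    if classifier_confidence >= AUTO_THRESHOLD:
        return "auto_action"       # confident enough to automate
    return "human_review"          # otherwise keep a human in the loop

print(route("qu", 0.99))  # human_review, despite high model confidence
print(route("en", 0.95))  # auto_action
print(route("en", 0.50))  # human_review
```

Note that the low-resource branch ignores the classifier's confidence entirely, reflecting the point that a confident score from a poorly trained model is not trustworthy.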


David Sullivan: I would just add that there's an excellent research paper Google folks put out in 2024 about how LLMs can be leveraged to support human raters of content, which goes into this at a level of technical detail that is beyond me, but might be helpful. And cognizant of all the risks we've discussed around how AI can be misused in content moderation, I do think that when you think about AI as a technology that is sometimes overhyped and lacking business applications, content moderation and trust and safety is a concrete business application for AI, and one where the developers and deployers are often the same company. So I think there are some opportunities there, but they come with all the risks we've talked about.


Audience: Just a very quick follow-up. You said humans should be part of the chain, but I think the challenge is always scaling up, and also timely reaction, right? Can you comment on that?


Marlene Owizniak: Excuse me, there are people waiting behind you and we're short on time, but I urge you to read the report; we have a large section on recommendations, and that is the next step forward. We only have two minutes, so briefly, please.


Audience: I'm Professor Julia Hornley, a professor of internet law at Queen Mary University of London. I'm an academic and a lawyer, hence my question. Obviously, for lawyers, it takes an extremely long qualification for a judge to adjudicate content, right? For lawyers, these are very, very complex decisions, whereas, as I understand it, LLMs and artificial intelligence are based on complex processes of labeling and validation. So I was wondering whether, in addition to LLMs, which, by definition, will always have these problems, which you so often…


Dhanaraj Thakur: So there are models where you really engage communities on the data: how you classify it, what categories of data are important, and who ultimately owns it and becomes its steward. That kind of emphasis is very different from the current models. Having LLM developers partner with these kinds of communities in those contexts is one approach. It's very similar to the kinds of community-based internet networks you're talking about, and it also introduces different kinds of business models.


Marlene Owizniak: Thanks so much. And unfortunately, the session is already wrapping up; there's so much to be said about this topic. I hope one takeaway you have is that this is an emerging field. There's still too little transparency, and we should urge platforms to share more data, including on how and when LLMs are used; we often don't even know that. That is one thing. One of the things we try to do at ECNL is really document the human rights harms, as opposed to the AI hype; there's a lot of hype in this space, as you probably all know. At the same time, there's excitement around community-driven models, like Dhanaraj talked about. So not everything is doom and gloom: there is hope for community-driven, public interest, fit-for-purpose models that I think we can explore, and for finding a way to develop AI that respects labor rights, including those of human content moderators, respects users, and engages stakeholders. Going forward, we will obviously continue to work closely with our partners, many of you in this room, to implement the recommendations with the platforms and test them as well; they're very much ongoing, and you'll see that in the last section of our report. Please reach out if you want to get involved. This conversation is only starting. Who knows if LLMs will even be widely deployed; they're very expensive to run to begin with, which is something we didn't really talk about. But in any case, hearing your voices and concerns is really important. So thank you so much for being here, and happy IGF.



Marlene Owizniak

Speech speed

168 words per minute

Speech length

2596 words

Speech time

926 seconds

LLMs can reinforce existing systemic discrimination, censorship, and surveillance

Explanation

While LLMs are presented as a solution to content moderation issues, research shows they can actually exacerbate existing problems. They perpetuate and amplify discriminatory patterns present in their training data, leading to biased enforcement that disproportionately affects marginalized communities.


Evidence

ECNL conducted research for the past year working with hundreds of different folks across civil society, academia, and industry on mapping human rights impacts of LLMs for content moderation


Major discussion point

Human Rights Impacts and Risks of LLMs in Content Moderation


Topics

Human rights | Legal and regulatory | Sociocultural


Disagreed with

– David Sullivan
– Panelist 1

Disagreed on

Optimism about LLM potential versus focus on current harms


Concentration of power at foundation model level creates homogeneity of speech globally

Explanation

A handful of companies develop foundation models like ChatGPT, Claude, Gemini, and Llama, which are then used by smaller platforms. Any content moderation decisions made at the foundation level automatically trickle down to all deploying platforms unless explicitly fine-tuned, creating unprecedented uniformity in global speech regulation.


Evidence

Example given: defining Palestinian content as terrorist content at foundation level will trickle down to deployer level platforms like Discord, Reddit, Slack unless explicitly fine-tuned


Major discussion point

Human Rights Impacts and Risks of LLMs in Content Moderation


Topics

Human rights | Legal and regulatory | Economic


LLMs can infer sensitive attributes more than traditional ML systems, putting minorities at risk of targeting and surveillance

Explanation

Due to the vast amount of data LLMs are trained on through web scraping, they can deduce sensitive personal characteristics about users far beyond what traditional machine learning systems could detect. This capability, combined with government-company relationships, creates significant surveillance risks for vulnerable populations.


Major discussion point

Human Rights Impacts and Risks of LLMs in Content Moderation


Topics

Human rights | Cybersecurity | Legal and regulatory


Marginalized groups face both over-enforcement and under-enforcement of content moderation

Explanation

Vulnerable communities experience a double burden where their legitimate content is excessively censored (false positives) while harmful content targeting them remains on platforms (false negatives). This creates a situation where they are both silenced and unprotected simultaneously.


Evidence

Palestinian content is both overly censored and at the same time, genocidal, hateful content is not removed from the platform


Major discussion point

Human Rights Impacts and Risks of LLMs in Content Moderation


Topics

Human rights | Sociocultural | Legal and regulatory


Agreed with

– Panelist 1

Agreed on

Crisis periods lead to over-reliance on automation with harmful consequences


Protest and contrarian content falls outside statistical bell curves, making it vulnerable to errors

Explanation

Content related to protests and civic activism is inherently contrarian and anti-establishment, representing minority viewpoints that fall outside the statistical norms that AI systems are trained on. Since LLMs operate on statistical patterns, this exceptional content is systematically misclassified, even by well-intentioned platforms.


Evidence

ECNL works on protests, civic space, assembly and association – actions that are by default contrarian, minority, anti-power; AI systems are statistical bell curves and minority content falls outside datasets


Major discussion point

Human Rights Impacts and Risks of LLMs in Content Moderation


Topics

Human rights | Sociocultural | Legal and regulatory



Dhanaraj Thakur

Speech speed

179 words per minute

Speech length

1857 words

Speech time

619 seconds

Wide disparity exists between high-resource languages like English and low-resource languages in available training data

Explanation

English has orders of magnitude more textual data available than any other language due to historical factors like British colonialism and American technological dominance. This creates a fundamental inequality where major languages with millions of speakers are still considered ‘low-resource’ for AI training purposes.


Evidence

Examples include Swahili, Tamil, Maghrebi Arabic dialects, and Bahasa Indonesia, languages with hundreds of millions of speakers that are still low-resource; the legacy of British colonialism and American neocolonialism are mentioned as contributing factors


Major discussion point

Language Inequities and Multilingual Challenges


Topics

Sociocultural | Human rights | Development


Agreed with

– Panelist 1

Agreed on

Language inequities create systematic discrimination in content moderation


Code switching, agglutinative language structures, and diglossia create additional challenges for LLM analysis

Explanation

Many users naturally alternate between languages in their communications, while some languages build complex meanings through word construction that would require entire sentences in other languages. Additionally, colonial language relationships create power dynamics where different languages within the same text carry different social meanings.


Evidence

Quechua and Tamil mentioned as agglutinative languages; diglossia example of Quechua-Spanish relationship where Spanish represents power/status and Quechua represents mundane functions


Major discussion point

Language Inequities and Multilingual Challenges


Topics

Sociocultural | Human rights | Legal and regulatory


Users experience longer moderation times, unjust content removal, and shadow banning in low-resource languages

Explanation

Due to technical limitations and lack of native speakers on trust and safety teams, content in low-resource languages takes significantly longer to moderate. Users also report widespread unjust removal of legitimate content and perceived shadow banning, leading them to develop resistance tactics.


Evidence

Users employ algospeak, insert random letters into words, and use substitute emojis such as the watermelon emoji for Palestine, among other tactics to circumvent system weaknesses
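The evasion dynamic described above can be illustrated with a toy keyword filter (a hypothetical sketch, not any platform's actual system): trivial character substitutions are enough to defeat exact-match detection, which is why users adopt algospeak against brittle automated moderation.

```python
# Toy illustration: a naive keyword filter and how algospeak evades it.
# The banned term is hypothetical; real systems are more complex but face
# the same cat-and-mouse dynamic.
BANNED = {"watermelon"}

def naive_flag(text: str) -> bool:
    """Flag text if any banned keyword appears verbatim."""
    words = text.lower().split()
    return any(w in BANNED for w in words)

print(naive_flag("watermelon season"))    # True: exact match is caught
print(naive_flag("w4termelon season"))    # False: one substituted character evades the filter
```

Each new evasion spelling forces the filter to be updated, while broader pattern matching risks flagging unrelated legitimate words.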


Major discussion point

Language Inequities and Multilingual Challenges


Topics

Human rights | Sociocultural | Legal and regulatory


Agreed with

– Panelist 1

Agreed on

Language inequities create systematic discrimination in content moderation


Greater community leadership and participation needed in building LLMs and datasets

Explanation

Local communities should have ownership and stewardship over the data and classification systems used to moderate their content. This approach would ensure cultural context and community values are properly represented in AI systems rather than imposing external standards.


Evidence

Examples of local researchers and communities building LLMs and language technologies for specific purposes outside social media; emphasis on community ownership of datasets


Major discussion point

Community Engagement and Governance Solutions


Topics

Sociocultural | Human rights | Development


Agreed with

– Marlene Owizniak
– Panelist 1

Agreed on

Community engagement and participation is critical for effective AI systems


Social media companies lack awareness of local LLM developers and researchers

Explanation

There is a significant disconnect between social media platforms and local communities of AI researchers and developers who could provide valuable expertise for their specific languages and contexts. This gap prevents companies from benefiting from existing local knowledge and community-driven solutions.


Major discussion point

Community Engagement and Governance Solutions


Topics

Development | Economic | Sociocultural


Agreed with

– Marlene Owizniak
– Panelist 1

Agreed on

Community engagement and participation is critical for effective AI systems


Different moderation models exist beyond centralized approaches, including community-based solutions

Explanation

Content moderation doesn’t have to follow the centralized model of large social media companies. Alternative approaches like subreddit moderation show how LLM tools could be adapted for specific community use cases, potentially being more effective for community building and context-appropriate moderation.


Evidence

Example of subreddit moderator using these tools for their specific use case


Major discussion point

Technical Limitations and Future Considerations


Topics

Sociocultural | Legal and regulatory | Economic


Flexibility needed with heavier emphasis on human moderation for low-resource languages

Explanation

Given the technical limitations of LLMs with low-resource languages, these contexts require a greater reliance on human moderators rather than automated systems. This balance should be flexible and can evolve over time as technology improves, but human oversight remains essential.


Major discussion point

Technical Limitations and Future Considerations


Topics

Human rights | Sociocultural | Development


Agreed with

– David Sullivan
– Audience

Agreed on

Human involvement remains essential in content moderation systems


Disagreed with

– David Sullivan
– Panelist 1

Disagreed on

Role of automation versus human moderation in content decisions


D

David Sullivan

Speech speed

170 words per minute

Speech length

1898 words

Speech time

666 seconds

LLMs can enhance risk assessments, improve policy development consultation, and augment human review rather than replace it

Explanation

Generative AI can analyze emerging abuse patterns, identify edge cases, connect risk factors, and assist in red teaming exercises. For policy development, LLMs can help gather and synthesize input from surveys, focus groups, and external experts more coherently, while in enforcement they can provide context to human reviewers and route clear violations away from human review.


Evidence

DTSP worked with BSR on research with partner companies; examples include analyzing emerging content abuse patterns, brainstorming attack scenarios, creating feedback loops with civil society organizations


Major discussion point

Technical Applications and Use Cases


Topics

Legal and regulatory | Cybersecurity | Economic


Agreed with

– Dhanaraj Thakur
– Audience

Agreed on

Human involvement remains essential in content moderation systems


Disagreed with

– Dhanaraj Thakur
– Panelist 1

Disagreed on

Role of automation versus human moderation in content decisions


Generative AI can improve explainability of content moderation decisions and provide better context to users

Explanation

Current content moderation appeals provide very little information to users about why their content was removed. LLMs have the potential to offer more detailed explanations and specific guidance on how to correct violations, such as identifying the specific problematic segments in longer content.


Evidence

Example given of hour-long video where system could identify the specific two minutes that were violative


Major discussion point

Technical Applications and Use Cases


Topics

Human rights | Legal and regulatory | Sociocultural


Models struggle with novel challenges not adequately represented in training data

Explanation

LLMs perform poorly when encountering new types of content crises or abuse patterns that weren’t present in their training data. This limitation is particularly concerning given the unpredictable nature of online harms and the high stakes involved in content moderation decisions.


Major discussion point

Technical Applications and Use Cases


Topics

Cybersecurity | Legal and regulatory | Human rights


Trade-offs exist between precision and recall metrics, with real consequences for harmful content detection

Explanation

Companies must constantly balance between precision (accuracy of decisions) and recall (coverage of content). Optimizing for one metric necessarily compromises the other, and these technical trade-offs have direct real-world impacts on both the spread of harmful content and the wrongful removal of legitimate speech.
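The trade-off can be made concrete with a small worked example using hypothetical classifier scores and ground-truth labels (not data from the session): raising the decision threshold improves precision at the cost of recall, and lowering it does the reverse.

```python
# Sketch of the precision/recall trade-off at two decision thresholds.
# Scores are hypothetical classifier confidences; labels mark truly violative items.
scores = [0.95, 0.90, 0.80, 0.60, 0.40, 0.30, 0.20, 0.10]
labels = [1,    1,    0,    1,    0,    1,    0,    0]

def precision_recall(threshold):
    """Precision and recall when everything scoring >= threshold is flagged."""
    flagged = [y for s, y in zip(scores, labels) if s >= threshold]
    tp = sum(flagged)                      # flagged items that are truly violative
    fn = sum(labels) - tp                  # violative items the threshold missed
    precision = tp / len(flagged) if flagged else 1.0
    recall = tp / (tp + fn)
    return precision, recall

print(precision_recall(0.85))  # (1.0, 0.5): few flags, all correct, half the harm missed
print(precision_recall(0.25))  # (~0.67, 1.0): all harm caught, a third of flags wrong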


Major discussion point

Technical Applications and Use Cases


Topics

Legal and regulatory | Human rights | Cybersecurity


Open source initiatives like ROOST provide opportunities for collaborative safety tooling

Explanation

The Robust Open Online Safety Tooling (ROOST) project represents a new approach in which companies collaborate to open source trust and safety technologies. This initiative could provide avenues for broader community involvement in developing content moderation tools, though open-sourcing has historically been difficult in this sensitive area.


Evidence

ROOST (Robust Open Online Safety Tooling) project mentioned as new collaborative effort


Major discussion point

Community Engagement and Governance Solutions


Topics

Legal and regulatory | Development | Economic


Content moderation represents a concrete business application for AI with specific technical opportunities

Explanation

Unlike many overhyped AI applications, content moderation provides a genuine business use case where AI developers and deployers are often the same company. This alignment creates opportunities for more integrated and effective solutions, though it comes with all the associated risks discussed.


Evidence

Reference to Google research paper from 2024 about LLMs supporting human content raters


Major discussion point

Technical Limitations and Future Considerations


Topics

Economic | Legal and regulatory | Cybersecurity


Disagreed with

– Marlene Owizniak
– Panelist 1

Disagreed on

Optimism about LLM potential versus focus on current harms


P

Panelist 1

Speech speed

153 words per minute

Speech length

1676 words

Speech time

656 seconds

Arabic content is over-moderated while Hebrew content is under-moderated due to classifier availability disparities

Explanation

During the 2021 violence surge, META had Arabic language classifiers for hate speech detection but no Hebrew classifiers, leading to systematic bias in enforcement. Even after Hebrew classifiers were developed following criticism, they remained poorly trained and ineffective at detecting incitement to violence and genocide.


Evidence

Business for Social Responsibility human rights due diligence found no Hebrew classifiers existed in 2021; after October 7th, Hebrew classifiers were still inadequately trained to capture hate speech, incitement to violence and genocide


Major discussion point

Language Inequities and Multilingual Challenges


Topics

Human rights | Legal and regulatory | Sociocultural


Agreed with

– Dhanaraj Thakur

Agreed on

Language inequities create systematic discrimination in content moderation


Translation errors have led to false terrorism accusations and wrongful arrests

Explanation

Automated translation systems have made critical errors with severe real-world consequences, including false terrorism alerts that led to police arrests. These errors demonstrate how technical failures in AI systems can directly harm individuals through interaction with law enforcement and security systems.


Evidence

Palestinian construction worker arrested by Israeli police after Facebook mistranslated a ‘good morning’ post with a cigarette photo as ‘good morning, I’m going to attack them’; Instagram bio ‘praise be to God, I’m Palestinian’ mistranslated as ‘praise be to God, Palestinian terrorists are fighting for their freedom’


Major discussion point

Language Inequities and Multilingual Challenges


Topics

Human rights | Cybersecurity | Legal and regulatory


Companies over-rely on automation during crises, sacrificing accuracy for speed of content removal

Explanation

During crisis periods, platforms prioritize rapid content removal over accurate decision-making, accepting high error rates to avoid liability. This approach systematically disadvantages affected communities who need platforms most during critical moments to document abuses and share information.


Evidence

After the October 7th attack, hundreds of thousands of pieces of content were removed using automation; companies feel pressure to remove content quickly to avoid liability


Major discussion point

Crisis Response and Over-Moderation


Topics

Human rights | Legal and regulatory | Cybersecurity


Agreed with

– Marlene Owizniak

Agreed on

Crisis periods lead to over-reliance on automation with harmful consequences


Disagreed with

– David Sullivan
– Dhanaraj Thakur

Disagreed on

Role of automation versus human moderation in content decisions


Confidence thresholds are lowered during crises, leading to mass censorship of legitimate content

Explanation

META lowered hate speech classifier confidence thresholds from around 85% to 25% specifically for Arabic content from Palestine after October 7th. This dramatic reduction meant that classifiers with very low confidence could automatically remove or hide content, leading to widespread censorship of legitimate expression.


Evidence

META lowered confidence thresholds for hate speech classifiers from ~85% to 25% for Arabic language content from Palestine; resulted in people banned from commenting for days, Palestinian flags and watermelon emojis removed
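A minimal sketch of why such a threshold drop matters, using hypothetical scores and ground-truth labels rather than any real platform data: lowering the removal cut-off sweeps in items the classifier is unsure about, and those low-confidence items are disproportionately legitimate speech.

```python
# Sketch: wrongful removals when a removal threshold is lowered.
# (score, violative) pairs are hypothetical; 1 = truly violative, 0 = legitimate.
items = [(0.92, 1), (0.88, 1), (0.70, 0), (0.45, 1),
         (0.30, 0), (0.28, 0), (0.26, 0), (0.10, 0)]

def wrongly_removed(threshold):
    """Count legitimate items removed at a given confidence threshold."""
    return sum(1 for score, violative in items
               if score >= threshold and not violative)

print(wrongly_removed(0.85))  # 0 legitimate posts removed at a ~85% threshold
print(wrongly_removed(0.25))  # 4 legitimate posts removed at a 25% threshold
```

The same mechanism scaled to millions of posts produces the mass censorship of legitimate expression described above.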


Major discussion point

Crisis Response and Over-Moderation


Topics

Human rights | Legal and regulatory | Sociocultural


Agreed with

– Marlene Owizniak

Agreed on

Crisis periods lead to over-reliance on automation with harmful consequences


Aggressive counter-terrorism content moderation wrongly removes content 77% of the time in Arabic

Explanation

Automated systems designed to detect and remove terrorist content in Arabic language have an extremely high error rate, incorrectly flagging legitimate content as terrorism-related in more than three-quarters of cases. This massive failure rate particularly impacts regions already subject to aggressive counter-terrorism measures.


Evidence

77% error rate specifically mentioned for Arabic content removal related to terrorism


Major discussion point

Crisis Response and Over-Moderation


Topics

Human rights | Cybersecurity | Legal and regulatory


Critical infrastructure like Al-Aqsa Mosque has been mislabeled as terrorist organization during sensitive periods

Explanation

Instagram falsely flagged Al-Aqsa Mosque, the third holiest site in Islam, as a terrorist organization in 2021, causing all related hashtags and content to be banned. This error occurred precisely when the Israeli army stormed the mosque and people were trying to document and report on the events.


Evidence

Al-Aqsa Mosque flagged as terrorist organization by Instagram in 2021 when Israeli army stormed the mosque, resulting in all hashtags and related content being banned


Major discussion point

Crisis Response and Over-Moderation


Topics

Human rights | Sociocultural | Legal and regulatory


Mandatory human rights impact assessments should be required throughout the AI development lifecycle

Explanation

Current voluntary approaches are insufficient to address the scale of human rights harms from AI systems. Comprehensive, mandatory assessments should be conducted from initial development through deployment and any subsequent modifications to ensure human rights considerations are embedded throughout the process.


Major discussion point

Community Engagement and Governance Solutions


Topics

Human rights | Legal and regulatory | Development


Agreed with

– Marlene Owizniak
– Dhanaraj Thakur

Agreed on

Community engagement and participation is critical for effective AI systems


Disagreed with

– David Sullivan
– Marlene Owizniak

Disagreed on

Optimism about LLM potential versus focus on current harms


A

Audience

Speech speed

253 words per minute

Speech length

390 words

Speech time

92 seconds

LLMs will always make mistakes and require human involvement in content moderation

Explanation

Given the inherent limitations of LLM technology, there will always be errors in automated content moderation systems. The question becomes whether there are areas where LLMs can perform adequately, or if alternative approaches like user empowerment and choice should be prioritized over automated moderation entirely.


Major discussion point

Technical Limitations and Future Considerations


Topics

Legal and regulatory | Human rights | Economic


Agreed with

– David Sullivan
– Dhanaraj Thakur

Agreed on

Human involvement remains essential in content moderation systems


Complex legal decisions require extensive qualification, raising questions about AI’s capability for nuanced judgments

Explanation

Legal professionals undergo extensive training and qualification to make content-related decisions that judges would typically handle in court systems. This raises fundamental questions about whether AI systems, regardless of their sophistication, can adequately handle the nuanced legal and ethical judgments required for content moderation.


Evidence

Reference to judges requiring extensive qualification to adjudicate content and lawyers finding these very complex decisions


Major discussion point

Technical Limitations and Future Considerations


Topics

Legal and regulatory | Human rights | Sociocultural


Agreements

Agreement points

Human involvement remains essential in content moderation systems

Speakers

– David Sullivan
– Dhanaraj Thakur
– Audience

Arguments

LLMs can enhance risk assessments, improve policy development consultation, and augment human review rather than replace it


Flexibility needed with heavier emphasis on human moderation for low-resource languages


LLMs will always make mistakes and require human involvement in content moderation


Summary

All speakers agree that despite technological advances, human oversight and involvement in content moderation remains crucial. LLMs should augment rather than replace human judgment, particularly for low-resource languages and complex decisions.


Topics

Human rights | Legal and regulatory | Sociocultural


Community engagement and participation is critical for effective AI systems

Speakers

– Marlene Owizniak
– Dhanaraj Thakur
– Panelist 1

Arguments

Greater community leadership and participation needed in building LLMs and datasets


Social media companies lack awareness of local LLM developers and researchers


Mandatory human rights impact assessments should be required throughout the AI development lifecycle


Summary

There is strong consensus that meaningful community involvement from the beginning of AI development is essential, including local researchers, affected communities, and comprehensive stakeholder engagement throughout the AI lifecycle.


Topics

Human rights | Development | Sociocultural


Language inequities create systematic discrimination in content moderation

Speakers

– Dhanaraj Thakur
– Panelist 1

Arguments

Wide disparity exists between high-resource languages like English and low-resource languages in available training data


Users experience longer moderation times, unjust content removal, and shadow banning in low-resource languages


Arabic content is over-moderated while Hebrew content is under-moderated due to classifier availability disparities


Summary

Both speakers agree that significant language disparities in AI training data and system development lead to discriminatory outcomes, with non-English and particularly Arabic content facing systematic bias and poor moderation quality.


Topics

Human rights | Sociocultural | Legal and regulatory


Crisis periods lead to over-reliance on automation with harmful consequences

Speakers

– Marlene Owizniak
– Panelist 1

Arguments

Marginalized groups face both over-enforcement and under-enforcement of content moderation


Companies over-rely on automation during crises, sacrificing accuracy for speed of content removal


Confidence thresholds are lowered during crises, leading to mass censorship of legitimate content


Summary

Both speakers identify that during crisis situations, platforms increase automated moderation at the expense of accuracy, disproportionately harming marginalized communities who need platforms most during critical moments.


Topics

Human rights | Legal and regulatory | Cybersecurity


Similar viewpoints

Both speakers highlight how AI systems create disproportionate surveillance and enforcement risks for minority and marginalized communities, with particularly severe impacts on Arabic-speaking populations.

Speakers

– Marlene Owizniak
– Panelist 1

Arguments

LLMs can infer sensitive attributes more than traditional ML systems, putting minorities at risk of targeting and surveillance


Aggressive counter-terrorism content moderation wrongly removes content 77% of the time in Arabic


Topics

Human rights | Cybersecurity | Legal and regulatory


Both speakers see potential in alternative, more collaborative approaches to content moderation that move beyond centralized corporate control toward community-driven and open-source solutions.

Speakers

– David Sullivan
– Dhanaraj Thakur

Arguments

Open source initiatives like ROOST provide opportunities for collaborative safety tooling


Different moderation models exist beyond centralized approaches, including community-based solutions


Topics

Development | Economic | Sociocultural


Both speakers recognize that AI systems inherently struggle with content that falls outside normal patterns, whether it’s protest content or novel challenges, due to their statistical nature.

Speakers

– Marlene Owizniak
– David Sullivan

Arguments

Protest and contrarian content falls outside statistical bell curves, making it vulnerable to errors


Models struggle with novel challenges not adequately represented in training data


Topics

Human rights | Legal and regulatory | Cybersecurity


Unexpected consensus

Industry-civil society collaboration potential

Speakers

– David Sullivan
– Dhanaraj Thakur
– Marlene Owizniak

Arguments

Open source initiatives like ROOST provide opportunities for collaborative safety tooling


Greater community leadership and participation needed in building LLMs and datasets


ECNL has been working with Discord on piloting a framework for meaningful engagement


Explanation

Despite the critical tone toward tech companies throughout the discussion, there was unexpected consensus that meaningful collaboration between industry and civil society is both possible and necessary, with concrete examples of successful partnerships already emerging.


Topics

Development | Legal and regulatory | Economic


Technical limitations acknowledgment across all stakeholders

Speakers

– David Sullivan
– Dhanaraj Thakur
– Panelist 1
– Audience

Arguments

Models struggle with novel challenges not adequately represented in training data


Code switching, agglutinative language structures, and diglossia create additional challenges for LLM analysis


Translation errors have led to false terrorism accusations and wrongful arrests


Complex legal decisions require extensive qualification, raising questions about AI’s capability for nuanced judgments


Explanation

Surprisingly, even the industry representative openly acknowledged significant technical limitations of LLMs, creating consensus across all stakeholders about the fundamental constraints of current AI technology for content moderation.


Topics

Legal and regulatory | Human rights | Sociocultural


Overall assessment

Summary

The discussion revealed strong consensus on key issues: the necessity of human involvement in content moderation, the critical importance of community engagement, the systematic discrimination created by language inequities, and the harmful over-reliance on automation during crises. There was also unexpected agreement on the potential for industry-civil society collaboration and honest acknowledgment of technical limitations.


Consensus level

High level of consensus on fundamental principles and problems, with implications suggesting that despite different perspectives, there is a shared foundation for developing more equitable and effective approaches to AI-driven content moderation. This consensus provides a strong basis for collaborative solutions that prioritize human rights, community involvement, and technical humility.


Differences

Different viewpoints

Role of automation versus human moderation in content decisions

Speakers

– David Sullivan
– Dhanaraj Thakur
– Panelist 1

Arguments

LLMs can enhance risk assessments, improve policy development consultation, and augment human review rather than replace it


Flexibility needed with heavier emphasis on human moderation for low-resource languages


Companies over-rely on automation during crises, sacrificing accuracy for speed of content removal


Summary

David Sullivan emphasizes LLMs augmenting rather than replacing human review and sees potential for AI to improve content moderation processes. Dhanaraj Thakur advocates for heavier human moderation especially for low-resource languages. Panelist 1 criticizes the over-reliance on automation during crises, arguing companies prioritize speed over accuracy.


Topics

Human rights | Legal and regulatory | Sociocultural


Optimism about LLM potential versus focus on current harms

Speakers

– David Sullivan
– Marlene Owizniak
– Panelist 1

Arguments

Content moderation represents a concrete business application for AI with specific technical opportunities


LLMs can reinforce existing systemic discrimination, censorship, and surveillance


Mandatory human rights impact assessments should be required throughout the AI development lifecycle


Summary

David Sullivan expresses cautious optimism about LLMs as concrete business applications with genuine opportunities. Marlene Owizniak and Panelist 1 focus more heavily on documenting current harms and the need for stronger regulatory oversight, with less emphasis on potential benefits.


Topics

Human rights | Legal and regulatory | Economic


Unexpected differences

Degree of technical optimism about LLM capabilities

Speakers

– David Sullivan
– Marlene Owizniak

Arguments

Generative AI can improve explainability of content moderation decisions and provide better context to users


LLMs can infer sensitive attributes more than traditional ML systems, putting minorities at risk of targeting and surveillance


Explanation

Despite both being from organizations that work closely with tech companies, David maintains more optimism about LLM potential for improving user experience and transparency, while Marlene emphasizes how the same capabilities create surveillance risks. This disagreement is unexpected given their similar institutional positions and shared concern for human rights.


Topics

Human rights | Cybersecurity | Legal and regulatory


Overall assessment

Summary

The main areas of disagreement center on the appropriate balance between automation and human oversight, the level of optimism about LLM potential versus focus on current harms, and implementation approaches for community engagement. While all speakers acknowledge both benefits and risks of LLMs, they differ significantly in emphasis and proposed solutions.


Disagreement level

Moderate disagreement with significant implications. The speakers share fundamental concerns about human rights impacts but differ on whether to focus on improving current systems or implementing stronger regulatory oversight. These differences could lead to divergent policy recommendations and advocacy strategies, potentially affecting how the technology develops and is regulated.


Partial agreements



Takeaways

Key takeaways

LLMs in content moderation pose significant human rights risks including reinforcing discrimination, censorship, and surveillance while concentrating power among a few foundation model companies


Language inequities are severe – low-resource languages face longer moderation times, higher error rates, and systematic bias compared to high-resource languages like English


During crises, platforms over-rely on automation and lower confidence thresholds, leading to mass censorship of legitimate content while sacrificing accuracy for speed


LLMs perform better than traditional ML at understanding context and can improve explanations given to users, but they struggle with novel content and confidently hallucinate incorrect information


Marginalized communities face both over-enforcement (false positives) and under-enforcement (false negatives) simultaneously


Protest and contrarian content is inherently vulnerable to AI moderation errors because it falls outside statistical norms by definition


Community-driven, culturally-informed AI models show promise as alternatives to centralized foundation models


Resolutions and action items

Implement mandatory human rights impact assessments throughout the entire AI development lifecycle from design to deployment


Establish frameworks for meaningful engagement that involve affected communities from the AI design stage through deployment


Increase investment in partnerships with local LLM developers and researchers, particularly for low-resource languages


Develop open source safety tooling initiatives like ROOST to enable collaborative approaches


Involve affected communities in reinforcement learning and human feedback processes rather than relying solely on Silicon Valley experts


Require platforms to provide greater transparency about how and when LLMs are used in content moderation


Document human rights harms systematically to counter AI hype with evidence-based analysis


Unresolved issues

How to balance precision versus recall metrics in content moderation without causing systematic harm to marginalized groups


Whether LLMs are fundamentally the right technology for content moderation or if alternative user empowerment approaches should be prioritized


How to scale human involvement in content moderation while maintaining timely responses during crises


How to address the economic sustainability of LLMs given their high operational costs


How to prevent the replication of colonial language hierarchies and power dynamics in multilingual AI systems


How to ensure adequate representation of indigenous and minority languages in AI development


How to create effective oversight mechanisms for proprietary AI systems used by social media companies


Suggested compromises

Implement blended human-AI approaches that augment rather than replace human moderators, with flexibility to emphasize human moderation more heavily for low-resource languages


Use LLMs to enhance human reviewer capabilities by providing better context and routing obviously violative content away from humans rather than fully automating decisions


Develop community-specific moderation models that can be tailored to different contexts (like subreddit moderators) rather than relying solely on centralized approaches


Create tiered systems where confidence thresholds and automation levels can be adjusted based on language resources and cultural context


Establish partnerships between large tech companies and local researchers/communities to combine resources with cultural expertise


Thought provoking comments

Any kind of decision made at the foundation level, let’s say, defining Palestinian content as terrorist content will then also trickle down to the deployer level unless it’s explicitly fine-tuned. What this means for freedom of expression globally is that content moderation defined at the foundation level will also be replicated on the deployment one and really there’s even more homogeneity of speech as before.

Speaker

Marlene Owizniak


Reason

This comment crystallizes one of the most critical structural issues with LLM-based content moderation – the concentration of power and how biases cascade through the entire ecosystem. It moves beyond technical discussions to highlight the systemic implications for global freedom of expression.


Impact

This framing established the power dynamics theme that ran throughout the discussion, setting up the foundation for later speakers to provide concrete examples of how this plays out in practice, particularly Marwa’s examples from the MENA region.


There were no Hebrew classifiers to moderate hate speech in Hebrew language, but they were such for Arabic. One would ask a question here is that why, despite the context was very clear, there were high incitement… the company did not think that it was a priority at the time to roll out classifiers that would be able to automatically detect and remove such harmful and potentially violative content.

Speaker

Marwa Fatafta


Reason

This comment exposes the political dimensions of seemingly technical decisions about language support in AI systems. It reveals how resource allocation decisions by tech companies can systematically disadvantage certain communities while protecting others, even in contexts of clear harm.


Impact

This shifted the discussion from abstract concerns about bias to concrete examples of how technical decisions have real-world consequences for vulnerable populations. It demonstrated how the ‘concentration of power’ issue Marlene introduced manifests in practice.


There’s also the issue of diglossia… Often in many of these languages, particularly in those that have gone through, like, the colonial experience, there’s a combined use of two languages… one would represent power status and issues of importance… whereas [the other] would be used… a more mundane function… to what extent will the development of models around this… replicate or exacerbate this kind of power dynamics between these two languages.

Speaker

Dhanaraj Thakur


Reason

This comment introduces sophisticated linguistic and postcolonial analysis to the technical discussion, showing how LLMs might not just fail to understand languages but actively perpetuate colonial power structures embedded in language use patterns.


Impact

This deepened the conversation by connecting historical colonialism to contemporary AI systems, adding a crucial dimension that moved the discussion beyond technical performance metrics to questions of historical justice and power reproduction.


From my observation… companies tend to over rely on automation around times of crises. And particularly when there are attacks… they feel like under pressure that they need to remove as fast as possible large amounts of content… companies are willing to sacrifice… accuracy in the decisions, as long as we try to catch as large amounts of content as possible.

Speaker

Marwa Fatafta


Reason

This insight reveals how crisis situations create perverse incentives that amplify the worst aspects of automated content moderation, showing how the precision vs. recall trade-off becomes weaponized against marginalized communities during their most vulnerable moments.


Impact

This comment connected the technical discussion of precision vs. recall that David had introduced to real-world crisis scenarios, showing how technical trade-offs become political choices with severe consequences for human rights during critical moments.
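The precision vs. recall trade-off referenced here can be made concrete with a toy example (the scores and labels below are illustrative, not session data): lowering the removal threshold during a crisis catches more violating content (higher recall) at the cost of wrongly removing more legitimate content (lower precision).

```python
def precision_recall(scores, labels, threshold):
    """Compute precision and recall for a given removal threshold.
    scores: model violation scores; labels: 1 = actually violating.
    """
    flagged = [label for s, label in zip(scores, labels) if s >= threshold]
    tp = sum(flagged)                      # violating items removed
    fp = len(flagged) - tp                 # legitimate items removed
    fn = sum(labels) - tp                  # violating items missed
    precision = tp / (tp + fp) if flagged else 1.0
    recall = tp / (tp + fn) if (tp + fn) else 1.0
    return precision, recall

# Toy data: mixed scores for violating (1) and legitimate (0) posts.
scores = [0.9, 0.8, 0.7, 0.6, 0.4, 0.3, 0.2]
labels = [1,   1,   0,   1,   1,   0,   0]

# Strict threshold: only confident removals, so precision is high
# but half the violating content is missed.
p_hi, r_hi = precision_recall(scores, labels, 0.75)   # (1.0, 0.5)

# Loose "crisis mode" threshold: all violating content is caught,
# but legitimate content is swept up as collateral.
p_lo, r_lo = precision_recall(scores, labels, 0.35)   # (0.8, 1.0)
```

The second configuration mirrors the behaviour Fatafta describes: maximising recall at the expense of precision, with the false positives disproportionately borne by the communities whose content is already hardest for models to classify.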


AI is neither artificial nor intelligent. It uses a lot of infrastructure, a lot of hardware, and it’s mostly guesstimates… LLMs… is basically statistics on steroids. It’s not divine intelligence. It’s just a lot of data with a lot of computing power, which is also one of the reasons why it’s so concentrated.

Speaker

Marlene Owizniak


Reason

This demystifying comment cuts through AI hype to reveal the material and statistical reality of these systems, directly connecting their technical limitations to their concentrated ownership structure.


Impact

This reframing helped ground the discussion in material reality rather than technological mysticism, providing a foundation for more realistic policy discussions and connecting technical limitations to economic concentration.


Even the best intentioned platforms will make errors just because this is content that falls outside of data sets and the bell curve. It is, by definition, exceptional or contrarian… our organization… we work a lot on protests, civic space, assembly and association. These are often actions and content that are, by default, contrarian, minority, anti-power, protest.

Speaker

Marlene Owizniak


Reason

This comment reveals a fundamental incompatibility between the statistical nature of AI systems and the protection of dissent and protest rights, showing how the technology is structurally biased against the very content that democratic societies most need to protect.


Impact

This insight shifted the conversation from fixable bias problems to fundamental structural incompatibilities, suggesting that some human rights issues with LLMs may be inherent rather than solvable through better training or fine-tuning.


Overall assessment

These key comments transformed what could have been a technical discussion about AI performance into a sophisticated analysis of power, colonialism, and structural inequality. The speakers successfully connected abstract technical concepts to concrete human rights harms, while revealing how seemingly neutral technical decisions embed political choices. The discussion evolved from identifying problems to understanding their systemic nature – moving from ‘LLMs make mistakes’ to ‘LLMs systematically reproduce and amplify existing power structures.’ The comments also demonstrated how crisis situations exploit these structural vulnerabilities, making the stakes clear and urgent. Most importantly, the speakers avoided both uncritical AI hype and complete technological pessimism, instead providing a nuanced analysis that grounds policy recommendations in material reality while maintaining focus on community-driven alternatives.


Follow-up questions

How can social media companies better connect with local LLM developers and researchers in different regions?

Speaker

Dhanaraj Thakur


Explanation

There is a disconnect between social media companies and the local communities developing LLMs; these local models could benefit content moderation systems, but companies are often unaware of the efforts


What are the specific technical details of how LLMs can support human content raters?

Speaker

David Sullivan


Explanation

Sullivan referenced a Google research paper from 2024 about leveraging LLMs to support human raters but noted the technical details were beyond his expertise


How can scaling and timely reactions be addressed when humans are part of the content moderation chain?

Speaker

Audience member from IETF


Explanation

This addresses the fundamental challenge of balancing human oversight with the need for rapid, large-scale content moderation


How can complex legal decisions about content be effectively translated into LLM training and validation processes?

Speaker

Professor Julia Hornley


Explanation

Legal content decisions require extensive qualification and expertise, raising questions about how this complexity can be captured in AI systems


What alternative business models could support community-based LLM development for content moderation?

Speaker

Dhanaraj Thakur


Explanation

Community-driven models require different economic structures than current centralized approaches


Will LLMs actually be widely deployed given their high operational costs?

Speaker

Marlene Owizniak


Explanation

The economic viability of LLMs for content moderation remains uncertain due to expensive computational requirements


How can platforms be urged to share more data about when and how LLMs are used in content moderation?

Speaker

Marlene Owizniak


Explanation

Lack of transparency makes it difficult to assess and improve LLM-based content moderation systems


How can reinforcement learning phases better involve affected communities rather than just Silicon Valley experts?

Speaker

Marlene Owizniak


Explanation

Current human feedback processes are homogenous and may perpetuate bias rather than improve LLM performance


What are the most effective ways for governments and civil society to influence technical LLM design beyond legal interventions?

Speaker

Balthazar from University College London


Explanation

Understanding pathways for external stakeholders to impact proprietary AI systems used by social media companies


How can mandatory human rights impact assessments be implemented throughout the AI development lifecycle?

Speaker

Marwa Fatafta


Explanation

Current voluntary assessments are insufficient given the scale of human rights impacts from LLM-based content moderation


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Open Forum #51 Strengthening Cyber Resilience in Global Posts Logistics


Session at a glance

Summary

The UPU Open Forum on Strengthening Cybersecurity for the Global Posts and Logistics Sector brought together experts to discuss the growing cyber threats facing postal services worldwide as they undergo digital transformation. Kevin Hernandez from the Universal Postal Union presented alarming findings from a survey of 52 countries, revealing that postal services now offer digital services extending far beyond traditional mail delivery (71% offer e-commerce services alone), yet their cybersecurity preparedness remains inadequate. The survey showed that most basic cyber hygiene practices are implemented by fewer than two-thirds of posts, with developing regions in Latin America and the Caribbean, Asia-Pacific, and Africa being particularly vulnerable.


Nigel Cassimire from the Caribbean Telecommunications Union highlighted the region’s digital transformation challenges and described the CTU’s partnership with the UPU through a memorandum of understanding to enhance postal cybersecurity capabilities. Floreta Faber from Albania’s National Cyber Security Authority shared concrete examples of cyber threats, noting that over 15% of cyber attacks in Albania target postal services, primarily through domain impersonation and phishing campaigns designed to exploit public trust in postal brands. She emphasized the importance of proactive measures including staff training, early detection systems, and joint incident response protocols.


Mats Lillesund from Norwegian Post stressed the critical importance of collaboration between postal organizations and industry stakeholders, citing successful sector-specific initiatives like the Nordic financial CERT as models for information sharing. Tracy Hackshaw outlined the UPU’s comprehensive cyber resilience program, including the secure .post domain initiative and plans for a postal sector Information Sharing and Analysis Center (ISAC) to facilitate threat intelligence collaboration. The discussion concluded with recognition that securing postal services requires both technical solutions and human capacity building, as postal workers increasingly serve as the interface between citizens and digital services in an interconnected world.


Keypoints

## Major Discussion Points:


– **Current State of Postal Cybersecurity**: Kevin Hernandez presented alarming findings from a UPU survey of 52 countries, revealing that posts are rapidly expanding digital services (71% offering e-commerce, 58% digital financial services) but have poor cybersecurity implementation rates. Only basic practices like secure websites are implemented by two-thirds of posts, while critical measures like cybersecurity training and incident response plans lag significantly behind.


– **Regional Disparities in Cyber Preparedness**: The discussion highlighted stark regional differences in cybersecurity readiness, with developing regions—particularly Latin America and the Caribbean, Asia-Pacific, and Africa—showing the lowest implementation rates of cyber hygiene best practices and inadequate budget allocations despite increasing cybersecurity workloads.


– **Real-World Threat Landscape**: Floretta Faber from Albania’s National Cyber Security Authority provided concrete examples of postal sector attacks, noting that over 15% of cyber attacks in Albania target postal services through domain impersonation and phishing campaigns, demonstrating that these threats affect both developed and developing nations.


– **Collaborative Solutions and Partnerships**: Multiple panelists emphasized the critical importance of cross-sector collaboration, with examples including the CTU-UPU partnership in the Caribbean, Norway’s sector-specific CERT initiatives, and the UPU’s development of collaborative platforms like the postal ISAC (Information Sharing and Analysis Center).


– **UPU’s Cyber Resilience Program**: Tracy Hackshaw outlined comprehensive UPU initiatives including the .post secure domain infrastructure, the secure.post platform for threat detection, and plans for a global postal ISAC to facilitate secure information sharing among postal operators and their supply chain partners.


## Overall Purpose:


The discussion aimed to assess the current state of cybersecurity in the global postal and logistics sector, identify vulnerabilities and regional disparities, and explore collaborative solutions to strengthen cyber resilience as postal services increasingly become digital service hubs offering e-commerce, financial services, and e-government solutions.


## Overall Tone:


The discussion maintained a professional and urgent tone throughout, beginning with concern as alarming statistics were presented about the sector’s cyber vulnerabilities. The tone evolved to become more constructive and solution-oriented as panelists shared successful initiatives and collaborative approaches. While the gravity of the cybersecurity challenges was consistently acknowledged, the conversation remained optimistic about the potential for improvement through partnership, knowledge sharing, and the implementation of comprehensive cyber resilience programs.


Speakers

– **Mayssam Sabra** – Moderator from the DotPost Business Management Unit of the Postal Technology Center of the UPU (Universal Postal Union)


– **Kevin Hernandez** – Digital Inclusion Expert at the Universal Postal Union, works on Connect.Post project


– **Floreta Faber** – Deputy Director General and Director for International Project Coordination and Strategic Cyber Security Development at the Albanian National Cyber Security Authority


– **Nigel Cassimire** – Deputy Secretary General of the Caribbean Telecommunications Union (CTU)


– **Mats Lillesund** – Director of Governance and Communication Group Security at Posten Bring AS (Norwegian Post)


– **Tracy Hackshaw** – Head of the Dotpost Business Management Unit of the Universal Postal Union


– **Ihita Gangavarapu** – Representative of Youth IGF India, works in external threat monitoring


**Additional speakers:**


None identified beyond the speakers listed above.


Full session report

# UPU Open Forum on Strengthening Cybersecurity for the Global Posts and Logistics Sector: Discussion Report


## Executive Summary


The Universal Postal Union’s Open Forum on Strengthening Cybersecurity for the Global Posts and Logistics Sector brought together international experts to address cybersecurity challenges facing postal services as they expand their digital offerings. Moderated by Mayssam Sabra from the UPU’s DotPost Business Management Unit, the forum featured presentations from the UPU, national cybersecurity authorities, postal operators, and regional telecommunications unions examining current threats and collaborative solutions.


The discussion highlighted how postal services are evolving beyond traditional mail delivery into comprehensive digital service providers, creating new cybersecurity vulnerabilities that require coordinated responses across the global postal network.


## Digital Services Survey Findings


Kevin Hernandez, Digital Inclusion Expert at the Universal Postal Union, presented findings from a survey of 52 countries revealing the extent of postal services’ digital transformation. The data showed that 71% of postal services now promote economic inclusion through e-commerce services, 58% offer digital financial services for financial inclusion, and 51% provide e-government services for social inclusion. Over one-third (34%) of postal services show signs of becoming comprehensive “one-stop shops” combining multiple inclusion services.


However, this expansion has not been matched by corresponding cybersecurity improvements. Secure websites were the only surveyed cyber hygiene practice implemented by at least two-thirds of postal services, and essential security measures, including cybersecurity training programmes and incident response plans, lag well behind service expansion. The survey also revealed regional disparities, with developing regions showing lower implementation rates of cybersecurity best practices.


A significant concern identified was the budget-workload mismatch: while 70% of postal services report increased cybersecurity workloads, less than half have increased their cybersecurity budget allocations accordingly.


Hernandez also mentioned the Connect.Post project, which aims to connect all post offices to the internet, further emphasizing the digital transformation underway across the postal sector.


## Regional Perspectives


### Caribbean Telecommunications Union


Nigel Cassimire, Deputy Secretary General of the Caribbean Telecommunications Union, acknowledged that digital transformation in the Caribbean remains relatively underdeveloped. The CTU signed a memorandum of understanding with the UPU in 2023 to promote digital transformation and enhance cybersecurity capabilities across Caribbean postal services. The CTU conducts digital readiness assessments in member states to identify specific vulnerabilities and development needs.


### Albania’s National Cyber Security Authority


Floreta Faber, Deputy Director General of Albania’s National Cyber Security Authority, provided specific threat intelligence data showing over 6,500 indicators of compromise from January to March 2025, with over 15% linked to postal services. These attacks primarily involve domain impersonation and phishing campaigns exploiting public trust in postal brands.
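Domain impersonation of the kind described here is commonly detected by measuring how close a newly observed domain is to a legitimate one. The sketch below is illustrative only (the domain names are invented, and this is not the Albanian authority's tooling): it flags domains within a small edit distance of an official postal domain.

```python
def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance computed with a rolling dynamic-programming row."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def looks_like_impersonation(domain: str, official: str, max_dist: int = 2) -> bool:
    """Flag domains within a small edit distance of the official domain,
    excluding the official domain itself."""
    return domain != official and edit_distance(domain, official) <= max_dist

# Hypothetical example: a zero substituted for the letter "o".
looks_like_impersonation("myp0st.example", "mypost.example")   # flagged
looks_like_impersonation("unrelated.example", "mypost.example")  # not flagged
```

Real monitoring pipelines add many more signals (homoglyph tables, certificate transparency feeds, newly registered domain lists), but edit distance against trusted brand domains is a common first filter for the phishing campaigns described above.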


Albania experienced significant state-sponsored cyber attacks in 2022 affecting over 1,200 e-government services, leading to comprehensive cybersecurity reforms. Faber emphasized that attacks on postal services extend beyond operational disruption to undermine public confidence in government institutions, noting that “the human layer is still the weakest link in cybersecurity attacks.”


### Norwegian Post


Mats Lillesund, Director of Governance and Communication Group Security at Norwegian Post, shared Norway’s experience with collaborative cybersecurity initiatives, highlighting the success of the Nordic Financial CERT in creating effective information-sharing mechanisms across the financial sector. He emphasized developing a culture of openness regarding security incidents, moving beyond traditional competitive secrecy to embrace collaborative defense strategies.


Norway faces similar global threats including sophisticated fraud campaigns using postal logos and computer-based attacks, but their collaborative approach has enhanced detection, response, and recovery capabilities through shared intelligence.


## UPU Cyber Resilience Programme


Tracy Hackshaw, Head of the DotPost Business Management Unit, outlined the UPU’s comprehensive cybersecurity initiatives designed to strengthen global postal cybersecurity.


### The .post Domain Initiative


The .post domain provides secure digital identity and services specifically for postal operators, offering enhanced security features and collaborative threat intelligence. The UPU has developed special funding packages for Small Island Developing States and Least Developed Countries, with QR codes provided for accessing these packages.


### Secure.post Platform


The secure.post platform offers URL checking services for suspicious links, with planned expansion to include comprehensive cybersecurity testing and learning resources. The Trust.post platform is currently live, providing accessible security tools for postal operators.


### Postal Sector Information Sharing and Analysis Center (ISAC)


The UPU is developing a postal sector ISAC to facilitate secure collaboration and threat intelligence sharing among postal operators and supply chain partners including airlines, shipping companies, delivery partners, and technology vendors. This platform will enable confidential collaboration and coordinated incident responses across the global postal ecosystem.


## Discussion and Q&A


### Human Factors and Workforce Development


During the discussion, speakers addressed the human element in cybersecurity. When asked about job creation versus replacement through digitalization, Hernandez emphasized that digitalization should focus on upskilling postal staff rather than replacing jobs, positioning postal workers as skilled digital service facilitators providing “digital services with a human touch.”


### External Threat Monitoring


Ihita Gangavarapu from Youth IGF India raised questions about balancing internal and external threat monitoring for resource-constrained organizations. The discussion highlighted the complexity of monitoring threats across diverse supply chains involving multiple stakeholders, each representing potential vulnerability points.


### Physical Infrastructure Considerations


A final question addressed the relationship between digital transformation and physical infrastructure resilience, including disaster fallback capabilities. This highlighted the need to consider both digital and physical security aspects as postal services expand their technological capabilities.


## Key Collaborative Approaches


All speakers emphasized the importance of collaboration in addressing postal cybersecurity challenges, though they proposed different models:


– Regional partnerships and assessments (CTU approach)


– National authority cooperation with postal operators (Albanian model)


– Sector-specific information sharing (Nordic Financial CERT model)


– Global collaborative platforms (UPU ISAC initiative)


These approaches are complementary, addressing various aspects of cybersecurity cooperation at national, regional, and global levels.


## Conclusion


The forum demonstrated both the scope of cybersecurity challenges facing postal services and the potential for collaborative solutions. As postal services transform into comprehensive digital service platforms, they face new vulnerabilities requiring sophisticated cybersecurity responses. The gap between service expansion and security preparedness, particularly in developing regions, represents an urgent challenge requiring immediate attention.


The collaborative frameworks outlined—from regional partnerships to global information sharing platforms—provide a foundation for coordinated action. The UPU’s cyber resilience programme, combined with national and regional initiatives, offers a multi-layered approach addressing diverse needs across the global postal network.


The discussion positioned postal cybersecurity as essential for maintaining public trust in digital services and supporting digital inclusion objectives. As postal services continue expanding their digital offerings, their security becomes increasingly critical not only for operational continuity but for broader national digital infrastructure protection.


Session transcript

Mayssam Sabra: Good afternoon, and also good morning and good evening to our online participants who may be joining us from different time zones. Welcome, and thank you for being here with us for our UPU Open Forum on Strengthening Cybersecurity for the Global Posts and Logistics Sector. My name is Mayssam Sabra from the DotPost Business Management Unit of the Postal Technology Center of the UPU, and I will be your moderator for this session. Just quickly, for those who may not know what the UPU is: the UPU is the Universal Postal Union, a United Nations agency dedicated to the postal sector. What we do is mainly coordinate international postal policies and standards among our postal operators and member countries. We also assist postal operators in the transformation of their services toward secure digital connectivity, we promote collaboration, and we help ensure efficient and secure mail delivery worldwide. As you know, the Global Postal Network plays a critical role in facilitating trade, communication and economic development. However, as we rely more on digital technologies, we also face increasing cyber threats such as ransomware, phishing attacks, data breaches and supply chain attacks, and these not only disrupt our operations but can also compromise sensitive data, damage reputation, and erode the trust of consumers and businesses. So in today’s session, we aim to explore how we can strengthen our cybersecurity and how we can enhance cyber resilience across the sector. We will be discussing strategies and best practices to build trust and security in the postal and logistics sector. I am honored to share this platform with five distinguished panelists today. On my left, I have Mrs. Floreta Faber, the Deputy Director General and Director for International Project Coordination and Strategic Cyber Security Development at the Albanian National Cyber Security Authority, and Mr. 
Mats Lillesund, the Director of Governance and Communication Group Security at Posten Bring AS, or Norwegian Post, and Mr. Kevin Hernandez, Digital Inclusion Expert at the Universal Postal Union. On my right, I have Mr. Nigel Cassimire, the Deputy Secretary General of the Caribbean Telecommunications Union, CTU, and Mr. Tracy Hackshaw, the Head of the Dotpost Business Management Unit of the Universal Postal Union. We have a limited amount of time, so I encourage all of you to please keep your remarks concise so we can hear from everyone. And for our online participants, if you have any questions, please type them in the chat box and we will raise them on your behalf. Now let’s kick off our discussion, and I would like to start immediately with Kevin Hernandez. Kevin, you are the Digital Inclusion Expert at the Universal Postal Union, and lately you have been working on a digital services report for the postal services in which you dedicated a specific section to cybersecurity for posts. So maybe you could walk us through the state of cybersecurity in the postal sector so we can then discuss your findings. Please proceed.


Kevin Hernandez: Thank you very much for the introduction, Mayssam. So as Mayssam said, my name is Kevin Hernandez. I am a Digital Inclusion Expert at the UPU, where I work on a project called Connect.Post with the goal of connecting all post offices in the world to the internet and transforming them into one-stop shops for essential digital services. For all of you interested in the project I just mentioned, I’m not going to speak about it in detail in this presentation, but I have some concept notes that I can share with you; they’re here in the front, and there are also some at the .Post booth in the village. And although Connect.Post is not strictly a cybersecurity project, ensuring that these newly connected post offices and the services that they offer are secure is one of our biggest concerns, and I will explain why in this presentation. So as Mayssam mentioned, I was recently working on a report for the UPU, which we call the Digital Panorama Report. It is based on a survey which we did on digital services and cybersecurity, and 52 countries responded. The survey found that posts are offering many more digital services than we were even expecting. We thought that posts offered digital services, but we could never imagine how much. And these services go well beyond the postal sector, which is really important to highlight, because posts are not just offering digital postal services but are now also offering digital services across multiple sectors. And this is super exciting from an inclusion standpoint because, I don’t know if any of you know, but there are over 650,000 post offices in the world, the majority of which are located in rural areas, which are precisely the places where people are less likely to use the internet and where people are most at risk of being left behind. 
So digital services offered through the posts have significant potential to promote inclusion. For example, our survey found that 71% of posts are promoting economic inclusion for SMEs through e-commerce services, 58% are promoting financial inclusion through digital financial services, 51% are promoting social inclusion through e-government services, 11% are promoting universal health coverage through digital health services, and also 70% are directly contributing to bridging the digital divide and promoting digital inclusion by providing at least one digital connectivity service or solution. And going one step further, our survey also found that more than a third, So 34% of posts show signs of becoming a one-stop shop for economic, financial, social, and digital inclusion by providing all three services at once. So namely, digital financial services, e-commerce services, and e-governments all under the same roof. So this is one place where citizens can go and access all of these services. And this helps mitigate the risk of digital exclusion for less connected groups, while also helping governments achieve multiple public policy objectives related to these areas, and also the overarching leaving no one behind SDG goal. Also, we found that posts are offering these services through multiple channels. So as you would expect, the main channel that posts use to deliver digital services is a digitally equipped post office counter through interaction with postal staff. And this is, once again, this is especially useful for less connected users because they can receive help accessing a service in person that they may otherwise not be able to access on their own due to a lack of internet access, not having an adequate device, or not having the necessary digital skills to access that service on their own. 
However, many posts are also offering these digital services through fully digital channels, like a website or an app, while some posts are even leveraging their delivery staff to deliver these services through staff that are equipped with digital devices, like digital personal assistants, or tablets, or smartphones. And this can be especially useful for very remote communities or people whose mobility might be restricted. And it’s also important to note that in many cases, the post may only act as a physical extension of a partner’s digital service. So it’s not necessarily the case that all of these services belong to the post, but that the post is acting as a trusted partner. So, building on that, as posts begin to offer more and more digital services, they become an even more. critical infrastructure that must be secured, because they are now holding more sensitive data about customers and citizens across multiple sectors and across multiple aspects of life. And this makes the potential consequence of a disruption of a postal operator’s digital system more severe. And these disruptions would disproportionately impact people in rural areas and the elderly who rely on the post for digital services the most. This also makes the impacts of breaches, identity theft, and financial losses even more severe. And although, as I mentioned before, multi-channel service delivery is great from an inclusion perspective, it also opens up even more entry points for cyber attacks. And as a result of all this, the ability for posts to maintain trust is both more important than ever and more difficult. And it’s not just important to maintain this trust for customers or for citizens, but also, as I mentioned before, for partners. 
Because delivering digital financial services, e-government services, and e-commerce requires partnerships with private institutions and companies and government agencies, who would be reluctant to partner with an institution that they see as insecure, especially when the digital service belongs to that institution. So at this point, you might be asking: how secure are posts across the world? Are they ready to offer these services in a secure way? And our survey found that the current state of cyber hygiene best practices within the postal sector is in need of significant improvement. We found suboptimal implementation rates across all cyber hygiene best practices which were surveyed. Secure websites were the only best practice implemented by at least two-thirds of posts. And only two other practices, namely secure staff emails and business continuity plans, were implemented by at least half of posts. Meanwhile, other best practices, such as cybersecurity training, were implemented by less than half of posts. And this is extremely important given that, as mentioned before, posts are utilizing a multi-channel approach to digital service delivery, which means that post staff are likely to be involved in delivering these services, whether it’s at the counter of a post office or through delivery staff equipped with digital devices. And less than half of posts implement cybersecurity risk management plans, and only around 40% have incident response plans and crisis management plans. So this kind of paints a picture of posts that are largely unprepared and unable to adequately respond to cybersecurity threats. And the survey also found a drastic regional difference in the implementation of these best practices. I couldn’t fit all of them on this one slide, but this trend tends to hold true across all of the cyber hygiene best practices that were on the previous slide. 
So posts from developing regions, and in particular three regions, Latin America and the Caribbean, Asia and the Pacific, and Africa, are the least likely to implement these cyber hygiene best practices. And I want to end by highlighting another scary finding from the survey: cybersecurity budgets of posts are not keeping up with their cybersecurity workloads. So although around 70% of posts saw an increase in their cybersecurity workload in the last two years, less than half of posts reported that they increased their cybersecurity budget allocations. And posts, once again, from developing regions, and from those three regions in particular, were the least likely to increase their cybersecurity budgets in the last two years. So not only are they not well prepared from a cybersecurity standpoint, but their budgets are not keeping up with their workload. And one last thing: along with low implementation of cyber hygiene best practices and lagging budget allocations, posts are also not getting national-level support in responding to these cyber attacks. Only 35% of posts were affiliated with a national information security incident response team. So as you can see, there is still a lot of work to do to secure the posts, especially as they begin to offer more digital services from multiple sectors through multiple channels. And that is it for me. I hope this presentation has helped set the stage for the discussion on cyber security and the posts. Thank you.


Mayssam Sabra: Thank you very much, Kevin, for the insightful presentation. It really indicates the urgent need for enhanced cyber resilience in some posts and in some regions. So in light of this, I would like to hear from Nigel. Nigel, as you see from Kevin’s presentation, it indicates a low percentage of cyber security practices being implemented in some posts in the Caribbean region. So maybe now you will walk us through some slides to highlight the digital transformation in the Caribbean and the role that the CTU is playing in improving cyber resilience in the Caribbean. Please proceed, Nigel.


Nigel Cassimire: Yes, thank you. I’m Nigel Cassimire from the Caribbean Telecommunications Union. I’ll be looking at our status of digital transformation in the postal industry, which is not very advanced. So I think that would be part of the reason why Kevin’s results would have shown what he showed for Latin America and the Caribbean. Just to give a little background on the CTU: the CTU is an intergovernmental organization in the Caribbean specializing in ICT. Our members are 20 governments and, should I say, independent states and territories in the Caribbean. And we advise on ICT policy matters, and that includes things related to digital transformation, for example. Our involvement with the postal services in the member states would probably fall under the ICT policy formulation and project coordination parts of our mandate, as shown on the screen there. So, just setting the context for the Caribbean as far as postal digital transformation is concerned: it’s happening in the more general context of our governments pursuing digital transformation generally. They’re looking at introducing e-government services, and they’ve done that in most of our member states, and also facilitating e-commerce generally to get the economies going and to help diversify them. Of course, postal services have been evolving throughout the world, and the Caribbean is no different. We’ve seen traditional mail services going down, while things like courier services in support of e-commerce are going up. There is competition for the traditional postal services now: private companies involved in the courier and delivery businesses, and logistics as well. And our traditional postal services still have their obligations that they must continue to deal with. But in the face of all these environmental changes, there are opportunities for the postal services to modernize and become competitive in their markets. 
So really what you’ve been seeing, some of the initiatives, and a lot of them have been mentioned by Kevin already, they’ve been seeking to enhance their logistics and delivery, they’ve been trying to capitalize on that trusted nature of the postal services and being used as community hubs, delivering government services and products, facilitating access to government services for persons who may not have their own private internet connections. Some of the financial services that Kevin also mentioned, and in some cases even, they may have some facilities to help with some capacity building of the less technically savvy persons in the community. So there may be an area of a post office, especially in rural areas maybe, where the citizenry can come and get some help in maybe accessing some government services. But as the post offices try to modernize their operations and utilize the digital technologies and transform digitally, as mentioned, this comes with the attendant cyber risks and the requirements for resiliency. Now Kevin went into some of the very specific type things, but I’ll talk about the approach the CTU has been taking. Our involvement in the digital transformation and assisting our governments, of course, is more general. And the postal services is just one example. In 2022, at the ITU Plenipotentiary Conference in Romania, we had the opportunity to meet with the Universal Postal Union, and they apprised us of some of the new services that they had developed in the digital sphere. And we decided that, yes, we needed to partner with the UPU to help enhance the quality of our digital transformations in the postal services in the Caribbean. So, shortly after that then, an MOU was signed between the CTU and the UPU, and this is a picture. The initial meeting was in 2022, and this was the first half of 2023, when the Director General of the UPU signed this MOU with the Secretary General of the Caribbean Telecoms Union to cooperate in various areas. 
And the focus of this MOU was to promote the digital transformation of postal services in the Caribbean, and there were some specific things identified in there. One was deployment of the UPU’s digital readiness for e-commerce assessment: that is a program whereby the UPU would come into individual countries and do a comprehensive assessment of the state of the particular industry and the capabilities of the postal services, and make some specific recommendations in terms of how to go forward with modernization and secure digitalization of their operations. We were also seeking to promote the adoption of the UPU’s .post domain by our postal services. That is a secure domain that I think gels well with, and would tend to preserve, the trusted nature of doing business with the post offices. And also implementing the UPU’s Connect.post initiative in the region, which Kevin had mentioned; in fact, he said he works in the Connect.post area. So since that 2023 signing, we have had specific engagements in the Caribbean with, I think, at least three of our member states, typically some of the larger ones, and we do have some specific recommendations now that they are implementing. Within the Caribbean, there’s a Caribbean Postal Union, which is an affiliate of the UPU, and we as the CTU liaise as well with the CPU in terms of fulfilling the requirements of the MOU. So that’s kind of where we are: we are getting the recommendations from the UPU and starting to implement them to enhance the cyber resilience of the postal services in the Caribbean. Thank you.


Mayssam Sabra: Thank you very much, Nigel. The focus of the CTU on cyber security through the adoption of initiatives like the .post domain or the Connect.post program is clear, and it’s absolutely vital to enhance cyber security in the Caribbean region. Thank you very much for highlighting this. Now I would like to hear from Mrs. Floreta Faber. Floreta, you are the Deputy Director General at the Albanian National Cyber Security Authority, but you also coordinate and lead projects on cyber security strategies and development. So, as cyber threats evolve, how do you see these changes impacting cyber security strategies in our organizations? And maybe you can share with us any successful initiatives or best practices from Albania that have improved cyber security in the post and logistics sector.


Floreta Faber: Thank you very much. I’m very happy to be here today and join this discussion. The Albanian National Authority on Cyber Security has been, especially in the last three years, focused on big changes in the cyber security domain in Albania. We had a strong state-sponsored cyber attack in mid-2022 across all e-government services. Albania today has over 1,200 e-services for its citizens; more than 95% of all government services to citizens are delivered online. So a cyber attack on all those services was a heavy blow for a country like Albania and for its democratization processes and transparency with citizens. Since then, we have been undertaking strong reforms on cyber security and taking big steps forward. Part of that transformation is the change of the law on cyber security: we have a new one since May last year, aligned with the EU NIS Directive, and a number of sub-laws on cyber security which also follow the model of European best practices and the NIS Directive. And we look at specific sectors based on their criticality, and the postal services are one of the groups on which we have been working specifically. The postal system in Albania, as was mentioned here from the studies, has increased its level of digitalization. And attacks through the postal service, or using the name of the post, accounted for over 15% of the attacks, or attempted attacks, in the country last year. Out of 88 attacks we had last year, over 15% were through the postal system, and only three of them were successful cases, which the post office dealt with, supported hand in hand by the National Authority on Cybersecurity. We have seen particularly strong impersonation campaigns. And in 2024, the post office, according to our grouping, sits within the transportation group. 
And almost all the attacks last year in this group were through the post system, specifically through domain impersonation, aiming to exploit trust in a public-facing brand. The issue carries significant weight due to its far-reaching impact on public trust, institutional integrity and the stability of critical infrastructures. Attacks on the national post service are not isolated nor random. They have been calculated efforts to exploit institutions that serve as a fundamental touch point for millions of citizens who use postal services and digital services in their daily life. The impersonation of the postal brand and the misuse of digital channels to spread fraudulent messages can erode confidence in public services and amplify the risk of financial and identity-related crimes. In this context, the importance of building cyber resilience in the postal sector goes far beyond the technical issues: it becomes a matter of protecting civic trust and the continuity of essential services in the digital era, at a time when most of the services of the post office are also delivered through digital systems. And these are not isolated cases; these patterns resonate globally. Maybe you have seen that in the last week, the FBI issued a new alert warning iPhone and Android users about widespread smishing campaigns, as highlighted by Forbes and the New York Post as well. Malicious actors are leveraging the names of trusted institutions, including postal services, to deceive, to steal and to destabilize public trust. Our data confirm that this was the case in Albania as well. From January to March 2025 alone, over 6,500 indicators of compromise were found through our national monitoring. Over 15% of them, again, were linked with the post office. 
This means that of the attempted cyber attacks on public infrastructure in Albania, over 15% really go to this one critical infrastructure. And while the numbers remain high, we have seen promising signs. I mentioned that of the attacks we had last year, three of them really caused some issues in the institution. But this year, none of the attempts has come to the point of an incident inside the post office. In response to the growing threat landscape, the Albanian National Authority on Cybersecurity has taken concrete steps in partnership with the Albanian Post Office to strengthen the sector’s resilience. These efforts include targeted cybersecurity training, implementation of early detection systems, real-time monitoring of threat indicators and joint incident response simulations between the authority and the postal system. We are also integrating cybersecurity requirements into the digital modernization roadmap of the national post system. Our approach is not reactive; it is proactive, strategic, and tailored to the unique challenges this sector faces. By aligning operational processes with security protocols, we aim to reduce the attack surface and enhance institutional readiness against evolving cyber threats. The fact that nearly all the attacks in 2025 were smishing-based tells us something crucial: the human layer is still the weakest link in cybersecurity. That’s why our efforts must include not just technical hardening, but also awareness raising among citizens and postal employees. So first, we have been investing heavily in capacity building and cross-sector collaboration. Cybersecurity is not a siloed challenge; it requires ecosystem-level resilience. We have developed sector-specific early warning mechanisms and shared playbooks tailored for public service operators like the post. The national CERT operates 24 hours a day, 7 days a week, overseeing a number of institutions, 15 
institutions, one of them being the post office, because we believe that the strong efforts through this system merit a specific focus on protecting those systems. Second, we are working closely with postal operators to build internal cyber hygiene protocols, and this is part of all the cyber hygiene trainings that we are giving throughout the country. Since 2022 especially, specific priority has been given to the cyber security sector in the country, and last year alone we had over 6,000 people trained on cyber hygiene, including a specific focus on reaching all employees of the post office. Because among all the steps taken to change the cyber ecosystem in Albania and to change the laws, we believe it’s important to work with people and build cyber hygiene and a new culture of cyber security in the country. And the request from people for those trainings in increased numbers means that there is an awareness that people need to know more. Unfortunately this is for bad reasons: because the attacks have been numerous, and some of them have been successful in creating cyber incidents, more and more people are becoming aware of what they should do in order to protect themselves and the institutions they work with. As we reflect on all the developments in the country and in the postal services, we understand that this is a work in progress, and something which doesn’t finish in a year or two. With the increase in the number of technologies used, the increase in the number of attacks, and the increase in the use of AI in cyber attacks, it’s also important that we get prepared technically, and not only on cyber security, even at the post office. And we believe it’s very important that we have a strong bridge between changing technologies and having people prepared to face those cyber attacks.
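The domain impersonation campaigns described above can be made concrete with a small sketch. One common monitoring technique (shown here as an illustrative assumption, not the Albanian authority's actual tooling) is to flag registered domains that sit within a small edit distance of a protected postal brand domain; the brand list and threshold below are hypothetical:

```python
# Illustrative lookalike-domain check for brand impersonation monitoring.
# The protected domain and threshold are hypothetical examples.

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

LEGITIMATE = {"postashqiptare.al"}  # hypothetical protected brand domain

def looks_like_impersonation(domain: str, max_distance: int = 2) -> bool:
    """Flag domains that are near, but not equal to, a protected brand name."""
    return any(
        0 < edit_distance(domain.lower(), legit) <= max_distance
        for legit in LEGITIMATE
    )
```

A feed of newly registered domains could be run through `looks_like_impersonation` to produce candidate indicators of compromise for analyst review; real deployments also check homoglyphs and keyword patterns, which this sketch omits.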


Mayssam Sabra: Thank you very much, Floreta, for these important highlights. The findings you indicated are very important, and we hope the strategies you are developing will help your organizations become more resilient. Now I would like to move to Mr. Mats Lillesund, the Director of Governance and Security at Norway Post. So, Mats, as we mentioned earlier, we rely more and more on digital technologies, we live in an increasingly connected world, and the postal sector faces significant cyber threats. Sometimes we think that these threats are limited to developing countries, but in reality they also target big organizations and big countries. For example, we at the UPU have been informed of several cyber attacks that targeted big posts in big countries. And most of the speeches in today’s opening session emphasized collaboration. I would like to know your view on collaboration between postal organizations and industry stakeholders in enhancing cyber security in our sector, and what initiatives or partnerships Norway Post has taken to foster this cooperation.


Mats Lillesund: Thank you, and thank you for the invitation to be here, ladies and gentlemen. I would like to address your question; it’s a big question, and an interesting one at that. But first, a little background about Posten Bring, where I work, the Norwegian post. We serve the Nordic countries with postal and logistics services, and we have approximately 14,000 employees, but our base is in Norway and Norway Post, where we have the largest market share and also an important societal function. And of course, on a global scale, as you point out, we have the same threat challenges as any other big company in today’s threat landscape, including nation-state threat actors, which I think is an important factor in today’s threat landscape and in addressing cyber resilience. Just to point out a few of those: we have everything from fraud, where people in Norway, for example, are targeted in phishing campaigns using the Posten logo and visual components, to more aggressive computer attacks and vectors like that. So on the measures side, we’re of course addressing this from various angles. We have both human and competence training, we have organizational measures, and of course a lot of technical measures and controls in place to address this. And I think it was very interesting what Kevin and the other panelists described in the different areas. I recognize a lot of them, and I think that we’re always aiming to get better. I think we’re pretty good, but we’re always aiming to get better. And when it comes to cooperation, I think that’s a major factor in resolving issues. Here in Norway, I think we have a very open society, and we have a lot of openness in terms of security incidents. Companies and governments are very open, also in the media, talking about incidents. 
But it also means that this culture is something that you bring into your back channels, with experts talking to experts on various issues. I would like to emphasize something that has grown in the financial sector in Norway and spread into the Nordic countries, called the financial CERT, which is a very good example of how a sector CERT function can work among banks and insurance companies to leverage each other’s capacities and knowledge to address cyber issues. So I think it’s a very important aspect, both to be prepared and to discuss the issues when they arise as incidents.


Mayssam Sabra: Thank you very much. The cooperation you just highlighted is actually very important and inspiring. Thank you very much for your invaluable insights. I would move now, last but not least, to Tracy Hackshaw. Tracy, you are the head of the .post unit at the UPU, and lately you have been working on some cyber resilience initiatives. So if you could please give us some details on these cyber resilience initiatives and the role of the UPU in promoting cyber resilience in the postal and logistics sectors.


Tracy Hackshaw: Good morning, good evening, good afternoon, good night, wherever you are in the world. I know time is short, so I’m going to move pretty swiftly so we can get some questions in, if there are any questions already. I just wanted to make one observation. When Nigel pointed out the work we did with the CPU, the Caribbean Postal Union: just to reiterate that they are actually a .post user. They migrated their website from something else to cpu.post, and they are now running a secure email and secure hosting environment. I just wanted to make that observation. So I’m going to move pretty swiftly and show on this slide, just reiterating, some of the issues with attacks in the postal sector. As you can see from this slide, over the last several years we’ve had quite a number of public reports of cyber attacks in the postal sector, not limited, as was said before, only to developing countries, but also covering countries in North America and Europe and elsewhere. So it’s not limited to countries which are least resourced, but as Kevin’s research pointed out, there’s a heavy risk in those countries where the resources are the least deployed, and therefore they could be seen as potential low-hanging fruit for cyber attackers. As we go into our initiatives, as Mayssam mentioned, we are implementing a series of projects at the UPU within our cyber resilience program. You would have heard mention already of the .post initiative, and essentially that’s a program which looks to utilize the DNS, the domain name system, to secure what we call the edge of the network for the postal sector. 
So if you’re running a website, you’re running email and so on, you will be able to use a secure top-level domain which is what the UPU has, .post, that is dedicated to the postal sector and that you as the postal sector, as a post office, as a postal operator or as a postal player in the sector, meaning providing services, you’re a technology operator, you’re making envelopes, packages. you’re in the custom sector, you’re in the airline sector, you can utilize the .post domain and that’s available today via our Trust.post platform. As you can see from this slide, we’ve established a digital framework in which we look to, you know, wrap the entire sector with a series of services, including, as we just mentioned, our cyber-resilient services. Just briefly mentioning what .post brings to the table. It’s a major cyber-resilient infrastructure. Within that infrastructure, we have a series of compliance measures that relate to our overall cyber security framework and we look to implement it as a secure digital identity, that .post domain. We also are looking to deploy, as I said, services, secure email, secure hosting and other secure services. If you’re in the sector and you’re looking for secure services, please do reach out to us to see how best we can work with you to secure your online transactions and services. As I mentioned, we have a shared services platform via Trust.post. That’s live today and you can check it out as you speak right now. We also have an offer for all posts in the small island developing states category, SIDS or least developed countries. If you scan this QR code right now that you can see on screen, you can provide us with some information and we may be able to assist you with a funding package to get you up and running in your journey in digital transformation securely. We also have a project called secure.post, which we are currently rolling out. Today we are right now live only with our check URL. 
If you go to secure.post today, you’ll see a facility where you can test any suspicious URL to see if it has been reported for scams or malware within global cyberspace, and you can also use that same platform to report a suspicious link, a potential phishing link, and so on. That’s available via the platform today. Coming soon, we’ll be rolling out an entire range of services on that platform: learning, testing, and also potentially directing you to our various partners within the secure.post framework. I will just quickly show you who they are. Today we’re running with a short list of these partners, who are some of the top alliances and institutions within the cybersecurity space globally. In addition to what we do on the secure.post platform, we are also about to implement something called an ISAC, an information sharing and analysis center. That ISAC essentially looks to provide a secure, trusted platform where posts and other stakeholders within the sector can confidentially and securely share information and collaborate to deal with the threat intelligence landscape. As was mentioned earlier by our colleague from Albania, that threat landscape is evolving on a daily basis, and we encourage all posts and their stakeholders, meaning the supply chain: vendors, academic institutions and, as I said, customs brokers, airlines and so on, to reach out to us and join us in this journey of building a global postal ISAC. I will show you very rapidly a link by which you can reach out to us: by completing the information on this form via the QR code, you can express interest in joining us on this journey to implement a postal sector ISAC. 
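Conceptually, a check-URL facility of the kind described here normalizes a submitted link and looks its host up against a list of reported scam and malware domains. The sketch below is only an illustration of that idea under stated assumptions; the domain names, matching rules and verdict strings are hypothetical, not the actual secure.post service:

```python
# Minimal sketch of a URL-reputation lookup: normalize the URL's host and
# check it, and every parent domain, against a local blocklist. The blocklist
# entries here are hypothetical examples.
from urllib.parse import urlsplit

REPORTED_DOMAINS = {"parcel-fee-refund.example", "track-your-package.example"}

def check_url(url: str) -> str:
    """Return 'reported' if the URL's host (or a parent domain) is blocklisted."""
    host = urlsplit(url if "//" in url else "//" + url).hostname or ""
    host = host.lower().rstrip(".")
    # Consider the host and each parent domain (sub.evil.example -> evil.example).
    parts = host.split(".")
    candidates = {".".join(parts[i:]) for i in range(len(parts))}
    return "reported" if candidates & REPORTED_DOMAINS else "not reported"
```

A real reputation service would of course query shared, continuously updated threat feeds rather than a static set, but the host normalization and parent-domain matching shown here are the core of the check.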
I realize time is short, and I apologize for being so rapid, but I’m going to stop here to ensure we have time for some questions, and maybe elaborate as the case may be. So thank you so much for listening to me, and let me hand back over to Mayssam so she can facilitate any questions or comments, either in the room or remotely. Thank you very much.


Mayssam Sabra: Thank you very much, Tracy, for your presentation. I believe the cyber resilience program is essential for postal organizations to build a resilient infrastructure. We have eight minutes left, so I’ll take a question from online, from Mutu Sami: In the process of digitizing, broadening and networking post offices and services, is the physical non-tech infrastructure of the post offices, and jobs, expanded to suit the technological expansion? Also, while postal services become digital, the post office can become a fallback hub in extraordinary situations of disruption in digital infrastructure. Has the design of post modernization considered these possibilities and needs? Who would like to answer the question?


Kevin Hernandez: That question had two parts. I guess I can go with the first part. From my understanding, the question was trying to get at whether the digitalization of the post replaces jobs in some way, and I would say: not really. Actually, what’s happening is that the post is now offering more services. But the people who work at post offices need to be upskilled, because in the past they might not have been working on so many digital platforms at once, or they might not have been working on any digital platform at all. So they need to be upskilled not just on how to use the digital technology, which is one thing, but also on how to use these specific platforms, and then also on how to do so in a secure way. So you need basic digital literacy training, and then you’re also going to need digital training on the specific platforms that are used for each type of service, because it might be the case that the e-commerce platform is different from the digital government platform, which is different from the digital financial service platform. So they need to learn how to use all of them. And then on top of that, they need cyber hygiene training to ensure that they’re not opening themselves up to cyber attacks. Just one last thing: we’re trying to really position the post as a place where you can access digital services with a human touch, because I think that’s the key. That’s the role the postal sector can play: it’s a place where you can get help accessing a digital service. That’s the unique value proposition of the post. So it is key; we need postal staff.


Nigel Cassimire: Yes, just to supplement what Kevin was saying. In fact, I understood the question a little bit differently, and I see the questioner did put an additional comment: his implication was that it had the opportunity to create more jobs, in addition to upskilling and so on. The other part of his question related to use of the postal infrastructure as a kind of disaster fallback, and it is something that could possibly be considered.


Mayssam Sabra: Thank you, Nigel and Kevin. I’ll take now a question from the audience. Please go ahead. Am I audible? Yes.


Ihita Gangavarapu: Okay, perfect. Hi everyone. Thank you so much for your insights. I’m Ihita, representing the Youth IGF India, and I work in the space of external threat monitoring. When you talk about logistics and posts, that’s an industry that we come across, but not so often, in terms of catering to our clientele. So my question is this: we track a lot of these threats, which could be phishing kits that are impersonating the postal services, or leaked credentials, or, as you mentioned, compromised third-party vendors. These are all external threats. But usually what we see in the industry is that the focus is on internal threats, on solutions that are monitoring all the internal possibilities of vulnerabilities. So I just want to understand, and this could be directed to Tracy because you mentioned the postal ISAC: how is it positioned to shift that balance and bring more attention to external risks and external threats, especially in regions which have very limited visibility or resources to focus on external threats in addition to the internal threats that are there in the space?


Tracy Hackshaw: Thank you very much, very good question. That is exactly what the ISAC is trying to do. I'm not sure I conveyed the message clearly, but it is focused on collaboration between all of the stakeholders in the sector. To give an example, if we are onboarding the post offices, let's say all 192 postal operators, the idea is that we will also onboard the supply chain that serves them, and it is a very diverse and extensive supply chain, right up to the delivery partners, airlines, and shipping companies. All of those external entities are risk factors to the entire sector, because they are not only potential targets for cyber attackers but, as you mentioned, the software that runs this environment can be shared; you are sending messages between parties. Just to share an example, and my colleague from the post may want to elaborate: when you scan a barcode, that message goes everywhere. When you are tracking and tracing, it travels through a network, and at every point in that network there is a potential failure and a potential avenue for attack, literally speaking. So the ISAC is designed to identify those stakeholders and bring them together, I won't say for the first time, but certainly in a way that allows information to be shared and collaboration to begin, so that risk is not seen only as internal; external risks are also identified and dealt with from that standpoint. I hope this will be the first major time that happens effectively in the sector. Thank you.


Mayssam Sabra: Okay, thank you very much. With this, we come to the end of our session. It was short, but I hope it was inspiring and insightful for all of you. I would like to thank my panelists for being with us today, and I would like to thank the online participants. If you would like to continue the conversation or discuss further, please visit us at the UPU secure.post booth. We will be happy to continue the conversation. Thank you once again for your participation. Thank you.



Kevin Hernandez

Speech speed

162 words per minute

Speech length

1860 words

Speech time

688 seconds

Posts are offering extensive digital services beyond traditional postal operations, with 71% promoting economic inclusion through e-commerce and 58% offering digital financial services

Explanation

Kevin Hernandez presented findings from a UPU Digital Panorama Report survey of 52 countries showing that postal services have expanded far beyond traditional mail delivery. Posts are now serving as multi-sector digital service providers, offering e-commerce, financial services, e-government services, and digital health services, with many becoming one-stop shops for digital inclusion.


Evidence

Survey data showing 71% of posts promote economic inclusion through e-commerce services, 58% offer digital financial services, 51% provide e-government services, 11% offer digital health services, and 70% provide digital connectivity solutions. 34% of posts show signs of becoming one-stop shops by providing all three main service categories.


Major discussion point

Digital transformation of postal services


Topics

Development | Economic | Infrastructure


Agreed with

– Nigel Cassimire
– Floreta Faber

Agreed on

Postal services are expanding beyond traditional mail to become multi-sector digital service providers


Current cyber hygiene practices in the postal sector need significant improvement, with only basic practices like secure websites implemented by two-thirds of posts

Explanation

The survey revealed suboptimal implementation rates across all cybersecurity best practices in the postal sector. Only secure websites were implemented by at least two-thirds of posts, while other critical practices like cybersecurity training were implemented by less than half of postal operators.


Evidence

Survey findings showing secure websites implemented by two-thirds of posts, secure staff emails and business continuity plans by at least half, cybersecurity training by less than half, and only around 40% having incident response and crisis management plans.


Major discussion point

Cybersecurity preparedness gaps


Topics

Cybersecurity | Infrastructure


Developing regions, particularly Latin America and the Caribbean, Asia Pacific, and Africa, show the lowest implementation rates of cybersecurity best practices

Explanation

The survey identified significant regional disparities in cybersecurity implementation, with developing regions consistently showing lower adoption rates of cyber hygiene best practices. This creates particular vulnerabilities in regions that may already face resource constraints.


Evidence

Survey data showing regional differences with Latin America and the Caribbean, Asia and the Pacific, and Africa regions being least likely to implement cyber hygiene best practices across all measured categories.


Major discussion point

Regional cybersecurity disparities


Topics

Cybersecurity | Development


Cybersecurity budgets are not keeping pace with increased workloads, with less than half of posts increasing budget allocations despite 70% experiencing higher cybersecurity demands

Explanation

There is a significant mismatch between the growing cybersecurity challenges faced by postal operators and their financial commitment to addressing these challenges. This budget-workload gap is particularly pronounced in developing regions, creating sustainability concerns for cybersecurity efforts.


Evidence

Survey data showing around 70% of posts experienced increased cybersecurity workload in the last two years, but less than half increased their cybersecurity budget allocations, with posts from developing regions being least likely to increase budgets.


Major discussion point

Resource allocation challenges


Topics

Cybersecurity | Development | Economic


Digitalization requires upskilling postal staff in digital literacy, platform-specific training, and cyber hygiene practices rather than replacing jobs

Explanation

Kevin argued that digital transformation of postal services creates opportunities for human-assisted digital service delivery rather than eliminating jobs. However, this requires comprehensive training programs to ensure staff can effectively and securely operate multiple digital platforms while maintaining the human touch that differentiates postal services.


Evidence

Explanation that posts now offer services through multiple channels including digitally equipped post office counters with staff assistance, and that staff need training on basic digital literacy, specific platforms for different services, and cyber hygiene practices.


Major discussion point

Workforce transformation needs


Topics

Economic | Development | Cybersecurity


Agreed with

– Floreta Faber

Agreed on

Human factors and training are crucial components of postal cybersecurity


Disagreed with

– Floreta Faber

Disagreed on

Approach to addressing human factors in cybersecurity


Posts can serve as human-touch access points for digital services, particularly valuable for less connected users in rural areas

Explanation

Kevin emphasized the unique value proposition of postal services in digital inclusion, positioning them as places where citizens can access digital services with human assistance. This is especially important for rural populations and those lacking digital skills, devices, or internet access.


Evidence

Reference to over 650,000 post offices worldwide, majority in rural areas, serving populations most at risk of being left behind digitally. Posts offer services through staff-assisted counters and mobile delivery staff equipped with digital devices.


Major discussion point

Digital inclusion through postal services


Topics

Development | Infrastructure | Sociocultural



Nigel Cassimire

Speech speed

116 words per minute

Speech length

907 words

Speech time

466 seconds

Digital transformation in Caribbean postal services is not very advanced, contributing to lower cybersecurity implementation rates in the region

Explanation

Nigel explained that the Caribbean region’s postal digital transformation is happening within a broader context of government digitalization efforts, but progress has been limited. Traditional mail services are declining while courier services are growing, creating both challenges and opportunities for modernization.


Evidence

Description of Caribbean governments pursuing e-government services and e-commerce facilitation, traditional mail services declining while courier services increase, and competition from private delivery companies.


Major discussion point

Regional digital transformation challenges


Topics

Development | Infrastructure | Economic


Agreed with

– Kevin Hernandez
– Floreta Faber

Agreed on

Postal services are expanding beyond traditional mail to become multi-sector digital service providers


The Caribbean Telecommunications Union signed an MOU with UPU in 2023 to promote digital transformation and cybersecurity in Caribbean postal services

Explanation

Following discussions at the 2022 ITU Plenipotentiary Conference, the CTU and UPU formalized a partnership to enhance digital transformation quality in Caribbean postal services. The collaboration focuses on comprehensive assessments, secure domain adoption, and implementation of UPU digital initiatives.


Evidence

MOU signed in the first half of 2023 between the CTU Secretary General and the UPU Director General, focusing on digital readiness assessments, .post domain adoption, and ConnectPost initiative implementation. Specific engagements with at least three member states and liaison with the Caribbean Postal Union.


Major discussion point

International cooperation for postal cybersecurity


Topics

Cybersecurity | Development | Infrastructure


Agreed with

– Mats Lillesund
– Tracy Hackshaw
– Mayssam Sabra

Agreed on

International collaboration and partnerships are essential for strengthening postal cybersecurity



Floreta Faber

Speech speed

121 words per minute

Speech length

1265 words

Speech time

626 seconds

Albania experienced significant state-sponsored cyber attacks in 2022 affecting over 1,200 e-government services, leading to major cybersecurity reforms

Explanation

Floreta described how Albania, with over 95% of government services delivered online, faced a major state-sponsored cyber attack in mid-2022 that targeted all e-government services. This attack prompted comprehensive cybersecurity reforms, including new legislation aligned with the EU NIS Directive and sector-specific approaches.


Evidence

Albania has over 1,200 e-services, and more than 95% of government services are delivered online. The 2022 attack led to a new cybersecurity law, adopted in May of the previous year in line with the EU NIS Directive, along with multiple sub-laws based on European best practices.


Major discussion point

National cybersecurity crisis response


Topics

Cybersecurity | Legal and regulatory


Agreed with

– Kevin Hernandez
– Nigel Cassimire

Agreed on

Postal services are expanding beyond traditional mail to become multi-sector digital service providers


Over 15% of cyber attacks in Albania target the postal system, primarily through domain impersonation and phishing campaigns exploiting public trust

Explanation

Floreta presented data showing that postal services represent a significant target for cyber attackers in Albania, with over 15% of attacks attempting to exploit the trusted postal brand. These attacks focus on impersonation campaigns and fraudulent messaging that can erode public trust in essential services.


Evidence

Out of 88 attacks in the previous year, over 15% came through the postal system, with only three successful cases. In 2024, almost all attacks in the transportation group came through the postal system. From January to March 2025, over 6,500 indicators of compromise were found, with over 15% linked to the postal office.


Major discussion point

Postal services as cyber attack targets


Topics

Cybersecurity | Critical infrastructure


Agreed with

– Mayssam Sabra
– Mats Lillesund
– Tracy Hackshaw

Agreed on

Cyber threats targeting postal services are a global phenomenon affecting both developed and developing countries


Albania has implemented joint incident response simulations and real-time monitoring partnerships between the National Cybersecurity Authority and Albanian Post Office

Explanation

In response to growing threats, Albania developed a proactive, strategic approach including targeted cybersecurity training, early detection systems, and collaborative incident response exercises. The approach integrates cybersecurity requirements into digital modernization planning and includes 24/7 monitoring of critical institutions.


Evidence

Implementation of targeted cybersecurity training, early detection systems, real-time monitoring of threat indicators, joint incident response simulations, and a National CERT covering 15 institutions, including the Postal Office, with 24/7 monitoring. Over 6,000 people trained on cyber hygiene last year, including all postal office employees.


Major discussion point

Proactive cybersecurity collaboration


Topics

Cybersecurity | Capacity development


The human layer remains the weakest link in cybersecurity, requiring both technical hardening and awareness raising among citizens and postal employees

Explanation

Floreta emphasized that the prevalence of smishing-based attacks in 2025 demonstrates that human factors continue to be the primary vulnerability in cybersecurity. This requires a dual approach of technical security measures combined with comprehensive awareness and training programs for both employees and the general public.


Evidence

Nearly all attacks in 2025 were smishing-based, indicating human vulnerability. Over 6,000 people trained on cyber hygiene in the previous year, with specific focus on postal office employees. Increased public demand for cybersecurity training due to awareness of successful attacks.


Major discussion point

Human factors in cybersecurity


Topics

Cybersecurity | Capacity development | Sociocultural


Agreed with

– Kevin Hernandez

Agreed on

Human factors and training are crucial components of postal cybersecurity


Disagreed with

– Kevin Hernandez

Disagreed on

Approach to addressing human factors in cybersecurity



Mats Lillesund

Speech speed

97 words per minute

Speech length

429 words

Speech time

265 seconds

Norway faces similar global threat challenges including fraud campaigns using postal logos and more aggressive computer attacks

Explanation

Mats explained that despite Norway’s advanced cybersecurity posture, Norwegian Post faces the same global threat landscape as other major organizations. These include both fraud targeting citizens through postal brand impersonation and more sophisticated technical attacks, requiring comprehensive defensive measures.


Evidence

People in Norway are targeted for fraud using the Posten logo and visual components in phishing campaigns, alongside more aggressive computer attacks and attack vectors. PostenBring serves the Nordic countries with 14,000 employees and has the largest market share in Norway.


Major discussion point

Global nature of postal cyber threats


Topics

Cybersecurity | Cybercrime


Agreed with

– Mayssam Sabra
– Floreta Faber
– Tracy Hackshaw

Agreed on

Cyber threats targeting postal services are a global phenomenon affecting both developed and developing countries


Norway has developed a culture of openness regarding security incidents, with sector-specific cooperation models like Financial CERT demonstrating effective collaboration

Explanation

Mats highlighted Norway’s approach of transparency about cybersecurity incidents, with companies and governments openly discussing attacks in media and behind-the-scenes expert channels. The Financial CERT model, which has expanded from financial sector to Nordic countries, exemplifies how sector-specific collaboration can leverage shared knowledge and capabilities.


Evidence

Norway has a very open society, with openness about security incidents both in the media and in behind-the-scenes expert discussions. Financial CERT grew from the financial sector in Norway to the Nordic countries, enabling banks and insurance companies to leverage each other's capacities and knowledge on cyber issues.


Major discussion point

Collaborative cybersecurity culture


Topics

Cybersecurity | Economic


Agreed with

– Nigel Cassimire
– Tracy Hackshaw
– Mayssam Sabra

Agreed on

International collaboration and partnerships are essential for strengthening postal cybersecurity



Tracy Hackshaw

Speech speed

151 words per minute

Speech length

1288 words

Speech time

510 seconds

The .post domain initiative provides secure digital identity and services for postal operators, with special funding packages available for small island developing states and least developed countries

Explanation

Tracy described the UPU’s .post initiative as a comprehensive cyber-resilient infrastructure that utilizes the domain name system to secure the network edge for postal sector participants. The initiative includes secure hosting, email services, and digital identity solutions, with targeted support for resource-constrained countries.


Evidence

Trust.post platform is live and available to postal operators and sector participants including technology providers, envelope/package manufacturers, customs, and airlines. QR code available for SIDS and least developed countries to access funding packages for digital transformation.


Major discussion point

Secure digital infrastructure for postal sector


Topics

Cybersecurity | Infrastructure | Development


The secure.post platform offers URL checking services for suspicious links and will expand to include comprehensive cybersecurity testing and learning resources

Explanation

Tracy presented the secure.post platform as an operational cybersecurity tool that currently provides URL verification services for detecting scams and malware, with plans to expand into a comprehensive cybersecurity resource hub. The platform enables both checking suspicious links and reporting potential threats.


Evidence

Secure.post is live today with check URL facility for testing suspicious URLs and reporting scams/malware. Platform will expand to include learning, testing, and partner services within the Secure.post framework with top global cybersecurity alliances and institutions.


Major discussion point

Operational cybersecurity tools


Topics

Cybersecurity | Network security


A postal sector Information Sharing and Analysis Center (ISAC) is being developed to enable secure collaboration and threat intelligence sharing among posts and stakeholders

Explanation

Tracy outlined plans for a global postal ISAC that would provide a secure, trusted platform for confidential information sharing and collaboration on threat intelligence. The ISAC aims to include not just postal operators but the entire supply chain ecosystem including customs, airlines, and technology vendors.


Evidence

ISAC designed to onboard all 192 postal operators plus supply chain including delivery partners, airlines, shipping companies. Platform addresses risk factors throughout the network where barcode scanning and tracking messages create multiple potential failure and attack points.


Major discussion point

Sector-wide threat intelligence sharing


Topics

Cybersecurity | Infrastructure


Agreed with

– Nigel Cassimire
– Mats Lillesund
– Mayssam Sabra

Agreed on

International collaboration and partnerships are essential for strengthening postal cybersecurity


The postal supply chain involves diverse stakeholders including airlines, shipping companies, and delivery partners, all representing potential risk factors requiring collaborative security approaches

Explanation

In response to a question about external threats, Tracy emphasized that the postal sector’s extensive and diverse supply chain creates multiple potential attack vectors. The interconnected nature of postal operations, where tracking messages flow through networks touching multiple parties, requires a collaborative approach to security that extends beyond individual postal operators.


Evidence

When scanning a barcode, messages go through extensive networks with potential failure points at every stage. Supply chain includes delivery partners, airlines, shipping companies, and software systems that can be shared between parties, creating external risk factors.


Major discussion point

Supply chain cybersecurity risks


Topics

Cybersecurity | Economic | Infrastructure


Agreed with

– Mayssam Sabra
– Mats Lillesund
– Floreta Faber

Agreed on

Cyber threats targeting postal services are a global phenomenon affecting both developed and developing countries



Ihita Gangavarapu

Speech speed

188 words per minute

Speech length

204 words

Speech time

65 seconds

External threats including phishing kits impersonating postal services and compromised third-party vendors require attention beyond internal security measures

Explanation

Ihita raised the important point that while most cybersecurity focus in organizations tends to be on internal threats and vulnerabilities, the postal and logistics sector faces significant external threats that require dedicated attention. She questioned how the industry, particularly in resource-limited regions, can shift focus to address external risks alongside internal security measures.


Evidence

Reference to tracking external threats including phishing kits impersonating postal services, leaked credentials, and compromised third-party vendors. Observation that industry focus is usually on internal threats and solutions monitoring internal vulnerabilities.


Major discussion point

External vs internal threat focus


Topics

Cybersecurity | Network security



Mayssam Sabra

Speech speed

106 words per minute

Speech length

1244 words

Speech time

700 seconds

The Global Postal Network plays a critical role in facilitating trade, communication and economic development but faces increasing cyber threats that can disrupt operations and compromise sensitive data

Explanation

Mayssam emphasized that as postal services rely more on digital technologies, they face growing cyber threats including ransomware, phishing attacks, data breaches, and supply chain attacks. These threats not only disrupt operations but can also damage reputation and erode trust with consumers and businesses.


Evidence

Mentioned specific cyber threats: ransomware, phishing attacks, data breaches, supply chain attacks that can disrupt operations, compromise sensitive data, damage reputation, and erode trust within consumers and businesses


Major discussion point

Cyber threats to postal infrastructure


Topics

Cybersecurity | Infrastructure | Economic


There is an urgent need for enhanced cyber resilience in postal services, particularly in certain regions like the Caribbean where cybersecurity implementation rates are low

Explanation

Based on Kevin’s presentation findings, Mayssam highlighted that the research shows low rates of cybersecurity implementation in some posts, particularly in the Caribbean region. This demonstrates the urgent need to strengthen cyber resilience across different geographical areas.


Evidence

Reference to Kevin’s presentation showing low rates of cybersecurity implementation in some posts in the Caribbean region


Major discussion point

Regional cybersecurity gaps


Topics

Cybersecurity | Development


Cyber threats are not limited to developing countries but also target large organizations and posts in developed nations, requiring global attention and collaboration

Explanation

Mayssam pointed out that while people might think cyber threats mainly affect developing countries, the reality is that they also target big organizations in developed countries. She mentioned that the UPU has been informed of several cyber attacks targeting major postal services in large countries.


Evidence

UPU has been informed of several cyber attacks that targeted big posts in big countries, and opening session speeches emphasized collaboration


Major discussion point

Global nature of postal cyber threats


Topics

Cybersecurity | Infrastructure


Agreed with

– Nigel Cassimire
– Mats Lillesund
– Tracy Hackshaw

Agreed on

International collaboration and partnerships are essential for strengthening postal cybersecurity


Agreements

Agreement points

Postal services are expanding beyond traditional mail to become multi-sector digital service providers

Speakers

– Kevin Hernandez
– Nigel Cassimire
– Floreta Faber

Arguments

Posts are offering extensive digital services beyond traditional postal operations, with 71% promoting economic inclusion through e-commerce and 58% offering digital financial services


Digital transformation in Caribbean postal services is not very advanced, contributing to lower cybersecurity implementation rates in the region


Albania experienced significant state-sponsored cyber attacks in 2022 affecting over 1,200 e-government services, leading to major cybersecurity reforms


Summary

All speakers acknowledge that postal services are transforming from traditional mail delivery to comprehensive digital service providers offering e-commerce, financial services, and e-government services, though at different stages of development across regions


Topics

Development | Infrastructure | Economic


Cyber threats targeting postal services are a global phenomenon affecting both developed and developing countries

Speakers

– Mayssam Sabra
– Mats Lillesund
– Floreta Faber
– Tracy Hackshaw

Arguments

Cyber threats are not limited to developing countries but also target large organizations and posts in developed nations, requiring global attention and collaboration


Norway faces similar global threat challenges including fraud campaigns using postal logos and more aggressive computer attacks


Over 15% of cyber attacks in Albania target the postal system, primarily through domain impersonation and phishing campaigns exploiting public trust


The postal supply chain involves diverse stakeholders including airlines, shipping companies, and delivery partners, all representing potential risk factors requiring collaborative security approaches


Summary

All speakers recognize that cyber threats against postal services are universal, affecting organizations regardless of their development level or geographic location, with attackers commonly exploiting trusted postal brands


Topics

Cybersecurity | Infrastructure | Cybercrime


International collaboration and partnerships are essential for strengthening postal cybersecurity

Speakers

– Nigel Cassimire
– Mats Lillesund
– Tracy Hackshaw
– Mayssam Sabra

Arguments

The Caribbean Telecommunications Union signed an MOU with UPU in 2023 to promote digital transformation and cybersecurity in Caribbean postal services


Norway has developed a culture of openness regarding security incidents, with sector-specific cooperation models like Financial CERT demonstrating effective collaboration


A postal sector Information Sharing and Analysis Center (ISAC) is being developed to enable secure collaboration and threat intelligence sharing among posts and stakeholders


Cyber threats are not limited to developing countries but also target large organizations and posts in developed nations, requiring global attention and collaboration


Summary

All speakers emphasize the critical importance of collaborative approaches, whether through formal MOUs, sector-specific cooperation models, or information sharing platforms, to address cybersecurity challenges effectively


Topics

Cybersecurity | Development | Infrastructure


Human factors and training are crucial components of postal cybersecurity

Speakers

– Kevin Hernandez
– Floreta Faber

Arguments

Digitalization requires upskilling postal staff in digital literacy, platform-specific training, and cyber hygiene practices rather than replacing jobs


The human layer remains the weakest link in cybersecurity, requiring both technical hardening and awareness raising among citizens and postal employees


Summary

Both speakers agree that addressing human vulnerabilities through comprehensive training and awareness programs is essential for postal cybersecurity, requiring investment in staff development and public education


Topics

Cybersecurity | Capacity development | Sociocultural


Similar viewpoints

Both speakers view postal services as critical infrastructure for digital inclusion, particularly for underserved populations, and emphasize the need for secure, accessible digital solutions tailored to resource-constrained environments

Speakers

– Kevin Hernandez
– Tracy Hackshaw

Arguments

Posts can serve as human-touch access points for digital services, particularly valuable for less connected users in rural areas


The .post domain initiative provides secure digital identity and services for postal operators, with special funding packages available for small island developing states and least developed countries


Topics

Development | Infrastructure | Cybersecurity


Both speakers identify significant regional disparities in cybersecurity preparedness, with developing regions facing the greatest challenges and requiring targeted support and intervention

Speakers

– Kevin Hernandez
– Mayssam Sabra

Arguments

Developing regions, particularly Latin America and the Caribbean, Asia Pacific, and Africa, show the lowest implementation rates of cybersecurity best practices


There is an urgent need for enhanced cyber resilience in postal services, particularly in certain regions like the Caribbean where cybersecurity implementation rates are low


Topics

Cybersecurity | Development


Both speakers advocate for proactive, collaborative approaches to cybersecurity that involve real-time monitoring, information sharing, and coordinated response mechanisms between postal operators and cybersecurity authorities

Speakers

– Floreta Faber
– Tracy Hackshaw

Arguments

Albania has implemented joint incident response simulations and real-time monitoring partnerships between the National Cybersecurity Authority and Albanian Post Office


A postal sector Information Sharing and Analysis Center (ISAC) is being developed to enable secure collaboration and threat intelligence sharing among posts and stakeholders


Topics

Cybersecurity | Infrastructure


Unexpected consensus

Postal services as digital inclusion facilitators rather than traditional mail providers

Speakers

– Kevin Hernandez
– Nigel Cassimire
– Tracy Hackshaw

Arguments

Posts can serve as human-touch access points for digital services, particularly valuable for less connected users in rural areas


Digital transformation in Caribbean postal services is not very advanced, contributing to lower cybersecurity implementation rates in the region


The .post domain initiative provides secure digital identity and services for postal operators, with special funding packages available for small island developing states and least developed countries


Explanation

There was unexpected consensus that postal services should be viewed primarily as digital inclusion facilitators rather than traditional mail providers. This represents a fundamental shift in how postal services are conceptualized, with all speakers agreeing on their potential role in bridging digital divides and serving as trusted intermediaries for digital services delivery


Topics

Development | Infrastructure | Sociocultural


Budget-workload mismatch as a critical systemic issue

Speakers

– Kevin Hernandez
– Floreta Faber

Arguments

Cybersecurity budgets are not keeping pace with increased workloads, with less than half of posts increasing budget allocations despite 70% experiencing higher cybersecurity demands


Albania has implemented joint incident response simulations and real-time monitoring partnerships between the National Cybersecurity Authority and Albanian Post Office


Explanation

There was unexpected consensus on the critical nature of the budget-workload mismatch in postal cybersecurity. While this might seem like an obvious operational challenge, the speakers’ agreement on its systemic importance and the need for strategic resource allocation represents a sophisticated understanding of cybersecurity as requiring sustained investment rather than ad-hoc responses


Topics

Cybersecurity | Economic | Development


Overall assessment

Summary

The speakers demonstrated strong consensus on the fundamental transformation of postal services from traditional mail providers to comprehensive digital service platforms, the global nature of cyber threats, the critical importance of international collaboration, and the need for human-centered approaches to cybersecurity. There was also agreement on regional disparities in cybersecurity preparedness and the importance of proactive, collaborative security measures.


Consensus level

High level of consensus with significant implications for the postal sector. The agreement suggests a shared understanding of the sector’s evolution and challenges, which could facilitate coordinated global action. The consensus on digital inclusion roles, collaborative security approaches, and the need for capacity building provides a strong foundation for developing unified strategies and standards across the global postal network. This alignment is particularly significant given the diverse geographic and developmental contexts represented by the speakers.


Differences

Different viewpoints

Approach to addressing human factors in cybersecurity

Speakers

– Floreta Faber
– Kevin Hernandez

Arguments

The human layer remains the weakest link in cybersecurity, requiring both technical hardening and awareness raising among citizens and postal employees


Digitalization requires upskilling postal staff in digital literacy, platform-specific training, and cyber hygiene practices rather than replacing jobs


Summary

Floreta emphasizes that humans are the weakest cybersecurity link requiring broad awareness campaigns for both employees and citizens, while Kevin focuses specifically on upskilling postal staff for digital service delivery without addressing broader public awareness needs


Topics

Cybersecurity | Capacity development


Unexpected differences

Scope of stakeholder inclusion in cybersecurity initiatives

Speakers

– Tracy Hackshaw
– Ihita Gangavarapu

Arguments

The postal supply chain involves diverse stakeholders including airlines, shipping companies, and delivery partners, all representing potential risk factors requiring collaborative security approaches


External threats including phishing kits impersonating postal services and compromised third-party vendors require attention beyond internal security measures


Explanation

While both recognize external threats, Tracy focuses on including supply chain partners in collaborative security frameworks, while Ihita questions whether the focus should shift from internal to external threat monitoring, representing different philosophical approaches to threat management


Topics

Cybersecurity | Infrastructure


Overall assessment

Summary

The discussion showed remarkable consensus on the fundamental challenges facing postal cybersecurity, with disagreements primarily centered on implementation approaches rather than core problems. Speakers agreed on the need for improved cybersecurity, the importance of collaboration, and the global nature of threats, but differed on specific methodologies and scope of solutions.


Disagreement level

Low to moderate disagreement level with high strategic alignment. The disagreements were constructive and complementary rather than conflicting, suggesting that different approaches could be implemented simultaneously or in different contexts. This consensus-building discussion indicates strong potential for coordinated action in postal cybersecurity, with various stakeholders contributing different but compatible solutions to address the sector’s cybersecurity challenges.


Partial agreements

Similar viewpoints

Both speakers view postal services as critical infrastructure for digital inclusion, particularly for underserved populations, and emphasize the need for secure, accessible digital solutions tailored to resource-constrained environments

Speakers

– Kevin Hernandez
– Tracy Hackshaw

Arguments

Posts can serve as human-touch access points for digital services, particularly valuable for less connected users in rural areas


The .post domain initiative provides secure digital identity and services for postal operators, with special funding packages available for small island developing states and least developed countries


Topics

Development | Infrastructure | Cybersecurity


Both speakers identify significant regional disparities in cybersecurity preparedness, with developing regions facing the greatest challenges and requiring targeted support and intervention

Speakers

– Kevin Hernandez
– Mayssam Sabra

Arguments

Developing regions, particularly Latin America and the Caribbean, Asia Pacific, and Africa, show the lowest implementation rates of cybersecurity best practices


There is an urgent need for enhanced cyber resilience in postal services, particularly in certain regions like the Caribbean where cybersecurity implementation rates are low


Topics

Cybersecurity | Development


Both speakers advocate for proactive, collaborative approaches to cybersecurity that involve real-time monitoring, information sharing, and coordinated response mechanisms between postal operators and cybersecurity authorities

Speakers

– Floreta Faber
– Tracy Hackshaw

Arguments

Albania has implemented joint incident response simulations and real-time monitoring partnerships between the National Cybersecurity Authority and Albanian Post Office


A postal sector Information Sharing and Analysis Center (ISAC) is being developed to enable secure collaboration and threat intelligence sharing among posts and stakeholders


Topics

Cybersecurity | Infrastructure


Takeaways

Key takeaways

The postal sector faces significant cybersecurity challenges: even basic practices, such as secure websites, are implemented by only two-thirds of posts globally


Developing regions (Latin America/Caribbean, Asia Pacific, Africa) show the lowest cybersecurity implementation rates and are most vulnerable to attacks


Posts are rapidly expanding digital services beyond traditional mail, offering e-commerce, digital financial services, and e-government services, creating new attack vectors


Cybersecurity budgets are not keeping pace with increased workloads – less than half of posts increased budgets despite 70% experiencing higher cybersecurity demands


Human factors remain the weakest link in cybersecurity, requiring both technical solutions and comprehensive awareness training for staff and citizens


Collaboration between postal operators, national authorities, and international organizations is essential for effective cybersecurity resilience


Posts serve as critical infrastructure and trusted community hubs, making them attractive targets for cybercriminals seeking to exploit public trust


Resolutions and action items

UPU to continue rolling out the .post domain initiative to provide secure digital identity for postal operators


Implementation of the postal sector Information Sharing and Analysis Center (ISAC) to enable secure threat intelligence sharing


Expansion of the secure.post platform to include comprehensive cybersecurity testing and learning resources


Caribbean Telecommunications Union to continue implementing UPU digital readiness assessments in member states through their 2023 MOU


Albania to continue joint incident response simulations and real-time monitoring partnerships between national cybersecurity authority and postal services


UPU to provide special funding packages for small island developing states and least developed countries to support secure digital transformation


Continued upskilling of postal staff in digital literacy, platform-specific training, and cyber hygiene practices


Unresolved issues

How to effectively balance internal versus external threat monitoring with limited resources in developing regions


Specific mechanisms for scaling cybersecurity budget allocations to match increasing workloads across all postal operators


Detailed implementation timelines for the postal ISAC and how to ensure participation from diverse stakeholders across the supply chain


Standardization of cybersecurity practices across different regional postal unions and national postal operators


Integration of physical infrastructure resilience with digital transformation initiatives


Specific metrics and benchmarks for measuring cybersecurity improvement across the global postal network


Suggested compromises

Posts positioned as human-touch access points for digital services rather than fully automated systems, balancing efficiency with accessibility for less connected users


Phased implementation of cybersecurity measures starting with basic practices before advancing to more sophisticated solutions


Shared responsibility model where UPU provides frameworks and tools while national authorities and postal operators adapt them to local contexts


Collaborative funding approaches combining international support with national budget allocations for cybersecurity improvements


Thought provoking comments

Posts are offering many more digital services than we were even expecting… 71% of posts are promoting economic inclusion for SMEs through e-commerce services, 58% are promoting financial inclusion through digital financial services, 51% are promoting social inclusion through e-government services… More than a third (34%) of posts show signs of becoming a one-stop shop for economic, financial, social, and digital inclusion.

Speaker

Kevin Hernandez


Reason

This comment fundamentally reframed the discussion by revealing that postal services have evolved far beyond traditional mail delivery into comprehensive digital service hubs. It challenged the conventional understanding of what postal services do and highlighted their critical role in digital inclusion, especially for underserved populations.


Impact

This insight set the foundation for the entire discussion by establishing the stakes – if posts are now critical digital infrastructure serving multiple sectors, their cybersecurity becomes exponentially more important. It shifted the conversation from viewing postal cybersecurity as a niche concern to recognizing it as a matter of national digital infrastructure security.


As posts begin to offer more and more digital services, they become an even more critical infrastructure that must be secured, because they are now holding more sensitive data about customers and citizens across multiple sectors… These disruptions would disproportionately impact people in rural areas and the elderly who rely on the post for digital services the most.

Speaker

Kevin Hernandez


Reason

This comment was particularly insightful because it connected cybersecurity vulnerabilities to social equity issues. It demonstrated that postal cybersecurity isn’t just a technical problem but a social justice issue, as attacks would disproportionately harm the most vulnerable populations.


Impact

This observation elevated the urgency of the discussion and provided a compelling rationale for why postal cybersecurity deserves priority attention and resources. It helped other panelists frame their responses around the human impact of cyber threats.


Only 35% of posts were affiliated with the National Information Security Incident Response Team… cybersecurity budgets of posts are not keeping up with their cybersecurity workloads… less than half of posts reported that they increased their cybersecurity budget allocations.

Speaker

Kevin Hernandez


Reason

This stark revelation exposed a critical gap between the expanding digital responsibilities of postal services and their cybersecurity preparedness. It highlighted systemic underinvestment and lack of integration with national cybersecurity frameworks.


Impact

This data point created a sense of urgency that permeated the rest of the discussion. It prompted other panelists to share specific examples of how their regions/countries were addressing these gaps, turning the conversation toward concrete solutions and partnerships.


Out of 88 attacks we had last year, over 15% were through the postal system… The impersonation of postal brand and the misuse of digital channels to spread fraudulent messages can erode confidence in public services and amplify the risk of financial and identity-related crimes.

Speaker

Floreta Faber


Reason

This comment provided concrete evidence of the threat landscape with specific statistics, demonstrating that postal services are actively being targeted. More importantly, it highlighted how attacks on postal services undermine broader public trust in government institutions.


Impact

This real-world data validated Kevin’s theoretical framework with actual attack statistics, lending credibility to the urgency of the issue. It also introduced the concept that postal cybersecurity is tied to institutional trust and democratic governance, broadening the discussion’s scope.


The human layer is still the weakest link in cybersecurity attacks. That’s why our efforts must include not just the technical hardening, but also awareness raising among citizens and postal employees.

Speaker

Floreta Faber


Reason

This insight shifted focus from purely technical solutions to the human element of cybersecurity. It recognized that even the best technical defenses can be undermined by human error or lack of awareness, particularly relevant given posts’ multi-channel service delivery involving staff interaction.


Impact

This comment redirected the conversation toward training and capacity building as essential components of postal cybersecurity. It influenced subsequent speakers to discuss collaboration and knowledge sharing as key strategies.


We’re trying to really position the post as a place where you can access digital services with a human touch… That’s the unique value proposition of the post… it’s a place where you can get help accessing a digital service.

Speaker

Kevin Hernandez


Reason

This comment articulated a clear vision for the future role of postal services in the digital economy, emphasizing their unique position as trusted intermediaries that can bridge the digital divide through human-assisted digital service delivery.


Impact

This vision statement helped crystallize the discussion around why postal cybersecurity matters strategically. It provided a framework for understanding posts not as declining institutions but as evolving digital inclusion platforms that require protection.


Overall assessment

These key comments fundamentally transformed what could have been a narrow technical discussion about postal cybersecurity into a comprehensive examination of digital equity, institutional trust, and national infrastructure security. Kevin Hernandez’s opening presentation established the surprising scope of postal digital transformation and its implications, creating a foundation that elevated the entire discussion. Floreta Faber’s concrete examples from Albania provided real-world validation and introduced the critical connection between cybersecurity and public trust. Together, these insights shifted the conversation from viewing postal cybersecurity as a sector-specific concern to recognizing it as a cross-cutting issue affecting social inclusion, democratic governance, and national resilience. The comments created a compelling narrative arc: posts have become critical digital infrastructure (Kevin), they are actively under attack (Floreta), and protecting them requires both technical and human-centered approaches (multiple speakers). This framing influenced all subsequent contributions and positioned the UPU’s cybersecurity initiatives as essential infrastructure protection rather than optional enhancements.


Follow-up questions

How can posts effectively upskill their staff to handle multiple digital platforms while maintaining cybersecurity hygiene?

Speaker

Kevin Hernandez


Explanation

Kevin highlighted that postal staff need training on basic digital literacy, specific platforms for different services (e-commerce, e-government, digital financial services), and cyber hygiene practices, but the specific methodologies and best practices for this comprehensive training approach need further exploration.


What are the specific technical requirements and implementation strategies for posts in Small Island Developing States (SIDS) and Least Developed Countries (LDCs) to adopt secure digital infrastructure?

Speaker

Tracy Hackshaw


Explanation

Tracy mentioned funding packages available for SIDS and LDCs but the detailed technical requirements, implementation roadmaps, and success metrics for these countries need further research and documentation.


How can the postal sector effectively balance internal versus external threat monitoring, especially in resource-constrained regions?

Speaker

Ihita Gangavarapu (audience member)


Explanation

The question highlighted a gap in understanding how postal organizations can shift focus to include external threats (phishing kits, leaked credentials, compromised third-party vendors) in addition to internal vulnerabilities, particularly in regions with limited resources.


What are the specific mechanisms and protocols for information sharing within the proposed postal sector ISAC?

Speaker

Tracy Hackshaw


Explanation

While Tracy introduced the concept of a postal ISAC for secure information sharing, the detailed operational framework, governance structure, and specific protocols for confidential collaboration need further development and research.


How can post offices serve as disaster recovery and fallback infrastructure during digital disruptions?

Speaker

Mutu Sami (online participant)


Explanation

The question raised the important consideration of whether postal modernization design accounts for post offices serving as backup hubs during extraordinary digital infrastructure disruptions, which requires further research into disaster recovery planning.


What are the most effective sector-specific early warning mechanisms for postal services?

Speaker

Floreta Faber


Explanation

Floreta mentioned developing sector-specific early warning mechanisms but the detailed technical specifications, implementation strategies, and effectiveness metrics of these systems need further research and documentation.


How can the financial sector CERT model be adapted and implemented for the postal and logistics sector?

Speaker

Mats Lillesund


Explanation

Mats highlighted the success of financial CERT cooperation in Nordic countries as a model, but the specific adaptation requirements, governance structures, and implementation strategies for the postal sector need further exploration.


What are the long-term sustainability models for cybersecurity budget allocation in postal organizations, particularly in developing regions?

Speaker

Kevin Hernandez


Explanation

Kevin’s research showed that cybersecurity budgets are not keeping up with workloads, especially in developing regions, but sustainable funding models and budget allocation strategies need further research and development.


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

WS #214 AI Readiness in Africa in a Shifting Geopolitical Landscape


Session at a glance

Summary

This discussion focused on AI readiness in Africa within a shifting geopolitical landscape, hosted by the German Federal Ministry for Economic Cooperation and Development with representatives from government, civil society, and private sector across multiple African countries. The session addressed how African nations can build sovereign and resilient AI ecosystems while avoiding digital neocolonialism and ensuring AI serves local needs rather than external interests.


Key speakers emphasized that AI is already impacting governance in Africa through both beneficial applications and harmful uses like automated disinformation campaigns and surveillance tools. Government representatives from Mauritania and South Africa shared their experiences developing national AI strategies, highlighting challenges including limited infrastructure in rural areas, capacity gaps, and coordination difficulties across sectors. Smart Africa’s leadership stressed the need for deliberate, intentional action rather than passive optimism, noting that over 1,000 African startups currently rely on foreign AI models, creating dependency risks.


The discussion revealed that 19 African countries have developed national AI strategies, but implementation remains challenging. Speakers emphasized the importance of inclusive, transparent governance frameworks grounded in human rights and constitutional values. Civil society representatives highlighted their crucial role as watchdogs, educators, and advocates for marginalized communities, stressing the need for mandatory public interest impact assessments and regional coalitions.


Private sector perspectives emphasized creating enabling environments and moving from “paper to pavement” in policy implementation. The conversation underscored Africa’s potential to define its own AI race focused on usefulness rather than raw power, particularly in agriculture, healthcare, and education. Speakers concluded that success requires blended financing, multi-stakeholder participation, and policies that foster homegrown innovation while protecting citizen rights and digital sovereignty.


Keypoints

Major discussion points


– **AI Governance and Policy Frameworks**: The need for African countries to develop inclusive, transparent AI governance frameworks that are grounded in local values, human rights, and constitutional principles rather than importing wholesale benchmarks from other regions. Multiple speakers emphasized the importance of harmonizing national AI strategies across the continent while respecting individual country sovereignty.


– **Digital Sovereignty and Data Control**: Concerns about AI-driven digital neocolonialism and the risk of African data being processed and trained on models located outside the continent. Speakers stressed the importance of keeping African data in African hands and developing locally-rooted AI solutions that serve African needs rather than external interests.


– **Infrastructure and Capacity Building**: The critical challenges of limited computing power, inadequate internet coverage (especially in rural areas), digital literacy gaps, and the need for significant investment in technical expertise and infrastructure to support AI development across Africa.


– **Multi-stakeholder Collaboration and Inclusivity**: The importance of involving diverse stakeholders – governments, private sector, civil society, youth, and local communities – in AI development and governance, with particular emphasis on democratizing AI literacy and ensuring marginalized voices are heard in policy-making processes.


– **Practical Implementation and Funding**: Moving beyond policy documents to actual implementation, with discussions about blended financing models, public-private partnerships, and the need for realistic, actionable steps that governments can take to build AI readiness while addressing youth concerns about job displacement.


Overall purpose


The discussion aimed to explore how African countries can build sovereign and resilient AI ecosystems that strengthen rather than undermine democratic governance, while navigating a complex geopolitical landscape marked by technological rivalry and economic pressures. The session sought to identify practical strategies for ensuring AI serves local African needs and values rather than perpetuating digital colonialism.


Overall tone


The discussion maintained a constructive and collaborative tone throughout, characterized by cautious optimism balanced with realistic acknowledgment of significant challenges. Speakers demonstrated both urgency about the need for action and pragmatism about Africa’s current limitations. The tone was notably inclusive and pan-African in perspective, with participants building on each other’s points rather than disagreeing. There was a consistent thread of determination that Africa can succeed in AI development by defining its own terms and leveraging its unique strengths, particularly its youthful population and linguistic diversity.


Speakers

**Speakers from the provided list:**


– **Ashana Kalemera** – Programmes Manager at CIPESA (Collaboration on International ICT Policy for East and Southern Africa), Session Moderator


– **Neema Iyer** – Founder and Executive Director of Pollicy, a civil society organisation based in Uganda


– **Mlindi Mashologu** – Deputy Director General of the Department of Communications and Digital Technologies, South African government


– **Lacina Kone** – Director General and CEO of Smart Africa from Côte d’Ivoire


– **Matchiane Soueid Ahmed** – Special Envoy of the Mauritanian Ministry of Digital Transformation and Public Administration Modernization for the government of Mauritania


– **Shikoh Gitau** – CEO of Qhala, a private sector company in Kenya (joined virtually)


– **Audience** – Various participants from the floor including representatives from Gambia, Ghana, and Liberia


**Additional speakers:**


– **Dennis Brumand** – Advisor of the Global Project on Digital Transformation, GIZ (mentioned as online moderator but did not speak in the transcript)


Full session report

# AI Readiness in Africa: Building Sovereign and Resilient Digital Ecosystems


## Executive Summary


This comprehensive discussion on AI readiness in Africa took place at the Internet Governance Forum (IGF) in Oslo, hosted by the German Federal Ministry for Economic Cooperation and Development. The session brought together representatives from government, civil society, and private sector across multiple African countries to address how African nations can build sovereign and resilient AI ecosystems while avoiding digital neocolonialism.


The discussion revealed both the urgency and complexity of Africa’s AI governance challenge. With AI already impacting governance through automated disinformation campaigns and surveillance tools, while simultaneously offering transformative potential in agriculture, healthcare, and education, African stakeholders face the imperative of moving from passive optimism to deliberate intentional action. While 19 African countries have developed national AI strategies, the implementation gap remains substantial, requiring innovative approaches to financing, capacity building, and multi-stakeholder collaboration.


## Key Participants and Perspectives


The session was moderated by **Ashana Kalemera**, Programmes Manager at CIPESA, with **Dennis Brumand** serving as online moderator from GIZ. The discussion featured diverse perspectives from across the African continent.


**Neema Iyer**, Founder and Executive Director of Pollicy, provided critical civil society insights, emphasizing that AI is already undermining governance in Africa through automated disinformation campaigns, erosion of public trust, manipulation of political discourse, and surveillance tools used to stifle voices at scale. She outlined civil society’s role as “watchdogs, advocates, educators, and storytellers” in AI governance.


**Mlindi Mashologu**, Deputy Director General of South Africa’s Department of Communications and Digital Technologies, offered governmental insights into developing national AI strategies. He emphasized transparent, inclusive AI governance frameworks grounded in constitutional values, while acknowledging significant infrastructure challenges including the digital divide and limited compute capabilities.


**Lacina Kone**, Director General and CEO of Smart Africa, provided a continental perspective that proved influential throughout the discussion. His assertion that “we cannot be passively optimistic, we have to be deliberately intentional” set the tone for the conversation. He highlighted that Africa is the number one frontier for data availability to train AI, yet over 1,000 African startups daily download APIs from foreign models, with training data leaving the continent permanently.


**Matchiane Soueid Ahmed**, Special Envoy of Mauritania’s Ministry of Digital Transformation, contributed insights from a country actively developing its AI strategy, emphasizing that data sovereignty is as important as territorial sovereignty and that AI governance must be co-created with African voices, grounded in human rights with civic oversight.


**Shikoh Gitau**, CEO of Qhala, participated virtually and brought private sector perspectives. Her pointed question about democratization, “Every time I hear democratising this, democratising that, I say, what’s the definition of democracy you’re talking about?”, prompted deeper reflection on whose interests are truly being served in AI policy development.


## Current State of AI in Africa: Challenges and Opportunities


### Existing AI Impact on Governance


The discussion began with Neema Iyer’s assessment of AI’s current impact on African governance, outlining how AI is already undermining democratic processes through automated disinformation campaigns, erosion of public trust, manipulation of political discourse, and surveillance tools deployed at scale. This established that the conversation was about governing AI systems already actively shaping political and social landscapes across the continent.


However, speakers also acknowledged AI’s positive applications in governance, including improved service delivery, enhanced agricultural productivity, and expanded access to healthcare and education.


### Infrastructure and Capacity Constraints


Significant infrastructure challenges constrain Africa’s AI readiness. Matchiane Soueid Ahmed noted that in Mauritania, approximately 20% of the population in remote areas lacks adequate connectivity. Lacina Kone highlighted that Africa collectively lacks the computing power across its 50+ countries to train AI models locally, creating dependency on external systems.


This infrastructure gap extends to human capacity, with speakers emphasizing the need for massive investment in digital literacy, technical expertise, and institutional capacity to support AI development.


### The Data Sovereignty Paradox


A central theme was the paradox of Africa’s data wealth and digital dependency. Lacina Kone observed that Africa generates abundant data for AI training, yet this data is processed through models located outside the continent. The irreversible nature of AI training—once data is used to train a model, it cannot be retrieved—adds urgency to sovereignty concerns.


## National AI Strategies and Continental Coordination


### Current Policy Landscape


The discussion revealed that 19 African countries have developed national AI strategies, representing significant progress in policy development. However, speakers consistently emphasized the gap between policy formulation and practical implementation. As Neema Iyer asked, “How do we operationalise beautiful policies and frameworks into actual ground-level implementation?”


South Africa’s approach, outlined by Mlindi Mashologu, emphasizes transparent, inclusive governance frameworks grounded in constitutional values. The country’s upcoming G20 presidency offers an opportunity to advance AI governance frameworks with focus on “solidarity, sustainability, and equality.”


### Smart Africa’s AI Council


Lacina Kone outlined Smart Africa’s AI Council for Africa, which focuses on five critical areas: computing power, datasets, algorithms, AI governance, and market development. The Council’s work on benchmarking the 19 national AI strategies represents an attempt to identify common elements while respecting national differences.


The discussion highlighted practical tools being developed, including the African AI Governance Toolkit (available at qbit.africa) and the African AI Maturity Index (at datawall.africa).


### Harmonization Versus Sovereignty


An audience member from Gambia expressed concern about countries working in silos rather than converging on a common continental AI policy framework. Lacina Kone’s response—that “not one size should fit all, but all sizes should fit together”—captured the nuanced approach required, recognizing that while harmonization is desirable, individual countries face different challenges and priorities.


## Multi-Stakeholder Collaboration


### Civil Society’s Role


Neema Iyer described civil society’s role as watchdogs monitoring AI deployment, advocates ensuring marginalized voices are heard, educators democratizing digital literacy, and storytellers documenting lived experiences of AI impacts. She emphasized the need for mandatory public interest impact assessments before AI system deployment.


### Private Sector Engagement


Shikoh Gitau highlighted the private sector’s role in creating enabling environments and conducting awareness campaigns. Her experience with teacher training campaigns across six African countries demonstrated high demand for AI education. She emphasized the need for blended financing approaches combining government resources with private sector investment.


### Government Responsibilities


Government representatives emphasized their role in creating enabling policy environments while acknowledging capacity constraints. Mlindi Mashologu highlighted the need for regulatory sandboxes and startup support mechanisms to encourage AI-enabled economic transformation.


## Data Sovereignty and Cultural Preservation


Matchiane Soueid Ahmed’s assertion that “data must remain in African hands” and that “digital sovereignty is as important as territorial sovereignty” provided a framework for understanding sovereignty in the AI era.


Lacina Kone emphasized preserving Africa’s 2,000+ languages through locally trained AI systems, highlighting the cultural dimensions of AI sovereignty. The potential for AI to reach indigenous people in rural areas and educate them in their own languages represents a transformative opportunity.


## Economic Development and Innovation


### Redefining Success Metrics


Lacina Kone’s observation that “Africa is not looking for the most powerful AI, it’s looking for the most useful one” represented a fundamental reframing of AI development priorities. Rather than competing on computational power, this approach focuses on practical applications addressing African development needs.


### Concrete Examples


The discussion included specific examples of African AI initiatives, such as the African scientific panel with 25 doctors of African heritage working on healthcare AI benchmarking, demonstrating practical approaches to developing contextually relevant AI systems.


## Audience Engagement and Concerns


The lightning round featured questions from multiple African countries. A participant from Ghana raised concerns about youth fears regarding AI taking over jobs, highlighting the need for trust-building and proper education. A representative from Liberia asked about realistic steps for implementation, emphasizing the need for practical guidance beyond policy frameworks.


## Education and Capacity Building


A consistent theme was the need to democratize digital literacy and make AI education accessible at grassroots levels. Neema Iyer observed that elite policy discussions often don’t apply to most people living on the continent, highlighting the need for accessible communication strategies.


Shikoh Gitau’s teacher training campaigns provide a model for building capacity and confidence simultaneously, with high demand for AI education when delivered appropriately.


## Key Challenges and Future Directions


### Implementation Gap


Despite progress in policy development, the question of how to operationalize frameworks into ground-level implementation remains central. This reflects broader challenges including limited resources, capacity constraints, and coordination difficulties.


### Infrastructure and Financing


Fundamental infrastructure challenges—including limited computing power, inadequate connectivity, and capacity gaps—require innovative financing approaches. The scale of investment required compared to available resources suggests difficult prioritization decisions ahead.


### Youth Engagement


Concerns about youth fears regarding AI and employment displacement highlight the need for effective communication strategies that build understanding of AI’s potential benefits while addressing legitimate concerns about economic disruption.


## Conclusion


The discussion demonstrated both the complexity and potential of Africa’s AI governance challenge. The emphasis on moving from passive optimism to deliberate intentional action captures the urgency of the moment. Africa’s abundant data resources and youthful population provide significant advantages, but realizing this potential requires coordinated action across policy development, infrastructure investment, capacity building, and institutional innovation.


The conversation revealed a mature understanding of Africa’s position in the global AI landscape, with stakeholders articulating a distinctive vision that prioritizes utility over power, inclusion over efficiency, and sovereignty over dependency. The path forward requires sustained commitment from all stakeholders, innovative financing mechanisms, and continued collaboration across national boundaries while addressing substantial practical challenges.


As Lacina Kone emphasized, the goal is not to develop the most powerful AI systems, but the most useful ones for African contexts, suggesting a sustainable development pathway that builds on existing strengths while serving African needs and values.


## Session transcript

Ashana Kalemera: Good afternoon. Thank you so much for joining us this afternoon. I’ll also say good morning, good evening and good day, considering that there are participants joining us online from different time zones. Welcome to the session on AI readiness in Africa in a shifting geopolitical landscape. The session is hosted by the German Federal Ministry for Economic Cooperation and Development, BMZ, together with its partners on stage. I’m very, very, very honoured to be moderating this very timely discussion. My name is Ashana Kalemera. I work as programmes manager at CIPESA. CIPESA is the Collaboration on International ICT Policy for East and Southern Africa. I’m sure we’ll all agree with me that there’s huge transformative potential of AI in society, from innovation to socio-economic development. However, there are also very significant risks. These include inadequate governance frameworks, which risk deepening inequality, weakening democracy and reinforcing technological dependencies. In Africa particularly, countries are striving to build sovereign and resilient AI ecosystems in tandem with a fast-evolving geopolitical landscape. This landscape is marked by shifting alliances, intensifying technological rivalry and growing economic pressure. Whereas various stakeholders are engaging on these issues, the continent remains underrepresented in global AI development, as well as discourse. Locally driven solutions are constrained by limited investments in research, regulatory gaps, and the dominance of multinational tech companies. Meanwhile, concerns about digital exploitation and economic disparities when it comes to data processing, training of models, and low-wage labor markets in Africa also prevail. The risk of AI-driven digital neocolonialism is growing. As global powers compete for technological influence, Africa must strengthen its position to ensure AI serves local needs rather than external interests.
African nations at the moment have a unique opportunity to establish AI governance models that are rooted in fairness, in transparency, and inclusion. These AI frameworks also have the potential to align with local realities and normative considerations. These frameworks are hopefully able to foster innovation, uphold democracy, and human rights. Our speakers today, who represent a very broad spectrum of stakeholder groups, will highlight the challenges and opportunities for securing an AI future that benefits Africa. The speaker lineup includes Shikoh Gitau, who’s joining us virtually. She’s the CEO of KALA, a private sector company in Kenya. We have Mr. Lacina Kone, the Director General and CEO of Smart Africa from Cote d’Ivoire, Enchanté. Matchiane Soueid Ahmed, the Special Envoy of the Mauritanian Ministry of Digital Transformation and Public Administration Modernization for the government of Mauritania. We have Mlindi Mashologu, the Deputy Director General of the Department of Communications and Digital Technologies, the South African government. And to my immediate left, a very old and good friend, Neema Iyer, the Founder and Executive Director of Pollicy, a civil society organisation based in my home country, Uganda. Our online moderation is being done by Dennis Brumand, the Advisor of the Global Project on Digital Transformation, GIZ. You’re all welcome once again, and we’ll kick off the conversation with a lightning round, which is a one-sentence response I expect from the speakers here. And the question is, what must be done today to ensure that AI strengthens rather than undermines democratic governance in Africa? I’ll start with my left, over to you, Neema. Thank you so much for the question, and I’m very pleased to be here. I’ll start again. Hi, everyone. I’m very pleased to be here. Apologies for the mic malfunction.


Neema Iyer: So, I was saying that AI is already undermining governance in Africa, and we’re seeing this through automated disinformation campaigns, the eroding of public trust, manipulation of political discourse, and surveillance tools that are used to stifle voices at scale. These harms are often gendered, and a lot of our work is looking at a feminist perspective on new technological tools. And these harms are magnified in contexts that have weak data protection and limited digital literacy, which really opens the door to abuse. So, as such, I would say that we need AI that is based on accountability, trust, and a big component of public education. And this will include, for example, impact assessments, regional coalitions, investment in ethical and open-source alternatives that work within our context, and are based on our realities and that are focused on care and justice rather than control and extraction. Thank you, Ashana. Back to you. Thanks, Neema. Same question to you, Mr. Mlindi.


Ashana Kalemera: What must be done today to ensure that AI strengthens rather than undermines democratic governance on the continent? No, thank you. Thank you, moderator. I think on my side, I just would like to say that what is important is that we need to institutionalize the transparent, inclusive AI governance frameworks that are grounded in the constitutional values of public participation to ensure that AI supports accountable service delivery, social justice, as well as democratic resilience. But also some of the areas that I would like to highlight is the issue of removing bias from the data sets that are used to train the AI systems.


Mlindi Mashologu: Because if we are not removing these biases and also not including, you know, a large pool of, you know, demographics in terms of data sets, you’ll find that you can have, you know, the challenges, you know, and which will then undermine, you know, the democratic governance. But also, I think the last one that I just want to highlight is the explainable AI, whereby we would like to advocate to say that whatever decisions that are taken by the AI system, they need to be explainable and they need to be based on, you know, some human oversight, you know. So I think those are some of the areas that I would just like to highlight on my side. Thank you. Thank you very much.


Ashana Kalemera: Moving to my right, Mr. Kone. Thank you very much for inviting me. This is my appreciation, of course, to our partner, BMZ. In fact, can you hear me?


Lacina Kone: What you just mentioned, I would say the impact is actually worse than all of this. We talked about the bias of the AI. Don’t forget, today even if you have a PhD degree, you can still be ignorant in terms of AI. If we go based on a foundation where it is not enough to go to school, but you have to adapt to AI, and we all know that the definition of inclusion in Africa is completely different from the definition of inclusion in the West. Because we believe that AI will be the equalizer, be able to include people who do not speak any other language. Remember, we have more than 2,000 languages in Africa. So if we allow those languages to be trained on the AI system, but not trained by us, it basically means even indigenous people will be impacted by any cultural biases using AI. Therefore, all that my predecessors have just mentioned is very true. So what do we do about it? We can no longer be passively optimistic. We have to be deliberately intentional. That’s why Smart Africa put together last April the AI Council for Africa. What does that actually mean? The AI Council for Africa will look at five different things. One, computing power. Collectively today, across more than 50 countries on our continent, do we have the necessary computing power to be able to train our data? Two, it’s going to look at the data sets. No matter what other people have said, Africa today is the number one frontier in terms of availability of data to train AI. Number three is going to look at the algorithm, the algorithm which carries the cultural bias, what goes into the AI to be able to respond to people. And number four is the AI governance. On the AI governance side today, there are about 19 countries in Africa who have already developed their national AI strategy.
It’s our role at Smart Africa to be able to harmonize those policies, to be aligned exactly with what the UN high-level commissioners came up with together on AI governance, as well as looking into what the Europeans have put together in terms of safeguarding. Of course, all of these have to be aligned with the AI strategy developed by the African Union. And number five is the market. I think what is happening today, what has happened, happened. But what do we need to do about it? We need to be doing these things intentionally. It’s actually worse than you want to think. Because currently in Africa, on our continent, there are over a thousand startups who are downloading on a daily basis the APIs from the OpenAI frontier models, as well as DeepSeek, which is the Chinese one. But they’re training those models. Those models are not located on our continent. They’re located outside the continent. And the AI system is such that once you train the model, you can never get the information back again. So what do we do about it? It actually goes even to the foundational model of our relationship with our partners in Europe. What do we do? Based on the fact that if you look at North America, everything is based on the private sector, which is at the heart of capitalism. When we look at China, it’s based on the control of the government. And Africa wants to go with the user-centric approach for everything. So it’s time for us to actually be able to look at the AI, not the most powerful one, but the most useful one, by looking at our values, by looking at or preserving our languages. to be able to create opportunity, not for a few, but for the many. Thank you.


Ashana Kalemera: Thank you very much, Mr. Kone. Over to you, Madam Ahmed.


Matchiane Soueid Ahmed: Thank you very much. Thank you very much, moderator. I fully agree with what my colleagues have mentioned right now, but I would like to summarize it in two points. First, to ensure AI makes democratic governance stronger in Africa, we must invest now in inclusive, transparent AI policy frameworks that are co-created with African voices. Second, these frameworks must be grounded in human rights and supported by strong civic oversight. Thank you very much. Thank you all for the very quick lightning round. We’ll now deep dive into specific questions, per sector, per experience, per expertise. And I’ll stay with you, Madam Ahmed. Mauritania was amongst the first African countries to launch a national AI strategy. What key challenges have you faced in translating this strategy into tangible actions? And how are you addressing coordinated institutional ownership and inclusivity to ensure a democratic and locally rooted AI in Africa? Thank you very much for this important question. Yes, Mauritania has developed an AI strategy with the support of German cooperation through BMZ and GIZ. But to implement that strategy, we are facing some challenges. Let me first start with infrastructure availability in rural areas. As you may know, Mauritania has a huge surface, about one million square kilometers. And there are a lot of small villages, far away from each other. That makes it difficult to serve more than 20% of the population living in these areas. So we are facing this reality, but we hope to address it as soon as possible with the support of government partners, like German cooperation through KFW, and the European Union too, and the World Bank. Another challenge is limited capacity and technical expertise, which slowed this strategy’s implementation. To address this, we are investing in partnerships with universities and international organizations to build local skills and knowledge.
We faced also a coordination problem across government and sector, and to tackle this, we established an inter-ministerial working group to ensure alignment between the AI strategy and national development priorities with regular consultation among stakeholders. We are also aware that ownership and inclusivity are crucial. That’s why our approach has focused on creating a participatory process, engaging civil society, local tech communities, and youth in shaping policies and pilot projects. This will ensure that our AI applications are relevant, trusted, and anchored in local needs. Ultimately, our goal is to build an AI model made in Africa that reflects our values and development priorities, inclusive, ethical, and locally learned.


Ashana Kalemera: Thank you very much. Thank you very much, Ms. Ahmed. I’ll come back to you. Mr. Kone, having had the government perspective and being an intergovernmental organization, from your experience, how can Smart Africa help its member states build AI governance models that protect national sovereignty while also advancing democratic values and public interest, as Ms. Ahmed has pointed out?


Lacina Kone: Thank you very much for the questions. In the current geopolitical situation, and I like to say this all the time, multi-stakeholder multilateralism is a choice, but we’re living in a multipolar geopolitical moment. The only thing that should bind us together is our differences. That’s why Smart Africa is here. It means we are not looking for a national AI policy that is like a cookie cutter. Not one size should fit all, but all sizes should fit together, by preserving human rights and digital rights. And this is all drawn from the UN high level. If you look at the AI governance of the UN high level, they are structural. They are basically regulatory issues, they are national issues. The national AI governance may be a little bit different, but if you look at the common denominator based on ethical inclusion and sustainability, it has to be included in a national strategy. However, by saying that, today in Africa, no single nation will be able to build an AI system alone. My sister just mentioned infrastructure. But you see, we need to understand, some people said, oh, Africa, you might be so behind because you only have 40% of your population covered with internet use, then why are you talking about AI? That’s not the question. Africa is not looking for the most powerful AI, it’s looking for the most useful one, looking at agriculture, looking at healthcare and looking at education. Why? Because if you know the history, for the past 45 years since 1980, the African continent has gained 1 billion people. But in 45 years, if you gain 1 billion people, has the number of schools increased at that rate? No. Have the hospitals increased at that rate? No. Have the schools and the financial sector banks grown at that rate? No. It means for us, digital transformation, when we look at AI, it is the evolution within the ecosystem of the digital economy. 
But it’s a revolution within itself, which means AI allows us today to actually reach indigenous people in the rural areas, to educate them in their own language, so they become tech-savvy like anyone who’s been to the university. So the way we are doing that at Smart Africa, we do have a Council of ICT Ministers, where Mauritania is a part of it. We have a Council of African Regulators. We have a Council of African IT Agencies. And we also have the board members, who are heads of state themselves. That’s why we created the AI Council, to be able to address all of this methodically and systematically in a way that we align and we mutualize our resources to be able to face this revolution, which is AI. Thank you. Thank you. I don’t know about the audience, but I’m here nodding and taking notes frantically. We had hoped to have the private sector perspective complement the government and intergovernmental ones, but unfortunately, our virtual speaker, Shikoh, has not been able to join us. Oh, sure. Please go ahead, before I go to civil society. When we talk about a private sector perspective, I want you, ladies and gentlemen, all of you, to remember, today when any nation talks about AI, they talk about a frontier model. We’ve seen the Americans talking about frontier models, OpenAI, Llama, Grok 3, what have you. When we hear the French people talking, they are behind Mistral. When we hear the Chinese talking about AI, they’re talking about DeepSeek. All of these people are private sector, which basically means the governments in Africa know very well their role. They should be creating a conducive environment for the private sector to have a kind of blended financing. Because when we talk about financing, America will declare about 500 billion, the French about 200 billion. So what is Africa going to declare in terms of funding? This fund will be and should be blended. 
Because don’t forget, we have over half a dozen MNOs, mobile network operators, operating on our continent who own data centers. It is existential for them to be able to get into AI because, at the end of the day,


Ashana Kalemera: it’s about making money. So even in our financing, we should be taking that into account. That’s where the role of the private sector comes in. Thank you. Thank you very much, Mr. Kone. Shikoh has joined us online, but before we move to her, we’ll first hear from civil society and then an additional perspective from government. So Neema, civil society is crucial for ensuring that AI governance serves the public good. What unique contributions can civil society make in this space and what kinds of frameworks or coalitions are needed to support effective oversight and inclusive participation? Thanks so much for the question.


Neema Iyer: So of course, civil society is extremely crucial to the entire process. And I’m not just biased in saying that, but we play a role as watchdogs, as advocates, as educators, as storytellers, and shaping the entire narrative about AI on the continent. And we’re also there to question the political, the economic, and the social logic on why we’re actually deploying AI. We need to resist this quiet import of oftentimes harmful tools, you know, such as excessive surveillance or predictive tech. There’s so much good that can come from AI, but of course there’s also a lot of harms and I think we should have a very balanced view of both. So we can’t just be on the side of, you know, AI is harmful or AI is beneficial, but really having that balanced view to think critically about why we’re bringing AI in. The second one that’s really important is to track the funding, to understand who is funding what in the African context and what are these different foreign interests of all these players, of all the tech giants in shaping our AI agenda. I think we need to be very critical about that as well. The third one I would say is that we need to document the lived experiences, both of the benefits and the harms, especially to marginalized communities. And, yeah, we can do tech audits, of course, but I think civil society is uniquely placed to address these harms and benefits and to document them and to tell these stories so that they then go back and inform the policies. The fourth one I would say is we really need to democratize digital literacy. I feel like, you know, we’re in Oslo here having this conversation that doesn’t apply to most people living on the continent. I think there’s really a need to take these conversations to a grassroots level. 
It is extremely important for us to be represented at these high-level multi-stakeholder meetings, but that doesn’t mean we do it at the cost of not involving local communities at different levels in this conversation. And, yeah, I think the education needs to be accessible. It needs to be forward-looking. It needs to be for common people, for leaders, in local languages. We need to use creative ways of talking about it. We can’t always use these huge data governance languages, and then, you know, people’s eyes glaze over because they don’t really know what you’re saying. We really need to make it accessible. I think that that is a very urgent need. The fifth one I would say is that we also need to get back to making things. And, yes, I agree with the point. We do not have the investment of that value, but it doesn’t mean that we don’t do anything. I want to propose that we have alternative design of products, of AI products that are based on different incentives and that, you know, are really tailored to our needs. And so, moving to the question of, like, what kinds of frameworks I would say that we should really have mandatory public interest impact assessments, so I touched upon that earlier. But it would be lovely if before an AI system is deployed that, you know, we do assessments of what it can impact, who it will impact, and that, you know, throughout the process of having it, we continue to do these assessments. And we understand how it impacts, you know, social, economic, gendered, environmental impacts of these AI systems. The next one, I think we need regional civic coalitions, so it’s amazing to bring governments together to talk about AI, but what other groups are we leaving out? What silos are we creating when we bring these groups together? Who is missing at this conversation, at this table, for example? 
So, you know, women’s rights organizations, trade unions, journalists, researchers, can we all come together to shape these policies and that they’re not just done at a very high government level? The third one, again, going to my colleague’s point, is funding. We really need better frameworks for how we are going to fund AI on the continent. We need to come up with our own funding models for innovation in a way that is sustainable, in a way that we bring young people in to actually create product. I feel like that is quite a bit of a missing link. And then the last one, I think, is, yeah, just bringing as many stakeholders into the conversation as possible, and then just questioning how are we going to operationalize all these beautiful policies and frameworks that we’re talking about? So how do we take it from this table here, actually putting it on the ground and actually seeing it in action? So thank you so much. Thanks, Neema. What you’re saying resonates a lot with me, coming from civil society, but absolutely what was also said from the government and intergovernmental perspective. Mlindi, I’ll move over to you. In anticipation of South Africa’s G20 presidency, how can AI maturity assessments be leveraged to strengthen context-specific governance across Africa? What role do institutional readiness, democratic safeguards and inclusive policy-making play? Thank you for that question.


Mlindi Mashologu: As the country, South Africa, we assume the G20 presidency and I think it’s important to note our banner there is solidarity, sustainability and equality. So we see that we are presented with a unique opportunity to lead a new chapter in the digital governance, not only for ourselves but also, you know, for the community at large. I think it’s important to note that, you know, one of the powerful tools that we can use is the AI maturity assessments. That is one that we did work with the GIZ in terms of South Africa, which we participated on and one of the areas that we picked up is that, you know, these assessments, they allow us as governments to diagnose the strengths, you know, identify the gaps, but also chart, you know, clear actionable pathways for the responsible AI adoption. But also it’s important to note that they are not just technical diagnostics, they are also political governance instruments. So if we apply them with PIPOs, you’ll find that they can actually be transformative in nature and I think they can actually assist in terms of the African-led, you know, governance frameworks. But now if I can just also look into, you know, how we can anchor them into the African realities, you’ll find that too often, you know, the benchmarks are normally imported wholesale benchmarks, which are normally failing to account for. and other governance challenges such as limited compute infrastructure, fragmented data ecosystems or linguistic diversity. And these are some of the area things that are very critical when you look into the African continent. So for then the South African and our continental peers, so we are advocating that we need to develop and localize these tools to reflect our developmental priorities, which then include inclusive service delivery, ethical public sector automation as well as community trust. But also it’s important to recognize that institutional readiness is the cornerstone for any AI governance framework. 
We do understand that as policy makers, policy alone are sometimes not enough. But we need some of the institutions that are technically equipped, policy-coherent and operationally agile. So one of the areas that we are currently doing as a country is the development of an AI policy for the country. And I mean the policy, you know, it does have broad statements that we are looking into. And I must say that we have been quite behind in terms of that. But at least we are on the final stages. And I think if I can just highlight just a few areas that we are looking on as a country. One is the area of capacity development, where we are looking at strengthening the AI-related education. And these are some of the things that also came, you know, from when we are unpacking, you know, the policy, you know, the frameworks that we looked into. But also we’re looking on the areas of AI for economic transformation, where we are looking in terms of AI as well in public service delivery, but also supporting the startups as well as through regulatory sandboxes. The other area we’re looking on is the area of responsible governance. So we are advocating on some of, you know, various bodies that needs to be established, which includes your ethics board, AI ethics board, national AI. the AI Commission and the AI Regulatory Authority because we feel that some of the regulators that we’ve got might not be, you know, up to people in terms of, you know, regulating AI, but also the areas of ethical and inclusive AI, where we’re looking at developing, you know, localizing ethical standards, but also cultural preservation and international integration, as well as human-centered approach. 
If you look at all these broad aspects, you will find that some of the work colleagues have described aligns with them, because while we are developing these policies there are challenges that are deep-rooted in our societies. If you look at the digital divide, we still have a digital divide as a continent, which we need to address so that whenever we put these frameworks in place they can respond to it. If you look at compute capabilities, we still do not have sufficient compute capabilities on the continent, so that is something we need to address significantly as part of developing the governance frameworks. I also want to add that we need to embed democratic safeguards into every layer of AI development and deployment, because without robust mechanisms for transparency, public accountability and recourse, AI can deepen exclusion, entrench bias or even erode civil liberties. Democratic governance in the age of AI means placing human rights and constitutional values at the centre, from procurement processes to algorithmic audits. The last point I want to add, critically, is the role of inclusive policymaking: AI governance cannot just be a technocratic exercise, but it needs to be…


Ashana Kalemera: Thank you very much, Mlindi. We are looking very keenly and proudly at South Africa as it steers the G20 presidency, and we hope these are issues that will be driven forward during its tenure. Our fifth speaker online, Shikoh Gitau, was able to join us, and I would like to put the question on the private sector's role back to her. Private companies are at the forefront of AI innovation, Shikoh. From your experience, what governance frameworks are necessary to ensure this innovation aligns with democratic values, and how can the private sector contribute to trust, transparency and accountability of AI, which have been resounding issues from all the speakers here? Thank you so much for having me. Can you hear me? Yes. Thumbs up. Awesome, thank you. So, I'm joining in from very cold and rainy Nairobi,


Shikoh Gitau: and I'm really glad to be here. Thank you so much for having me, and apologies for joining in late. So, the question around the private sector and what the private sector can do starts with building an enabling environment. A couple of months ago, I was speaking to a group of policymakers in government, who asked: what do you need? We just need governments to, first, understand the potential of the space we are in. AI is not a passing fad; it is not a buzzword that is going away. It is a consequential technology for our generation, and we need to take care of that. On building an enabling environment, I was hearing the interventions from Engineer Kone and from the representative from South Africa, and they are quite interesting. But how do you then move that from paper to pavement? How we do that is by incentivizing the private sector to come in. I hear the conversation around compute and around talent development, but what does that actually mean in practice? I'll give an example: we have been working with a number of governments on one hand, but also with startups, trying to understand what they need to succeed. There are so many pilots happening on the African continent, but they are not getting any traction. You soon realize that we are building all these things in the AI space, but our market does not understand AI, does not know what AI is or the potential that AI has. And what is the difference between what we are doing right now and what they were being sold five years ago with mobile internet? Being able to concretize that for our market base, our customer base, our user base is critical. And while it is the work of the government to educate the populace, it is the work of the private sector to create massive awareness. But if the government comes in to interrupt this, there is a challenge.
So, for example, this week we are running an AI awareness, fluency and literacy campaign across the continent in six markets, in six countries, and we are targeting teachers. We are not going through the normal channels, through ministries of ICT or ministries of education; we are working with teachers directly, through their associations and their communities, to train them. Between last week and this week, the difference in demand is striking. In our baseline study there was a lot of fear around AI; what we are hearing right now is: can we continuously do this training over the next six months? The incentive the governments we are working with have provided is that they have given us free space to create this programming, and they said: once you are done with the pilot, because we are calling it a pilot, send us a report. That is a very concrete way for government to show: we as government do not know what actually needs to be done, but if you can pilot and give us results, you can scale whatever you are doing, or we can create even more space for you. It is similar with investment in compute for the startup sector. Enabling the private sector to invest in compute does not mean hindering what government can do in compute. I agree with Engineer Kone that funding for Africa has to be a blended instrument. It means the private sector will have to invest in some of these compute facilities, which will be extremely expensive. On the other hand, government and donor organizations have to invest in the earliest stages of this compute, because we need researchers and startups building, and they cannot afford the enterprise-grade computing that will be sold to them. So it is bringing in this blended thinking and creating the enabling environment.
From a democratic point of view, every time I hear "democratizing this, democratizing that", I ask: what definition of democracy are you talking about? For me, democratizing means enabling everybody, everywhere, to have access to the same opportunities and resources. If we are going to bring that democratic tenet into AI, it means that even the policies we are making are not being prescribed to Africa. What we have seen is that we need policies and policy frameworks, as rightly said by South Africa, that enable the ecosystem. But who is drafting these policies? What agenda do they have? Do they have Africa at heart when they are doing this? Those are the questions we should be asking. And who is doing this? It is enabling young Africans to contribute to this conversation. For example, one of the things we are doing, with support from Smart Africa, is an African scientific panel that is calling on young Africans, both on the continent and in the diaspora, to support their governments and governments across the continent in drafting some of the frameworks that have been spoken about. But beyond drafting and writing them out, it is bringing them to life. For example, there is a compute project and a benchmarking project being run by 25 doctors from across the world with African heritage, looking at whether these models for health care respond to African needs and African realities. That is what democratizing means. It starts with creating the enabling environment and getting out of the way, but then also resourcing, and accepting help from the democratized environment.


Ashana Kalemera: Thank you so much. Thank you very much, Shikoh. The clock, I tell you, is moving much faster than it should, so we need to speed this up a bit. Because we’re here to learn and exchange, I’d like to give an opportunity to our participants to share any comments, reactions, or questions to the five speakers. There are microphones on either side of the room there. Please feel free to walk over, introduce yourself, and share your comment, reaction, or question. As those in the room make their way to the microphone, I will also check with our online moderator if there are any questions from the online participants. No questions from the online participants, but I see one participant walking to the microphone. The floor is yours, sir. Okay. Good afternoon, and thank you very much. I’m a little bit tall. Good afternoon, everybody.


Audience: My name is [inaudible]; I am from the Gambia. The question I have is this: it seems like each African country, or most of them, is on the path of creating its own policy. I am trying to find out from Smart Africa in particular: what can you do to bring all these countries to the table? I know that each country has its own way of doing things, but at least there must be a convergence where we can have a policy that could be used across the continent, like we had with the African Union Convention on Cyber Security and Personal Data Protection. Can we have something that goes across the continent, instead of everybody being in silos? Otherwise, it is going to take a very long time before that is done. So I want to ask Mr. Kone whether they can bring everybody to the table and at least try to work towards a common platform. Thank you.


Ashana Kalemera: Thank you very much, Alhaji. We have a second question. We’ll take that before coming back to the speakers.


Audience: Good evening, everyone. Is it on? Okay. My name is Lydia Lamisa Akamvareba from Ghana. I am looking at the theme up there: AI readiness in Africa in a shifting geopolitical landscape. Yes, it is true. But we need the kind of policies that will create the enabling environment that will bring trust in African society. Currently, Africa has one of the youngest populations in the world, and the youth have a challenge: whether AI is coming to take over their jobs. We as policymakers need to build that trust. I want to find out: what are we doing to build that trust among the youth in Africa? Thank you.


Ashana Kalemera: Thank you very much, Honorable Lydia. Yes, sir. If you walk a little bit quicker to the microphone, we’ll take your question. Good day, everyone. My name is Peter King. I’m from Liberia. Mine is in the form of a comment as well. I realize that there are four key areas that I can


Audience: talk about. In Africa now, what are we looking at? A lot of people work on policies, but we are forgetting to look at the issues of digital literacy and the level of infrastructure, and these are things that should come to the fore once we are looking at readiness. So, to the panel up there, thank you so much for the nice presentations. Can you help us understand better, from our African context, what realistic steps governments can take to ensure that readiness becomes a reality, and to ensure that the claim that Africa has a youthful population really counts? These are challenges. We are looking at the opportunities: can they be leveraged for the youth to use? We are always using tools like OpenAI's ChatGPT, which were not created by Africa. So where do we go from here?


Ashana Kalemera: Thank you so much. Thank you very much, Peter King. So we have 11 minutes to the end of the session. So we’re going to do a crash course in closing remarks, responses to questions and final thoughts, all in one go. So I’ll invite Mr. Kone to take the question that was directed specifically at you. And then for the rest of the speakers, we’ll reflect on the questions and issues around trust and realistic steps that have come from the floor, while at the same time, sharing concluding remarks on what the future should look like five years from now. So Mr. Kone first.


Lacina Kone: Thank you very much for the question regarding policy harmonization of national AI strategies. As I said before, what really matters is not that one size should fit all, because each country has a sovereignty you have to take into account, but we have to make sure that each size should fit together. Since April, we have scanned our continent, and we found close to 17 or 18 countries that have already developed a national AI strategy, including, of course, Nigeria and Mauritania. What we did was take all of these national AI strategies, and we also looked into the AI governance and regulatory environment developed at the high level of the United Nations, what exactly it entails, and into the African Union AI strategy. We tried to do a comparison to come up with a benchmark: what are the countries saying, are they addressing exactly what is required by the United Nations, how does it compare to Europe, and how does it compare to the African Union? So we are coming out with the benchmarking, and this is also part of the work we are doing for the AI Council for Africa. Starting from this July, we will be having a CMICT, which is a Council of ICT Ministers, and we will be sharing it with them. We will say: if you look at the national AI strategy of Nigeria, what were the best lessons learned, compared to what Egypt has? You might see that Egypt, for example, is probably skewed more towards startup action, startup hubs and creation, while Nigeria is focused on something else. So we have compiled all of these, and of course we are not only talking, we are actually walking our talk.
This will be shared with our Council of ICT Ministers, and from there it goes to the regulators. It is very true that so far we have about 19 countries with different AI strategies. But we need to remember something: if you look at the fundamentals, which are the common denominators, the ethical use of AI, inclusivity, and sustainability, 90% of those national strategies address them. However, you may elaborate the best national AI strategy, but is it conducive to private sector investment? That is the one question we need to ask, because at the end of the day government should be creating laws and regulations that create a conducive environment for the private sector to be able to chip in. Thank you.


Matchiane Soueid Ahmed: Thank you very much, Mr. Kone. Still on my right, Ms. Ahmed: reactions to the questions from the floor, as well as concluding remarks on what the future of AI on the continent should look like. Thank you very much. I want to respond to some reactions from the audience. I totally agree that working in silos will not solve the problem; at the very least we need to look for synergy between different initiatives. But the important question is how, because, as you know, the context of each country is different, and even the contexts of the regions within Africa are different. So we need to think outside the box to find solutions that suit everyone's context. About the future of African AI: as we shape African AI, we must lead with principles. Data must remain, in my understanding, in African hands, because in the digital era, and especially in the era of AI, sovereignty is not just territorial but also digital. The second conclusion I can mention is that equitable and democratic AI is only possible if policies empower local communities, protect citizens' rights and foster homegrown innovation. Thank you very much. Thank you very much, Matchiane. I'll move to my middle now, because I see Shikoh on my screen in front of me.


Ashana Kalemera: Shikoh, reactions to the questions from the floor, as well as concluding remarks on AI's future on the continent. Thank you very much. I want to respond to the question that the gentleman from The Gambia


Shikoh Gitau: asked. For the past year and a half, we were doing a scan on what the right governance framework for Africa is. What we concluded is that not everybody needs a strategy; a strategy needs to be thought through. But what are the instruments that need to be in place for you as a government to start on this AI journey? So what we developed, and what was launched in April, is the African AI Governance Toolkit. If you go to qbit.africa, qbit.africa, you should be able to see it. It enables you as a government official, a regulator or a ministry to see: should we just have AI principles guiding our work around AI? Should we have a policy framework? Should we have a strategy? And it offers a simple cheat sheet you can walk through to help you develop the principles, as outlined by Engineer Kone, of safety, ethics and such, so you can produce a very simple policy framework, principle guideline or strategy that will help you on this AI journey. We recognize that not every country is able to invest in having an AI strategy; it can be a little premature. There are a couple of countries we are working with in Central Africa that are starting with a digital strategy, not even an AI strategy, but they need to start working on AI. So a simple framework is what we are trying to work on with them so they can start on the journey. On the second one, I love the second question, from Ghana, and it is a very critical question, but I always say you cannot work on what you cannot measure. We are talking about talent, about maturity, all these questions, but does anyone know where you stand? Does Ghana know where it stands in terms of AI maturity on the African continent or in the world, or where it stands on talent readiness?
So again, the other instrument we have worked on, and, as Engineer Kone mentioned, around the AI Council, is the African AI Maturity Index, which is a live tool, not a document, constantly being updated as the different strategies and policies are put in place. If you go to datawall.africa, d-a-t-a-w-a-l-l dot africa, you will be able to see the maturity index. We have broken it down to very, very simple metrics, down to the schools that are teaching STEM in your country. It is very granular, to help you realize, as you are solving for AI, that maybe you should start introducing more STEM courses, not even AI courses, in your country. The second one is the Talent Index, which is primarily focused... I'm going to interrupt you. I'm finishing. I'm finishing. This is my last one. All right. Go ahead. This is measuring what talent actually means and what you need to put in place; again, it is very actionable. What should Africa do? I am very bullish about Africa. If you have heard me speak, I always say Africa can, and we can win this AI race because, as I always quote Minister Lacina Kone, we are defining what the race is about. We are not following other people's rules. Thank you very much. Thank you, Shikoh. So, Mlindi, and then Neema.


Ashana Kalemera: The race is on. One minute each. No, thank you. I think I just want to touch a bit on what the other questions were.


Neema Iyer: …where I'm doing my PhD, a lot of students are submitting AI-generated assessments. What are you grading as a teacher? So the whole way we assess skills needs to completely change. In terms of my closing remarks, I think we need to continue to ask radical and critical questions about AI. Always, always, always question everything. What are the harms? What are the benefits? How is it serving our future? How is it serving the future of our youth? Thank you so much for having me. Thank you. Twenty-eight seconds for me to wrap up this very rich, rich discussion. Quite clearly, it is not yet Uhuru for AI in Africa, but given what has been discussed here, with the right investments the continent stands to benefit significantly. The areas for prioritization that have come through include pushing for accountability, public education, transparency, inclusion, not only from a language perspective but also a gender perspective, and ensuring co-creation, infrastructure investment, multi-stakeholder consultation and participation, impact assessments, that was a very, very good one, funding, procurement, and ensuring that policies are not just prescriptions but actually live up to reality. Thank you so much.


Ashana Kalemera: We appreciate the perspectives that have been shared from Côte d'Ivoire, Mauritania, Ghana, the floor, the Gambia, South Africa, as well as Uganda. Once again, thank you to the speakers. Thank you to the organizers, BMC. We appreciate the conversation and ensuring that discourse about AI readiness on the continent is making it to the global stage, i.e. the IGF. Thank you very much. We wish you all a very good evening, and thank you for attending the session. Thank you.



Neema Iyer

Speech speed

164 words per minute

Speech length

1249 words

Speech time

456 seconds

AI is already undermining governance through automated disinformation, surveillance, and manipulation of political discourse

Explanation

Neema Iyer argues that AI is currently causing harm to governance in Africa through various mechanisms including automated disinformation campaigns, erosion of public trust, manipulation of political discourse, and surveillance tools used to suppress voices at scale. She emphasizes that these harms are often gendered and are magnified in contexts with weak data protection and limited digital literacy.


Evidence

Examples include automated disinformation campaigns, eroding of public trust, manipulation of political discourse, and surveillance tools that are used to stifle voices at scale. These harms are magnified in contexts that have weak data protection and limited digital literacy.


Major discussion point

AI Governance and Democratic Values in Africa


Topics

Human rights | Legal and regulatory | Cybersecurity


Need for regional coalitions and bringing multiple stakeholders beyond just governments into AI policy discussions

Explanation

Neema Iyer advocates for inclusive policy-making that brings together various stakeholder groups beyond government officials. She emphasizes the importance of including women’s rights organizations, trade unions, journalists, and researchers in shaping AI policies rather than limiting discussions to high government levels.


Evidence

Examples of missing stakeholders include women’s rights organizations, trade unions, journalists, researchers who should be brought together to shape policies rather than having them done at a very high government level.


Major discussion point

Multi-stakeholder Collaboration and Policy Harmonization


Topics

Legal and regulatory | Human rights | Development


Critical need to democratize digital literacy and take AI conversations to grassroots level in accessible language

Explanation

Neema Iyer stresses the importance of making AI education accessible to common people and leaders in local languages using creative approaches. She argues that high-level conversations must be complemented by grassroots engagement and that technical jargon should be avoided in favor of accessible communication.


Evidence

She mentions that conversations like the one in Oslo don’t apply to most people living on the continent, and emphasizes the need for education in local languages using creative ways rather than technical data governance language that makes people’s eyes glaze over.


Major discussion point

Education and Digital Literacy


Topics

Development | Sociocultural | Human rights


Agreed with

– Lacina Kone
– Shikoh Gitau

Agreed on

Urgent need for AI education and digital literacy at grassroots level


Disagreed with

– Shikoh Gitau

Disagreed on

Primary responsibility for AI awareness and education


Need for mandatory public interest impact assessments before AI system deployment with ongoing monitoring

Explanation

Neema Iyer proposes that before any AI system is deployed, there should be mandatory assessments of its potential impacts on various groups and sectors. She advocates for continuous monitoring throughout the system’s lifecycle to understand social, economic, gendered, and environmental impacts.


Evidence

She suggests assessments should examine who will be impacted and that throughout the process of having AI systems, continuous assessments should be conducted to understand social, economic, gendered, environmental impacts.


Major discussion point

Civil Society Role and Accountability


Topics

Legal and regulatory | Human rights | Development


Importance of documenting lived experiences of both benefits and harms, especially for marginalized communities

Explanation

Neema Iyer emphasizes civil society’s unique role in documenting real-world impacts of AI on communities, particularly marginalized groups. She argues that while technical audits are important, civil society is uniquely positioned to capture and tell stories of how AI affects people’s daily lives.


Evidence

She mentions that civil society can do tech audits but is uniquely placed to address harms and benefits, document them, and tell stories so they inform policies.


Major discussion point

Civil Society Role and Accountability


Topics

Human rights | Development | Sociocultural


Critical need to track funding sources and understand foreign interests shaping Africa’s AI agenda

Explanation

Neema Iyer calls for transparency and critical analysis of who is funding AI initiatives in Africa and what their motivations are. She emphasizes the importance of understanding the various foreign interests of tech giants and other players in shaping the continent’s AI development agenda.


Evidence

She specifically mentions the need to understand who is funding what in the African context and what are the different foreign interests of tech giants in shaping Africa’s AI agenda.


Major discussion point

Civil Society Role and Accountability


Topics

Economic | Legal and regulatory | Development


Civil society serves as watchdogs, advocates, educators, and storytellers while questioning political and economic logic of AI deployment

Explanation

Neema Iyer outlines the multifaceted role of civil society in AI governance, emphasizing their function as watchdogs, advocates, educators, and storytellers. She stresses the importance of civil society questioning the underlying political, economic, and social rationale for deploying AI systems and resisting harmful technologies.


Evidence

She mentions civil society’s role in questioning why AI is being deployed and the need to resist quiet import of harmful tools such as excessive surveillance or predictive tech.


Major discussion point

Civil Society Role and Accountability


Topics

Human rights | Legal and regulatory | Sociocultural



Mlindi Mashologu

Speech speed

157 words per minute

Speech length

1017 words

Speech time

386 seconds

Need for transparent, inclusive AI governance frameworks grounded in constitutional values and public participation

Explanation

Mlindi Mashologu advocates for institutionalizing AI governance frameworks that are transparent and inclusive, based on constitutional values and public participation. He emphasizes that these frameworks should ensure AI supports accountable service delivery, social justice, and democratic resilience.


Evidence

He mentions the need to institutionalize transparent inclusive AI governance frameworks grounded in constitutional values of public participation to ensure AI supports accountable service delivery, social justice, and democratic resilience.


Major discussion point

AI Governance and Democratic Values in Africa


Topics

Legal and regulatory | Human rights | Development


Agreed with

– Neema Iyer
– Matchiane Soueid Ahmed

Agreed on

Need for inclusive and transparent AI governance frameworks grounded in human rights and constitutional values


Importance of removing bias from datasets and including diverse demographics in AI training

Explanation

Mlindi Mashologu highlights the critical need to address bias in AI systems by removing bias from training datasets and ensuring diverse demographic representation. He warns that failure to address these issues will create challenges that undermine democratic governance.


Evidence

He explains that if biases are not removed and large pools of demographics are not included in datasets, there will be challenges that undermine democratic governance.


Major discussion point

Data Sovereignty and Cultural Preservation


Topics

Human rights | Legal and regulatory | Sociocultural


Agreed with

– Neema Iyer
– Lacina Kone

Agreed on

Importance of addressing bias in AI systems and ensuring diverse representation


Digital divide and limited compute capabilities remain deep-rooted challenges requiring significant investment

Explanation

Mlindi Mashologu identifies the digital divide and lack of compute capabilities as fundamental challenges that need to be addressed for effective AI governance frameworks. He emphasizes that these are deep-rooted societal issues that must be tackled alongside policy development.


Evidence

He mentions that digital divide still exists as a continent and compute capabilities are still lacking, which need to be addressed significantly as part of developing governance frameworks.


Major discussion point

Technical Infrastructure and Capacity Building


Topics

Infrastructure | Development | Economic


South Africa’s G20 presidency offers opportunity to lead digital governance with focus on solidarity, sustainability, and equality

Explanation

Mlindi Mashologu presents South Africa’s G20 presidency as a unique opportunity to lead digital governance initiatives with a focus on solidarity, sustainability, and equality. He sees this as a chance to influence global AI governance from an African perspective.


Evidence

He mentions South Africa’s G20 presidency banner of solidarity, sustainability and equality, and describes it as a unique opportunity to lead a new chapter in digital governance.


Major discussion point

Economic Development and Innovation


Topics

Legal and regulatory | Development | Economic


Need for regulatory sandboxes and support for startups through AI-enabled economic transformation

Explanation

Mlindi Mashologu advocates for creating regulatory sandboxes and supporting startups as part of South Africa’s AI policy framework. He emphasizes AI’s role in economic transformation and the need for supportive regulatory environments for innovation.


Evidence

He mentions that South Africa is looking at AI for economic transformation and supporting startups through regulatory sandboxes as part of their AI policy development.


Major discussion point

Economic Development and Innovation


Topics

Economic | Legal and regulatory | Development



Matchiane Soueid Ahmed

Speech speed

112 words per minute

Speech length

637 words

Speech time

340 seconds

AI governance must be co-created with African voices and grounded in human rights with civic oversight

Explanation

Matchiane Soueid Ahmed emphasizes that AI governance frameworks must be developed collaboratively with African stakeholders and be firmly rooted in human rights principles. She stresses the importance of strong civic oversight to ensure these frameworks serve African interests and values.


Evidence

She advocates for inclusive, transparent AI policy frameworks that are co-created with African voices and grounded in human rights and supported by strong civic oversight.


Major discussion point

AI Governance and Democratic Values in Africa


Topics

Human rights | Legal and regulatory | Development


Agreed with

– Neema Iyer
– Mlindi Mashologu

Agreed on

Need for inclusive and transparent AI governance frameworks grounded in human rights and constitutional values


Major infrastructure challenges in rural areas with limited connectivity affecting 20% of population in remote locations

Explanation

Matchiane Soueid Ahmed describes significant infrastructure challenges in Mauritania, particularly in rural areas where geographic dispersion makes it difficult to provide services. She explains that Mauritania’s large surface area with scattered small villages creates connectivity challenges for a significant portion of the population.


Evidence

She mentions Mauritania has a huge surface area of about one million square kilometers with small villages far from each other, making it difficult to serve more than 20% of the population living in rural areas.


Major discussion point

Technical Infrastructure and Capacity Building


Topics

Infrastructure | Development | Digital access


Data must remain in African hands as digital sovereignty is as important as territorial sovereignty

Explanation

Matchiane Soueid Ahmed argues that data sovereignty is a critical component of national sovereignty in the digital age. She emphasizes that African countries must maintain control over their data as a fundamental principle of AI governance and development.


Evidence

She states that in the era of digital and especially AI, sovereignty is not just territorial, but also digital, and data must remain in African hands.


Major discussion point

Data Sovereignty and Cultural Preservation


Topics

Legal and regulatory | Human rights | Economic


Agreed with

– Lacina Kone

Agreed on

Critical importance of data sovereignty and keeping African data under African control



Lacina Kone

Speech speed

159 words per minute

Speech length

1830 words

Speech time

689 seconds

Africa needs deliberately intentional approach rather than passive optimism, focusing on user-centric AI models

Explanation

Lacina Kone argues that Africa cannot afford to be passively optimistic about AI development and must take a deliberately intentional approach. He emphasizes that Africa should focus on developing the most useful AI rather than the most powerful, with a user-centric approach that differs from the private sector-based North American model and government-controlled Chinese model.


Evidence

He mentions that Africa cannot be passively optimistic but must be deliberately intentional, and that Africa wants to go with a user-centric approach, looking at AI that is not the most powerful but the most useful, focusing on agriculture, healthcare, and education.


Major discussion point

AI Governance and Democratic Values in Africa


Topics

Development | Economic | Legal and regulatory


Africa lacks necessary computing power collectively across 50+ countries to train AI models locally

Explanation

Lacina Kone highlights the critical infrastructure gap in computing power across the African continent. He points out that even collectively, more than 50 African countries lack the necessary computing power to train AI models locally, which is essential for developing indigenous AI capabilities.


Evidence

He asks whether, collectively, the more than 50 countries on the continent have the necessary computing power to train AI models on their own data, implying the answer is no.


Major discussion point

Technical Infrastructure and Capacity Building


Topics

Infrastructure | Development | Economic


Over 1,000 African startups download APIs from foreign models daily, with training data leaving the continent permanently

Explanation

Lacina Kone reveals a concerning trend where African startups are heavily dependent on foreign AI models, downloading APIs from companies like OpenAI and DeepSeek daily. He warns that this creates a permanent data drain since AI training data cannot be retrieved once it leaves the continent.


Evidence

He states there are over a thousand startups on the continent downloading APIs from OpenAI’s frontier models and China’s DeepSeek daily, but these models are not hosted on the continent, and once AI systems are trained, the information can never be retrieved.


Major discussion point

Data Sovereignty and Cultural Preservation


Topics

Economic | Legal and regulatory | Development


Agreed with

– Matchiane Soueid Ahmed

Agreed on

Critical importance of data sovereignty and keeping African data under African control


Need to preserve 2000+ African languages through locally trained AI systems to prevent cultural bias

Explanation

Lacina Kone emphasizes the importance of preserving Africa’s linguistic diversity through locally trained AI systems. He warns that if African languages are trained by external entities rather than Africans themselves, even indigenous people will be impacted by cultural biases embedded in these systems.


Evidence

He mentions Africa has more than 2,000 languages, and if AI systems are trained on these languages but not by Africans themselves, even indigenous people will be affected by cultural biases when using AI.


Major discussion point

Data Sovereignty and Cultural Preservation


Topics

Sociocultural | Human rights | Development


Agreed with

– Neema Iyer
– Mlindi Mashologu

Agreed on

Importance of addressing bias in AI systems and ensuring diverse representation


Smart Africa created AI Council focusing on computing power, datasets, algorithms, governance, and market development

Explanation

Lacina Kone describes Smart Africa’s comprehensive approach to AI development through the creation of an AI Council that addresses five key areas. This council aims to harmonize policies across African countries and align with international AI governance frameworks while maintaining African values and priorities.


Evidence

He explains the AI Council for Africa looks at five things: computing power, datasets, algorithms (cultural bias), AI governance (harmonizing policies from 19 countries with national AI strategies), and market development.


Major discussion point

Multi-stakeholder Collaboration and Policy Harmonization


Topics

Legal and regulatory | Development | Economic


19 African countries have developed national AI strategies requiring harmonization while respecting sovereignty

Explanation

Lacina Kone reports that 19 African countries have already developed national AI strategies, and Smart Africa’s role is to harmonize these policies while respecting each country’s sovereignty. He emphasizes that the approach is not one-size-fits-all but ensuring all strategies fit together with common denominators.


Evidence

He mentions about 19 countries in Africa have developed their national AI strategy, and Smart Africa’s role is to harmonize those policies, aligned with UN AI governance and European safeguarding measures.


Major discussion point

Multi-stakeholder Collaboration and Policy Harmonization


Topics

Legal and regulatory | Development | Economic


Disagreed with

– Audience from Gambia

Disagreed on

Approach to AI policy development – harmonized continental framework vs. national sovereignty


AI can reach indigenous people in rural areas and educate them in their own languages

Explanation

Lacina Kone presents AI as a revolutionary tool that can overcome traditional barriers to education and services in rural Africa. He argues that AI can enable indigenous people in remote areas to become tech-savvy through education in their native languages, bypassing the need for formal university education.


Evidence

He explains that AI allows reaching indigenous people in rural areas and educating them in their own language so they become as tech-savvy as anyone who’s been to university, addressing the gap where Africa gained 1 billion people in 45 years while infrastructure didn’t grow proportionally.


Major discussion point

Education and Digital Literacy


Topics

Development | Sociocultural | Human rights


Agreed with

– Neema Iyer
– Shikoh Gitau

Agreed on

Urgent need for AI education and digital literacy at grassroots level


AI should focus on agriculture, healthcare, and education as most useful applications for Africa’s development needs

Explanation

Lacina Kone argues that Africa should prioritize AI applications in sectors most critical to its development needs rather than pursuing the most technologically advanced AI. He emphasizes that AI should address fundamental challenges in agriculture, healthcare, and education where Africa faces significant gaps.


Evidence

He states Africa is looking for the most useful AI, not the most powerful, focusing on agriculture, healthcare, and education, noting that in 45 years Africa gained 1 billion people but schools, hospitals, and banks didn’t increase proportionally.


Major discussion point

Economic Development and Innovation


Topics

Development | Economic | Sociocultural


Private sector investment essential as mobile network operators with data centers have existential need to engage in AI

Explanation

Lacina Kone emphasizes the critical role of private sector investment in Africa’s AI development, particularly highlighting mobile network operators who own data centers. He argues that these companies have an existential business need to engage in AI, making them natural partners for blended financing approaches.


Evidence

He mentions over half a dozen mobile network operators on the continent who own data centers, saying it’s existential for them to get into AI because, at the end of the day, it’s about making money.


Major discussion point

Economic Development and Innovation


Topics

Economic | Infrastructure | Development


Agreed with

– Shikoh Gitau

Agreed on

Need for blended financing approaches combining government, private sector, and international support



Shikoh Gitau

Speech speed

159 words per minute

Speech length

1562 words

Speech time

588 seconds

Private sector must create enabling environments and massive awareness campaigns while governments provide supportive policy frameworks

Explanation

Shikoh Gitau emphasizes the need for governments to understand AI’s potential and create conducive environments for private sector innovation. She argues that while governments should focus on policy and education, the private sector must take responsibility for creating awareness and demonstrating AI’s practical value to markets that don’t yet understand the technology.


Evidence

She mentions running AI awareness campaigns across six African countries targeting teachers directly, and governments providing free space for programming and requesting reports after pilots, showing concrete government support.


Major discussion point

AI Governance and Democratic Values in Africa


Topics

Economic | Development | Legal and regulatory


Disagreed with

– Neema Iyer

Disagreed on

Primary responsibility for AI awareness and education


Need for blended financing approach combining government, private sector, and donor investments in compute infrastructure

Explanation

Shikoh Gitau advocates for a blended financing model that combines private sector investment, government funding, and donor organization support for compute infrastructure. She emphasizes that while private sector can invest in expensive enterprise-grade computing, governments and donors must support early-stage researchers and startups who cannot afford such costs.


Evidence

She explains that private sector will invest in expensive compute facilities, but government and donor organizations must invest in earliest stages for researchers and startups who cannot afford enterprise-grade computing.


Major discussion point

Technical Infrastructure and Capacity Building


Topics

Economic | Infrastructure | Development


Agreed with

– Lacina Kone

Agreed on

Need for blended financing approaches combining government, private sector, and international support


Massive teacher training campaigns across six African countries showing high demand for AI education

Explanation

Shikoh Gitau describes successful AI literacy campaigns targeting teachers across six African countries, demonstrating significant demand for AI education. She reports a dramatic shift from initial fear about AI to requests for continuous training over six months, indicating the effectiveness of direct engagement approaches.


Evidence

She mentions running campaigns in six countries targeting teachers through their associations, noting that initial baseline studies showed fear of AI, but current feedback shows requests for continuous training over six months.


Major discussion point

Education and Digital Literacy


Topics

Development | Sociocultural | Capacity development


Agreed with

– Neema Iyer
– Lacina Kone

Agreed on

Urgent need for AI education and digital literacy at grassroots level



Audience

Speech speed

170 words per minute

Speech length

444 words

Speech time

156 seconds

Concern about countries working in silos rather than converging on common continental AI policy framework

Explanation

An audience member from Gambia expressed concern that African countries are developing individual AI policies in isolation rather than working together toward a common continental framework. They referenced the successful African Union Convention on Cyber Security and Personal Data Protection as a model for continental cooperation and questioned whether a similar approach could be taken for AI governance.


Evidence

The speaker referenced the African Union Convention on Cyber Security and Personal Data Protection as an example of successful continental policy convergence and questioned why AI policies couldn’t follow a similar approach.


Major discussion point

Multi-stakeholder Collaboration and Policy Harmonization


Topics

Legal and regulatory | Development | Economic


Disagreed with

– Lacina Kone

Disagreed on

Approach to AI policy development – harmonized continental framework vs. national sovereignty


Need to address youth concerns about AI taking over jobs through trust-building and proper education

Explanation

An audience member from Ghana highlighted the need to build trust among Africa’s young population regarding AI technology. They emphasized that policymakers need to address youth concerns about AI displacing jobs and create enabling environments that build confidence in AI’s potential benefits rather than fears about its threats.


Evidence

The speaker noted that Africa has one of the youngest populations in the world and that youth have challenges about whether AI is coming to take over their jobs, requiring trust-building measures from policymakers.


Major discussion point

Education and Digital Literacy


Topics

Development | Economic | Human rights



Ashana Kalemera

Speech speed

130 words per minute

Speech length

1492 words

Speech time

686 seconds

Africa remains underrepresented in global AI development and discourse despite AI’s transformative potential

Explanation

Ashana Kalemera argues that although AI has huge transformative potential for innovation and socio-economic development, African countries striving to build sovereign AI ecosystems remain underrepresented in global AI development and discourse. She emphasizes that locally driven solutions are constrained by limited investment and regulatory gaps.


Evidence

Locally driven solutions are constrained by limited investments in research, regulatory gaps, and the dominance of multinational tech companies


Major discussion point

AI Governance and Democratic Values in Africa


Topics

Development | Economic | Legal and regulatory


Risk of AI-driven digital neocolonialism is growing as global powers compete for technological influence

Explanation

Ashana Kalemera warns about the increasing risk of digital neocolonialism through AI as global powers compete for technological influence over Africa. She argues that concerns about digital exploitation, economic disparities in data processing, and low-wage labor markets in Africa are creating conditions for external control rather than local benefit.


Evidence

Concerns prevail about digital exploitation and economic disparities in data processing, model training, and low-wage labor markets in Africa


Major discussion point

Data Sovereignty and Cultural Preservation


Topics

Economic | Human rights | Development


African nations have unique opportunity to establish AI governance models rooted in fairness, transparency, and inclusion

Explanation

Ashana Kalemera presents an optimistic view that African nations currently have a unique opportunity to establish AI governance frameworks that align with local realities and normative considerations. She argues these frameworks can foster innovation while upholding democracy and human rights, ensuring AI serves local needs rather than external interests.


Evidence

These AI frameworks have the potential to align with local realities and normative considerations, foster innovation, and uphold democracy and human rights


Major discussion point

AI Governance and Democratic Values in Africa


Topics

Legal and regulatory | Human rights | Development


Inadequate governance frameworks risk deepening inequality, weakening democracy and reinforcing technological dependencies

Explanation

Ashana Kalemera identifies significant risks associated with poor AI governance, particularly how inadequate frameworks can exacerbate existing inequalities and democratic weaknesses. She warns that without proper governance, AI deployment could reinforce technological dependencies rather than building local capacity and sovereignty.


Evidence

Inadequate governance frameworks risk deepening inequality, weakening democracy and reinforcing technological dependencies


Major discussion point

AI Governance and Democratic Values in Africa


Topics

Human rights | Legal and regulatory | Development


Key priorities for AI governance include accountability, transparency, inclusion, infrastructure investment, and ensuring policies translate to reality

Explanation

Ashana Kalemera synthesizes the discussion to identify critical areas for prioritization in African AI governance. She emphasizes that successful AI governance requires not just policy development but practical implementation that addresses real-world challenges and serves diverse stakeholder needs.


Evidence

Areas for prioritization include pushing for accountability, public education, transparency, inclusion from language and gender perspectives, ensuring co-creation, infrastructure investments, multi-stakeholder consultation, impact assessments, funding, procurement, and ensuring policies are not just prescriptions but live up to reality


Major discussion point

Multi-stakeholder Collaboration and Policy Harmonization


Topics

Legal and regulatory | Development | Human rights


Agreements

Agreement points

Need for inclusive and transparent AI governance frameworks grounded in human rights and constitutional values

Speakers

– Neema Iyer
– Mlindi Mashologu
– Matchiane Soueid Ahmed

Arguments

Need for AI that is based on accountability, trust, and a big component of public education


Need for transparent, inclusive AI governance frameworks grounded in constitutional values and public participation


AI governance must be co-created with African voices and grounded in human rights with civic oversight


Summary

All three speakers emphasize the critical importance of developing AI governance frameworks that are transparent, inclusive, and firmly rooted in human rights principles and constitutional values, with strong public participation and civic oversight.


Topics

Legal and regulatory | Human rights | Development


Importance of addressing bias in AI systems and ensuring diverse representation

Speakers

– Neema Iyer
– Mlindi Mashologu
– Lacina Kone

Arguments

Need for impact assessments, regional coalitions, investment in ethical and open-source alternatives that work within our context


Importance of removing bias from datasets and including diverse demographics in AI training


Need to preserve 2000+ African languages through locally trained AI systems to prevent cultural bias


Summary

Speakers agree on the critical need to address bias in AI systems through diverse representation in datasets, cultural preservation, and ethical alternatives that reflect African contexts and realities.


Topics

Human rights | Sociocultural | Legal and regulatory


Critical importance of data sovereignty and keeping African data under African control

Speakers

– Lacina Kone
– Matchiane Soueid Ahmed

Arguments

Over 1,000 African startups download APIs from foreign models daily, with training data leaving the continent permanently


Data must remain in African hands as digital sovereignty is as important as territorial sovereignty


Summary

Both speakers strongly emphasize that data sovereignty is fundamental to African AI development, warning against the permanent loss of African data to foreign systems and asserting that digital sovereignty is as crucial as territorial sovereignty.


Topics

Legal and regulatory | Economic | Human rights


Need for blended financing approaches combining government, private sector, and international support

Speakers

– Lacina Kone
– Shikoh Gitau

Arguments

Private sector investment essential as mobile network operators with data centers have existential need to engage in AI


Need for blended financing approach combining government, private sector, and donor investments in compute infrastructure


Summary

Both speakers advocate for comprehensive financing models that leverage private sector capabilities, government support, and international partnerships to address Africa’s AI infrastructure and development needs.


Topics

Economic | Infrastructure | Development


Urgent need for AI education and digital literacy at grassroots level

Speakers

– Neema Iyer
– Lacina Kone
– Shikoh Gitau

Arguments

Critical need to democratize digital literacy and take AI conversations to grassroots level in accessible language


AI can reach indigenous people in rural areas and educate them in their own languages


Massive teacher training campaigns across six African countries showing high demand for AI education


Summary

All speakers agree on the fundamental importance of making AI education accessible to all levels of society, particularly in local languages and through creative, accessible approaches that reach rural and indigenous communities.


Topics

Development | Sociocultural | Capacity development


Similar viewpoints

Both speakers acknowledge the critical infrastructure gaps in computing power and digital connectivity across Africa as fundamental challenges that must be addressed for successful AI implementation.

Speakers

– Lacina Kone
– Mlindi Mashologu

Arguments

Africa lacks necessary computing power collectively across 50+ countries to train AI models locally


Digital divide and limited compute capabilities remain deep-rooted challenges requiring significant investment


Topics

Infrastructure | Development | Economic


Both speakers emphasize the importance of multi-stakeholder participation in AI governance, ensuring that diverse voices beyond government officials are included in policy development processes.

Speakers

– Neema Iyer
– Matchiane Soueid Ahmed

Arguments

Need for regional coalitions and bringing multiple stakeholders beyond just governments into AI policy discussions


AI governance must be co-created with African voices and grounded in human rights with civic oversight


Topics

Legal and regulatory | Human rights | Development


Both speakers advocate for proactive, intentional approaches to AI development that focus on practical utility and user needs rather than simply following global trends or being passive recipients of technology.

Speakers

– Lacina Kone
– Shikoh Gitau

Arguments

Africa needs deliberately intentional approach rather than passive optimism, focusing on user-centric AI models


Private sector must create enabling environments and massive awareness campaigns while governments provide supportive policy frameworks


Topics

Development | Economic | Legal and regulatory


Unexpected consensus

Strong agreement on the existential threat of data dependency on foreign AI systems

Speakers

– Lacina Kone
– Neema Iyer
– Matchiane Soueid Ahmed

Arguments

Over 1,000 African startups download APIs from foreign models daily, with training data leaving the continent permanently


Critical need to track funding sources and understand foreign interests shaping Africa’s AI agenda


Data must remain in African hands as digital sovereignty is as important as territorial sovereignty


Explanation

Unexpectedly, speakers from different sectors (intergovernmental, civil society, and government) showed remarkable consensus on the urgency of addressing Africa’s dependency on foreign AI systems, with specific concern about the irreversible nature of data loss to external platforms.


Topics

Legal and regulatory | Economic | Human rights


Consensus on the need for practical, user-centric AI rather than pursuing the most advanced technology

Speakers

– Lacina Kone
– Shikoh Gitau
– Mlindi Mashologu

Arguments

AI should focus on agriculture, healthcare, and education as most useful applications for Africa’s development needs


Private sector must create enabling environments and massive awareness campaigns while governments provide supportive policy frameworks


Need for regulatory sandboxes and support for startups through AI-enabled economic transformation


Explanation

There was unexpected alignment across different stakeholder groups on prioritizing practical, contextually relevant AI applications over pursuing cutting-edge technology, suggesting a mature understanding of Africa’s development priorities.


Topics

Development | Economic | Sociocultural


Overall assessment

Summary

The speakers demonstrated remarkable consensus on key foundational issues including the need for inclusive governance frameworks, data sovereignty, addressing bias and cultural preservation, infrastructure investment, and education. There was strong agreement on the importance of multi-stakeholder participation and the urgency of addressing Africa’s dependency on foreign AI systems.


Consensus level

High level of consensus with significant implications for coordinated continental AI strategy. The alignment across government, civil society, private sector, and intergovernmental perspectives suggests a mature understanding of challenges and a shared vision for African-led AI development. This consensus provides a strong foundation for developing harmonized policies and collaborative approaches to AI governance across the continent.


Differences

Different viewpoints

Approach to AI policy development – harmonized continental framework vs. national sovereignty

Speakers

– Lacina Kone
– Audience from Gambia

Arguments

19 African countries have developed national AI strategies requiring harmonization while respecting sovereignty


Concern about countries working in silos rather than converging on common continental AI policy framework


Summary

While the audience member from Gambia advocated for a unified continental AI policy similar to the AU Cyber Security Convention, Lacina Kone defended the current approach of harmonizing diverse national strategies while respecting each country’s sovereignty, stating ‘not one size should fit all, but all sizes should fit together’.


Topics

Legal and regulatory | Development | Economic


Primary responsibility for AI awareness and education

Speakers

– Shikoh Gitau
– Neema Iyer

Arguments

Private sector must create enabling environments and massive awareness campaigns while governments provide supportive policy frameworks


Critical need to democratize digital literacy and take AI conversations to grassroots level in accessible language


Summary

Shikoh emphasized the private sector’s role in creating massive awareness campaigns and working directly with communities, while Neema focused on the need for democratized digital literacy through accessible education in local languages, suggesting different primary actors for education efforts.


Topics

Development | Sociocultural | Capacity development


Unexpected differences

Limited explicit disagreement on fundamental AI risks and benefits

Speakers

– All speakers

Arguments

Various arguments about AI governance, infrastructure, and development approaches


Explanation

Surprisingly, there was minimal direct disagreement about the fundamental nature of AI risks or benefits. All speakers acknowledged both opportunities and challenges, with most disagreements focusing on implementation approaches rather than whether AI should be pursued or the severity of risks.


Topics

AI Governance and Democratic Values in Africa | Development | Human rights


Overall assessment

Summary

The discussion showed remarkably high consensus on fundamental goals (inclusive AI governance, data sovereignty, capacity building), with disagreements primarily focused on implementation approaches, institutional responsibilities, and policy mechanisms rather than core objectives.


Disagreement level

Low to moderate disagreement level. The speakers demonstrated strong alignment on overarching goals but differed on tactical approaches, suggesting a mature policy discussion where stakeholders share common vision but debate optimal pathways. This indicates positive potential for collaborative implementation despite methodological differences.




Takeaways

Key takeaways

Africa needs deliberately intentional rather than passive approaches to AI development, focusing on user-centric models that serve local needs in agriculture, healthcare, and education


AI governance frameworks must be co-created with African voices, grounded in human rights, and include transparent, inclusive processes with civic oversight


Data sovereignty is critical – African data must remain in African hands as digital sovereignty is as important as territorial sovereignty


Blended financing approaches combining government, private sector, and donor investments are essential for building necessary compute infrastructure


Multi-stakeholder collaboration is crucial, requiring involvement of civil society, youth, women’s rights organizations, trade unions, and other groups beyond just governments


Digital literacy and AI education must be democratized and delivered in local languages at grassroots levels to build public trust


Africa has over 2000 languages that need preservation through locally trained AI systems to prevent cultural bias and digital neocolonialism


Civil society plays essential roles as watchdogs, advocates, educators, and storytellers while documenting lived experiences of AI impacts


Resolutions and action items

Smart Africa to share benchmarking analysis of 19 national AI strategies with Council of ICT Ministers starting July


Smart Africa’s AI Council for Africa to focus on five areas: computing power, datasets, algorithms, AI governance, and market development


South Africa to leverage its G20 presidency to advance AI governance frameworks with focus on solidarity, sustainability, and equality


Continued teacher training campaigns across six African countries based on high demand demonstrated in pilot programs


Development of African AI Governance Toolkit (available at qbit.africa) and African AI Maturity Index (at datawall.africa) for government use


Implementation of mandatory public interest impact assessments before AI system deployment with ongoing monitoring


Establishment of regulatory sandboxes and startup support mechanisms for AI-enabled economic transformation


Unresolved issues

How to effectively harmonize 19 different national AI strategies while respecting individual country sovereignty


Addressing the fundamental infrastructure gap affecting the 20% of the population living in remote areas across the continent


Preventing over 1000 African startups from continuing to use foreign AI models that permanently extract training data


Building sufficient collective computing power across 50+ African countries to train AI models locally


Operationalizing beautiful policies and frameworks – moving from high-level discussions to ground-level implementation


Addressing youth concerns about AI taking over jobs and building trust in AI technology among African populations


Determining realistic funding levels for Africa’s AI development compared to America’s $500 billion and France’s $200 billion commitments


Establishing clear definitions and metrics for what ‘democratizing AI’ actually means in African contexts


Suggested compromises

Blended financing model combining government, private sector, and donor investments rather than relying solely on public funding


Flexible policy harmonization approach where ‘each size should fit together’ rather than ‘one size fits all’ to respect sovereignty while enabling collaboration


Graduated approach to AI governance where countries can start with simple principles or policy frameworks rather than requiring full strategies immediately


Focus on ‘most useful’ rather than ‘most powerful’ AI applications suited to African development priorities


Parallel development of both enterprise-grade computing for private sector and accessible computing for researchers and startups


Government role focused on creating enabling environments and supportive policies while private sector leads innovation and awareness campaigns


Thought provoking comments

We cannot be passively, we can no longer be passively optimistic. We have to be deliberately intentional… Africa today is the number one frontier in terms of availability of data to train AI… But they’re training those model. Those models are not located on our continent. They’re located outside the continent. And AI system is the way once you train the server, you can never get the information back again.

Speaker

Lacina Kone


Reason

This comment is deeply insightful because it reframes the entire AI discussion from a reactive to a proactive stance, while highlighting a critical paradox: Africa has abundant data but lacks control over how it’s processed. The irreversible nature of AI training he mentions introduces urgency to the sovereignty discussion.


Impact

This comment fundamentally shifted the conversation from theoretical policy discussions to concrete action items. It led other speakers to focus more on practical implementation and sovereignty issues, and established the framework for discussing the five pillars of the AI Council (computing power, data sets, algorithms, governance, and market).


Africa is not looking for the most powerful AI, it’s looking for the most useful one, looking at the agriculture, looking at the healthcare and looking at the education… AI allows us today to actually reach indigenous people in the rural area, to educate them in their own language, so they become tech-savvy like anyone who’s been to the university.

Speaker

Lacina Kone


Reason

This comment is thought-provoking because it challenges the dominant narrative of AI competition based on computational power and instead proposes a value-based approach centered on utility and inclusion. It redefines success metrics for African AI development.


Impact

This perspective influenced subsequent speakers to focus more on contextual applications and inclusive design. It helped establish a distinctly African approach to AI that prioritizes social impact over technological supremacy, which other panelists then built upon.


AI is already undermining governance in Africa, and we’re seeing this through automated disinformation campaigns, the eroding of public trust, manipulation of political discourse, and surveillance tools that are used to stifle voices at scale… we need AI that is based on accountability, trust, and a big component of public education.

Speaker

Neema Iyer


Reason

This comment is insightful because it grounds the discussion in current reality rather than future possibilities, providing concrete examples of AI’s negative impacts while maintaining a balanced view that acknowledges both benefits and harms.


Impact

This comment set a critical tone for the entire discussion, ensuring that subsequent speakers addressed both opportunities and risks. It influenced the conversation to consistently include safeguards, accountability measures, and the importance of civil society oversight in their responses.


Every time I hear democratizing this, democratizing that, I say, what’s the definition of democracy you’re talking about? And for me, democratizing means that it’s enabling everybody everywhere to have access to the same opportunities and resources… who is drafting these policies? What agenda do they have? Do they have Africa at heart when they are doing this?

Speaker

Shikoh Gitau


Reason

This comment is thought-provoking because it challenges the casual use of ‘democratization’ rhetoric and demands specificity about whose interests are being served. It introduces a critical lens about power dynamics in policy-making processes.


Impact

This comment deepened the analytical level of the discussion by questioning fundamental assumptions about who controls the AI narrative in Africa. It led to more nuanced discussions about agency, representation, and the need for African-led solutions rather than externally imposed frameworks.


We really need to democratize digital literacy. I feel like, you know, we’re in Oslo here having this conversation that doesn’t apply to most people living on the continent… We can’t always use these huge data governance languages, and then, you know, people’s eyes glaze over because they don’t really know what you’re saying.

Speaker

Neema Iyer


Reason

This meta-commentary is insightful because it critiques the very nature of high-level policy discussions while participating in one, highlighting the disconnect between elite discourse and grassroots reality. It calls for accessible communication and inclusive participation.


Impact

This self-reflective comment prompted other speakers to consider implementation and accessibility more seriously. It influenced the discussion to focus more on practical steps for community engagement and the need to translate policy into actionable, understandable terms for ordinary citizens.


Data must remain in African hands… sovereignty is not just territorial, but also digital.

Speaker

Matchiane Soueid Ahmed


Reason

This comment is thought-provoking because it expands the concept of sovereignty beyond traditional geographical boundaries to include digital assets, framing data control as a fundamental aspect of national independence in the AI era.


Impact

This comment reinforced and crystallized the sovereignty theme that ran throughout the discussion, providing a clear conceptual framework that other speakers could reference. It helped establish data sovereignty as a non-negotiable principle for African AI development.


Overall assessment

These key comments fundamentally shaped the discussion by establishing three critical frameworks: (1) the urgency of moving from passive to intentional action regarding AI governance, (2) the need to define African success metrics based on utility rather than power, and (3) the importance of questioning who controls the AI narrative and ensuring African agency. The comments created a progression from identifying current harms to proposing African-centered solutions, while consistently challenging assumptions about democratization, sovereignty, and inclusion. The discussion evolved from theoretical policy considerations to practical implementation strategies, with each insightful comment building upon previous ones to create a comprehensive vision for African AI governance that prioritizes local needs, democratic values, and genuine self-determination.


Follow-up questions

How do we operationalize beautiful policies and frameworks into actual ground-level implementation?

Speaker

Neema Iyer


Explanation

This addresses the critical gap between policy development and practical implementation, which is essential for ensuring AI governance frameworks actually work in practice rather than just existing on paper.


How can we develop funding models for AI innovation that are sustainable and bring young people into product creation?

Speaker

Neema Iyer


Explanation

This highlights the need for research into alternative funding mechanisms that support local innovation and youth engagement, moving beyond traditional donor-dependent models.


What are the foreign interests and agendas of tech giants in shaping Africa’s AI agenda?

Speaker

Neema Iyer


Explanation

Understanding the motivations and influences of external actors is crucial for developing truly sovereign AI strategies that serve African interests rather than external ones.


How do we move AI governance from paper to pavement – from policy documents to actual implementation?

Speaker

Shikoh Gitau


Explanation

This emphasizes the implementation gap that exists between developing AI policies and actually deploying them effectively in real-world contexts.


What realistic steps can governments take to ensure AI readiness, particularly regarding digital literacy and infrastructure?

Speaker

Peter King (Audience member from Liberia)


Explanation

This calls for concrete, actionable research on practical steps governments can take to build foundational capabilities needed for AI deployment.


How can we create convergence across African countries for AI policy instead of working in silos?

Speaker

Alhaji (Audience member from Gambia)


Explanation

This addresses the need for research into harmonization mechanisms that respect national sovereignty while enabling continental coordination.


What specific measures can build trust among African youth regarding AI and job displacement concerns?

Speaker

Lydia Lamisa Akamvareba (Audience member from Ghana)


Explanation

This highlights the need for research into youth perceptions of AI and effective strategies for building confidence rather than fear about AI’s impact on employment.


How do we assess and measure AI maturity across different African contexts?

Speaker

Shikoh Gitau


Explanation

Understanding where countries stand in terms of AI readiness is essential for developing targeted interventions and tracking progress over time.


How can we ensure that AI models respond to African healthcare needs and realities?

Speaker

Shikoh Gitau


Explanation

This points to the need for research into developing and benchmarking AI systems that are specifically designed for African contexts and needs.


How do we completely change assessment methods in education given AI’s impact on traditional evaluation?

Speaker

Neema Iyer


Explanation

This highlights the need for research into new educational assessment frameworks that account for AI tools and changing skill requirements.


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Lightning Talk #180 Cross Border Collaboration for Women's Digital Safety


Session at a glance

Summary

This discussion focused on women’s digital safety challenges in the Global South, presented by Safaa Sarayrah, a digital safety trainer from Jordan who works with activists and human rights defenders in the MENA region and Africa. Sarayrah emphasized that women in conflict-affected areas of the Global South face severe online violence, harassment, blackmail, and hate speech, with limited support systems available to help them. She highlighted that these digital threats can have life-threatening consequences in traditional communities, where issues like leaked private photos could lead to honor killings.


The presenter identified several key barriers preventing women from accessing digital safety resources, including language barriers since most resources are only available in English, lack of basic technical knowledge about security measures like two-factor authentication and strong passwords, and insufficient collaboration between governments, tech companies, and NGOs. Sarayrah cited alarming statistics, noting that 60% of women in the MENA region face online violence, with 80% of Lebanese women experiencing online harassment. She shared examples of Sudanese women displaced by war who struggle to access digital safety training due to language and resource limitations.


The discussion included input from audience members, including Lila from Brazil who mentioned her work providing digital safety training in Portuguese locally, and Catherine Muma, a Kenyan senator who inquired about policy frameworks for legislative protection. Sarayrah proposed solutions at three levels: governments creating suitable local laws, tech companies providing support in local languages, and NGOs offering awareness sessions in native languages. The discussion concluded with acknowledgment that while some regional efforts exist, there remains a critical need for comprehensive, cross-border collaboration to address women’s digital safety in the Global South.


Keypoints

**Major Discussion Points:**


– **Language barriers and accessibility of digital safety resources**: Most digital safety resources are available only in English or major languages, creating significant barriers for women in the Global South who speak local languages and cannot access critical safety information or report harassment on platforms.


– **Cross-border collaboration needs between governments, tech companies, and NGOs**: There is a critical lack of coordination between these three sectors to create comprehensive support systems for women facing digital threats, particularly those displaced by conflicts and wars.


– **Severe consequences of digital violence in traditional communities**: Unlike in other regions, digital harassment, blackmail, and privacy violations can lead to life-threatening situations for women in the Global South, including honor-based violence, making digital safety a matter of survival rather than just privacy.


– **Gap in policy frameworks and legislative protection**: There is insufficient legal infrastructure to protect women from online violence, with limited channels between NGOs working on digital safety and government bodies that could implement protective legislation.


– **Basic digital literacy challenges**: Many women, especially those displaced by conflict or living in traditional communities, lack fundamental knowledge about digital security practices like two-factor authentication, strong passwords, and identifying phishing attempts.


**Overall Purpose:**


The discussion aimed to highlight the urgent need for comprehensive digital safety support systems for women in the Global South, advocating for multi-stakeholder collaboration to address the unique vulnerabilities these women face online, particularly in conflict-affected regions.


**Overall Tone:**


The tone was serious and urgent throughout, with the presenter expressing frustration about existing gaps in support systems. The discussion maintained a problem-focused approach, with the presenter seeking solutions and input from the audience. The tone became slightly more collaborative when audience members shared their experiences, but remained consistently concerned about the gravity of the issues being discussed.


Speakers

– **Safaa Sarayrah**: Digital safety trainer for activists and human rights defenders, specializing in the MENA region and Africa/Global South. From Jordan, Middle East.


– **Audience**: Multiple audience members participated in the discussion (specific roles/expertise not mentioned for this general category)


**Additional speakers:**


– **Lila**: From Brazil, part of trans feminist network of digital care, conducts digital safety trainings in Portuguese for Brazil


– **Catherine Muma**: Senator from Kenya


Full session report

# Comprehensive Report: Women’s Digital Safety Challenges in the Global South


## Executive Summary


This presentation by Safaa Sarayrah, a digital safety trainer from Jordan specializing in the MENA region and Africa, addressed the critical challenges facing women in the Global South regarding digital safety. Sarayrah highlighted severe gaps in support systems, language accessibility, and cross-sector collaboration, emphasizing that digital threats in these regions can escalate to life-threatening situations. The session included brief contributions from audience members, notably Lila from a trans feminist network of digital care and Senator Catherine Muma from Kenya, who provided additional perspectives on current efforts and policy needs.


## Key Presenter and Audience Contributors


**Safaa Sarayrah** served as the primary presenter, bringing extensive experience as a digital safety trainer working with activists and human rights defenders across the MENA region and Africa. Her presentation was characterized by urgency regarding existing gaps in support systems, emphasizing the life-or-death nature of digital safety issues in traditional communities. She specifically mentioned her work training Sudanese women who moved to Kenya and Uganda due to war.


**Lila** briefly described her work with a trans feminist network of digital care, providing digital safety training in Portuguese for Brazil, illustrating both grassroots initiatives and their geographical limitations.


**Senator Catherine Muma from Kenya** asked about specific policy recommendations, inquiring about legislative frameworks for women’s protection online.


## Critical Challenges Identified


### The Life-or-Death Nature of Digital Violence


Sarayrah established that digital safety concerns in the Global South can have fatal consequences. She emphasized that digital harassment, blackmail, and privacy violations can lead to physical violence or death in traditional communities. As she stated: “if someone take my like private photos or videos, maybe something it’s lead to killing this women in the global south, unfortunately. So it’s a life matter.”


### Statistical Examples Cited


Sarayrah provided examples of the scope of the problem, citing that “60% of the women in [the MENA] region facing online violence” and “80% women in Lebanon facing a violence online” to illustrate the widespread nature of digital violence against women in the region.


### Language Barriers as Systemic Exclusion


A central theme was the critical barrier posed by language accessibility. Sarayrah highlighted that most digital safety resources are available only in English, creating systematic exclusion for women who speak local languages. This barrier extends to platform functionality—women cannot report harassment or access help because they cannot navigate English-language interfaces and support systems.


She specifically asked the audience: “Do you know any platform that provide this kind of support in the local language?” and noted that even when she searches for solutions, “I cannot find anything in Arabic.”


### Knowledge Gaps in Basic Digital Security


Sarayrah identified significant gaps in fundamental digital literacy, particularly among displaced women. She mentioned specific areas where women lack knowledge:


– Two-factor authentication


– Creating strong passwords


– Recognizing phishing attempts and avoiding clicking unknown links


These gaps are particularly pronounced among women displaced by conflict, such as the Sudanese women she trains who moved to Kenya and Uganda due to war.


### Fragmented Support Systems and Lack of Regional Coordination


Sarayrah emphasized the absence of regional coordination, asking the audience: “Do you know if there is any regional hub in the global south that working on supporting women digital safety?” She noted that organizations typically work within their own countries rather than collaborating across the Global South, limiting effectiveness.


## Stakeholder Challenges


### Civil Society and NGO Limitations


Sarayrah noted that many NGOs do not prioritize digital safety training, and negotiations with tech companies have shown limited success. She identified a critical gap: “Unfortunately we like do not have like open like a channel between us and the government’s and what we that missed unfortunately.”


### Governmental Policy Gaps


Senator Muma’s inquiry about whether work had been done “to define how policy would look like” highlighted governmental interest but uncertainty about effective approaches. Sarayrah emphasized the need for “suitable local laws in each country to help women understand their rights and access help.”


### Tech Company Inadequacies


The presentation highlighted that tech companies do not provide adequate support or resources in local languages, with their commercial priorities often conflicting with user safety needs in the Global South.


## Proposed Solutions


### Three-Level Intervention Strategy


Sarayrah proposed comprehensive action at three levels:


1. **Governmental Level**: Creating suitable local laws in each country


2. **Tech Company Level**: Providing support and resources in local languages


3. **NGO Level**: Offering awareness sessions and training in native languages


### Regional Hub Development


A key proposal was establishing regional hubs connecting governments, tech companies, and NGOs to support women’s digital safety across the Global South, addressing current fragmentation while maintaining local sensitivity.


### Immediate Practical Steps


– Creating digital safety resources and platform guides in local languages


– Conducting community assessments to understand specific risks


– Developing culturally appropriate policy frameworks


– Establishing formal channels between NGOs and governments


## Unresolved Challenges


Several fundamental challenges remain unaddressed:


– Lack of formal channels between NGOs and governments


– Tech companies’ limited responsiveness to Global South needs


– Insufficient infrastructure for local language support


– Particular vulnerabilities of displaced populations


## Conclusion


This presentation illuminated the urgent nature of women’s digital safety challenges in the Global South, where digital violence can have fatal consequences. While Sarayrah identified clear needs for multi-stakeholder collaboration, language accessibility, and regional coordination, significant gaps remain in implementation and resource allocation. The session demonstrated both the severity of these issues and the potential for collaborative solutions, provided stakeholders can develop more effective cross-sector cooperation mechanisms.


The life-or-death nature of digital safety issues in these contexts demands immediate attention from governments, tech companies, and civil society organizations, requiring not only technical solutions but fundamental changes in how these stakeholders prioritize and support women in the Global South.


Session transcript

Safaa Sarayrah: Hello, everyone. To this slide talking about the women’s digital safety, especially in the Global South. Welcome. And first I want to introduce myself. My name is Safaa Sarayrah. I’m from Jordan, Middle East. And I work as a digital safety trainer for activists, human rights defenders, especially in the MENA region and Africa, Global South. First, when I come my idea to come to the idea of that, because from my work, respect of point, because we are working in the Middle East, and we are working on the Global South, which we have a lot of wars, unfortunately, and a lot of a lot of conflicts. And as a woman, yeah, and a girl in the Global South, we are suffering a lot of the online violence, harassment, blackmail, whatever, hate speech, because we are working, like for women, we are working on the human rights, or the women also they working on the journalists, especially in the hot and the wars, like spot. So I wanted to talk about some points in this session. And I would like from the people they are so like to share also their experience if they have. First thing that we will talk about that collaboration, the cross border in the Global South, and the rest of the world. Why this matter? Because in the Global South, this the risk and the threat of even the digital threat, or the physical digital threat, because they are women, they are traveling, they are like moving from, like a war or a conflict country to another country, and they are facing a lot of risk. And they do not have that support, much support, especially in the digital, because we do not have that support. For example, if I as a journalist, or a human rights defender, or a woman, and I face a harassment, or a hate speech, or a digital like a threat, I don’t know where to go. Who can help me? Who can support me? So this is like we need the collaboration, why this is very, very important issues and matters.
Okay, so what is the key points to get from this session or from these conferences, how we can open channels between the governments, between the tech company and the NGOs? Because this collaboration is like that the secret point to make like a system to support women in the global south. Let’s take like the Sudan case. I trained a lot of Sudanese women. They are because the war in the Sudan, they are moved from Sudan to Kenya, or Uganda. And when I train them on the like a digital security safety, even the basic things is hard for them. Because like the language barrier for them, because most of the resources are in English, or in other language. And as you know, Sudan, they have a lot of language inside it. They took some Arabic, English newbies, and they do not have like the much knowledge how to access the resources. So this is a weak point we can work on it, like to make a platforms, especially from the big tech company, supporting the local languages for the women in the global south, because they have a languages barrier. They do not know how, for example, to report a harassment on the platform. They do not know. They do not know that they are, for example, a guides or something. Let me give you an example. Most of the women I trained them, they do not have how like to active, for example, the 2FA. It’s a basic thing, or to like to establish or setting up a strong password, or when they have like a links and their WhatsApp and their other social media, they just click connect without know who the sender, who it’s safe to click links or not. So they facing a lot of issues, and they facing a lot of phishing and hacking in their forms. And especially in the global south, and like a traditional community, it’s not easy to facing something like this. For example, if someone take my like private photos or videos, maybe something it’s lead to killing this women in the global south, unfortunately. So it’s a life matter. 
It’s not like, for example, in the other countries. So another case in MENA region as a study of the UN women, that 60% of the women in MENA region facing online violence, and this is a high rate. Another example, 80% women in Lebanon facing a violence online, even harassment, even blackmail, even hate speech. Especially women, they are working in a critical roles in politics. They are facing the hate speech, who look, they wanna voting for her and something like a lot of things. Let me take, so this is the issue, and this is why it’s matter. So what we can do? We can do like searching for solutions, depending on three levels. On the government levels, they build like a suitable local laws in each country that help women to take their rights or to know what they have to do if they have issues, have a risk, where I have to go, who can help me. Even the psychologist thinks on the women when they are facing, they are afraid to speak about these issues because in the community that if you speak about some like, for example, if you’re facing sexual online like a threat, if you speak there, you have a question marks and the community look at you. So this is like also one of the points. We miss this supporting. Also the tech company, they do not have like the supporting like steps or guides in the local languages, especially in the global south. So we need like to establish these like a regional hub that connecting the governments, the tech hub and the NGO. As NGO, what we can do? We can like give the women like awareness sessions in their local languages so they can know even the basic steps, the strong password, the 2FA, how to check the links, the attachments. If she like facing any issue regarding their digital safety, they like find a people who support these women. We’re working on this and we give a lot like of that digital safety like a training in Sudan, in the Middle East, in Iraq and other issues.
But unfortunately, regarding to some like problems regarding, for example, to the FATCAN that recently happened, a lot of women, they are now without any digital security support and they have to search about the solutions by themselves. And this is hard for them because they are facing a lot of issues. They are in our countries, they are moving from place to other place. they have a pressure on them especially in their work if they are work as a journalist or human right defenders they have a little bit of knowledge about the technology what about the normal women the normal women they are like stay as a home and they care about the children they do not have any like a little bit of the tech knowledge they’re just using the phone and they’re just clicking on anything and they facing a lot of issues but no one help them I wanted to ask to attend this if anyone know that any platform that like treating this kind of risk in the local languages unfortunately no and this is like like a big issue because there is a resources but they can’t reach it and if I know what I want you should I have a language barrier even for me as a tech person I have a language barrier and sometimes I need to translate the resources to my language to have like a full understanding how like to implement these solutions exactly also I have like as an NGO they are focusing on like some another topics it’s important I know but the digital like safety tips and training awareness it’s not like take that much care from the NGO unfortunately but it is very important I want to also to ask that attendance like especially who are working on the global south is there any regional hub in the global south they are work on this issue yeah can you yeah just one minute like please.


Audience: I’m Lila from Brazil. I’m part of a trans-feminist network of digital care, so we do that kind of training and everything, but in Portuguese and for our country only. So it’s a local thing for a huge country, but that’s it.


Safaa Sarayrah: Yes, and this is the most important point, because she said, “I work only in my country.” What about the other countries in the Global South? What about Africa, North and South? They have different issues, different traditions, and many languages, even within the same country. So we need solutions. We could, for example, create a regional hub for the whole Global South, or at least for countries that share common languages or traditions, and use it to open a channel with governments to make suitable laws for human rights, especially women’s digital safety rights. We should also encourage tech companies to provide support in local languages on their platforms, so they can help ordinary women deal with these issues in their own languages. And we should encourage NGOs to put more effort into these countries, especially those facing wars, like Sudan and Lebanon; unfortunately, in the Middle East we have a lot of these sad situations. So we need more effort, especially in digital safety. In the MENA region in particular, we face a lot of hate speech against women working in politics: “you are a woman, why are you talking about politics,” and so on. I will now open the floor for questions, examples, or any feedback, because I would be happy to hear from you. Yes, please.


Audience: Thank you. My name is Catherine Muma, and I am a senator from Kenya. Thank you so much for your presentation; the issues you raise are real issues in the Global South. I would like to know whether you have done any work that helps to define what policy would look like. If we were to review our penal laws to include offenses committed through the internet and social media, have you and your organization thought through the kind of legislative and policy frameworks you would want to see governments and parliaments pass for the protection of women, and, I believe, of children as well?


Safaa Sarayrah: Thank you so much for your question; this is an important point. Unfortunately, we do not have an open channel between us and governments, and that is what is missing. But we have had some negotiations with tech companies such as Twitter: we have had many meetings with them about creating suitable policies that would be fair and good for countries in the Global South. As you know, though, they are commercial companies with their own standards, and we do not hear much from them, but we keep working on it. At least we have, if not a formal policy, a policy framework that we as an NGO have established for ourselves, for the people who work with us, and for the women and children we serve. And whenever we go to give digital training to a community in the Global South, we meet with them, do research and a pre-assessment to understand all the risks and all the points they care about, and then, based on that pre-assessment, we create a policy that fits them specifically. So we work with particular groups and particular civil society organizations, but unfortunately we do not have a comprehensive policy framework even for one country, let alone for the whole Global South. If there are any other questions or feedback, I would be happy to hear them. Okay, thank you so much.



Safaa Sarayrah

Speech speed

126 words per minute

Speech length

1962 words

Speech time

928 seconds

Women in the Global South face severe online violence, harassment, blackmail, and hate speech, especially those working in human rights and journalism

Explanation

Women in conflict-affected regions of the Global South experience significant digital threats including harassment, blackmail, and hate speech, particularly when they work as human rights defenders or journalists. These threats are especially severe in war zones and conflict areas where women are already vulnerable.


Evidence

Examples from Sudan where women moved to Kenya or Uganda due to war, and women working in politics facing hate speech questioning why they participate in political discourse


Major discussion point

Women’s Digital Safety Challenges in the Global South


Topics

Gender rights online | Human rights principles


60% of women in the MENA region face online violence, and 80% of women in Lebanon experience online harassment

Explanation

Statistical data shows extremely high rates of online violence against women in the Middle East and North Africa region. Lebanon has particularly alarming rates with 80% of women experiencing some form of online harassment, violence, blackmail, or hate speech.


Evidence

UN Women study showing 60% rate in MENA region and 80% rate specifically in Lebanon


Major discussion point

Women’s Digital Safety Challenges in the Global South


Topics

Gender rights online | Human rights principles


Digital threats can lead to physical violence or even death in traditional communities, making it a life-or-death matter

Explanation

In traditional communities within the Global South, digital harassment such as sharing private photos or videos can escalate to physical violence or honor killings. This makes digital safety not just a privacy concern but literally a matter of life and death for women in these contexts.


Evidence

Example given of private photos or videos being shared potentially leading to killing of women in traditional Global South communities


Major discussion point

Women’s Digital Safety Challenges in the Global South


Topics

Gender rights online | Human rights principles


Women lack basic digital security knowledge, such as two-factor authentication, strong passwords, and safe link practices

Explanation

Many women in the Global South lack fundamental digital security skills and knowledge. They don’t know how to implement basic security measures like two-factor authentication or create strong passwords, and they often click on suspicious links without understanding the risks.


Evidence

Examples of women not knowing how to activate 2FA, set up strong passwords, or safely handle links in WhatsApp and social media, leading to phishing and hacking incidents


Major discussion point

Women’s Digital Safety Challenges in the Global South


Topics

Cybersecurity | Capacity development


Most digital safety resources are in English, creating barriers for women who speak local languages

Explanation

Digital safety resources and guides are predominantly available in English, creating significant accessibility barriers for women in the Global South who speak various local languages. This language barrier prevents them from accessing crucial safety information and support.


Evidence

Sudan example where women speak Arabic, English, and various local languages but struggle to access English-language resources; trainer’s own experience of needing to translate resources to fully understand implementation


Major discussion point

Language Barriers and Resource Accessibility


Topics

Multilingualism | Digital access | Capacity development


Women cannot access help or report harassment due to language barriers on tech platforms

Explanation

Tech platforms lack support in local languages, preventing women from understanding how to report harassment or access help when facing digital threats. This creates a significant gap in protection and support systems.


Evidence

Examples of women not knowing how to report harassment on platforms due to language barriers and lack of guides in local languages


Major discussion point

Language Barriers and Resource Accessibility


Topics

Multilingualism | Gender rights online | Liability of intermediaries


Even digital safety trainers face language barriers when trying to understand and implement solutions

Explanation

The language barrier problem is so pervasive that even professional digital safety trainers struggle with accessing and understanding resources that are primarily in English. This highlights the systemic nature of the language accessibility problem in digital safety resources.


Evidence

Speaker’s personal experience as a digital safety trainer needing to translate resources to her own language to fully understand implementation


Major discussion point

Language Barriers and Resource Accessibility


Topics

Multilingualism | Capacity development


There is insufficient support for women facing digital threats, with no clear channels for help

Explanation

Women in the Global South lack adequate support systems when facing digital threats, with no clear pathways to seek help or assistance. This creates a dangerous situation where women are left to handle serious digital security threats on their own.


Evidence

Speaker’s question about where to go for help when facing harassment, hate speech, or digital threats, and the lack of known platforms treating these risks in local languages


Major discussion point

Need for Cross-Border Collaboration and Regional Hubs


Topics

Gender rights online | Human rights principles


A regional hub connecting governments, tech companies, and NGOs is needed to support women in the Global South

Explanation

There is a critical need for establishing regional coordination mechanisms that bring together governments, technology companies, and non-governmental organizations to create comprehensive support systems for women’s digital safety. This collaboration is essential for creating effective protection frameworks.


Evidence

Discussion of the need to open channels between governments, tech companies, and NGOs, and the proposal for regional hubs connecting these stakeholders


Major discussion point

Need for Cross-Border Collaboration and Regional Hubs


Topics

Gender rights online | Human rights principles | Data governance


Current efforts are fragmented, with organizations working only within their own countries rather than collaboratively across the Global South

Explanation

Existing digital safety initiatives are limited in scope, with organizations typically focusing only on their individual countries rather than coordinating across the broader Global South region. This fragmentation limits the effectiveness and reach of support efforts.


Evidence

Discussion following Brazilian audience member’s comment about working only in Portuguese for Brazil, highlighting the need for broader regional coordination across different countries in the Global South


Major discussion point

Need for Cross-Border Collaboration and Regional Hubs


Topics

Gender rights online | Capacity development


Agreed with

– Audience (Lila from Brazil)

Agreed on

Fragmented nature of current digital safety efforts limits effectiveness


Disagreed with

– Audience (Lila from Brazil)

Disagreed on

Scope of digital safety initiatives – national vs regional approach


There is a need for suitable local laws in each country to help women understand their rights and where to seek help

Explanation

Countries in the Global South need to develop appropriate local legislation that clearly defines women’s digital rights and establishes clear pathways for seeking help when facing digital threats. Current legal frameworks are inadequate for addressing digital safety concerns.


Evidence

Discussion of building suitable local laws at government level and the need for women to know their rights and where to go for help


Major discussion point

Policy and Legislative Framework Gaps


Topics

Gender rights online | Legal and regulatory | Human rights principles


Agreed with

– Audience (Senator Catherine Muma)

Agreed on

Need for comprehensive legislative frameworks to address digital crimes against women and children


NGOs lack formal channels with governments to develop comprehensive policy frameworks, working instead with specific groups and communities

Explanation

Non-governmental organizations do not have established formal communication channels with governments to develop broad policy frameworks for digital safety. Instead, they work on a smaller scale with specific communities and groups, limiting the scope and impact of their policy development efforts.


Evidence

Speaker’s acknowledgment that they don’t have open channels with governments but work with specific groups, conducting pre-assessments and creating policies that fit particular communities rather than comprehensive frameworks


Major discussion point

Policy and Legislative Framework Gaps


Topics

Human rights principles | Data governance | Legal and regulatory


Tech companies do not provide adequate support or guides in local languages for Global South users

Explanation

Technology companies fail to provide sufficient support materials, guides, and resources in the local languages spoken by users in the Global South. This creates significant barriers to accessing help and understanding platform safety features.


Evidence

Discussion of tech companies not having supporting steps or guides in local languages, and the need to encourage them to establish support in local languages on their platforms


Major discussion point

Inadequate Support from Tech Companies and NGOs


Topics

Multilingualism | Liability of intermediaries | Gender rights online


NGOs often do not prioritize digital safety training and awareness, focusing on other topics instead

Explanation

Many non-governmental organizations do not give adequate attention or resources to digital safety training and awareness programs, instead focusing their efforts on other issues they consider more important. This leaves a significant gap in digital safety education and support.


Evidence

Speaker’s observation that NGOs focus on other important topics but digital safety tips, training, and awareness don’t receive much care from NGOs despite being very important


Major discussion point

Inadequate Support from Tech Companies and NGOs


Topics

Capacity development | Gender rights online


Negotiations with tech companies like Twitter have limited success due to their commercial standards and priorities

Explanation

While some NGOs attempt to engage with technology companies to create suitable policies for Global South countries, these efforts have limited success because tech companies operate according to commercial standards and priorities that may not align with local needs. The commercial nature of these companies creates barriers to implementing region-specific solutions.


Evidence

Speaker’s experience of having meetings with Twitter about creating suitable policies for Global South countries, but noting limited response due to their commercial company status and standards


Major discussion point

Inadequate Support from Tech Companies and NGOs


Topics

Liability of intermediaries | Gender rights online | Legal and regulatory



Audience

Speech speed

95 words per minute

Speech length

157 words

Speech time

98 seconds

Local organizations like the trans feminist network in Brazil provide training only in Portuguese for their country

Explanation

While some local organizations do provide digital safety training in local languages, their scope is limited to their own countries. This example from Brazil shows both the existence of localized efforts and their geographical limitations.


Evidence

Lila from Brazil mentioning her work with trans feminist network of digital care providing trainings in Portuguese for Brazil only


Major discussion point

Language Barriers and Resource Accessibility


Topics

Multilingualism | Gender rights online | Capacity development


Agreed with

– Safaa Sarayrah
– Audience (Lila from Brazil)

Agreed on

Fragmented nature of current digital safety efforts limits effectiveness


Disagreed with

– Safaa Sarayrah
– Audience (Lila from Brazil)

Disagreed on

Scope of digital safety initiatives – national vs regional approach


Governments and parliaments should review penal laws to include internet and social media offenses for protecting women and children

Explanation

There is a need for legislative bodies to update their legal frameworks to specifically address crimes committed through internet and social media platforms. This would provide better protection for women and children facing digital threats and harassment.


Evidence

Senator Catherine Muma from Kenya asking about legislative frameworks and policy frameworks for protection of women and children, specifically mentioning review of penal laws to include internet and social media offenses


Major discussion point

Policy and Legislative Framework Gaps


Topics

Legal and regulatory | Gender rights online | Children rights


Agreed with

– Safaa Sarayrah
– Audience (Senator Catherine Muma)

Agreed on

Need for comprehensive legislative frameworks to address digital crimes against women and children


Agreements

Agreement points

Need for comprehensive legislative frameworks to address digital crimes against women and children

Speakers

– Safaa Sarayrah
– Audience (Senator Catherine Muma)

Arguments

There is a need for suitable local laws in each country to help women understand their rights and where to seek help


Governments and parliaments should review penal laws to include internet and social media offenses for protecting women and children


Summary

Both speakers agree that current legal frameworks are inadequate and need to be updated to specifically address digital crimes and provide clear pathways for protection and justice for women and children facing online threats.


Topics

Legal and regulatory | Gender rights online | Children rights


Fragmented nature of current digital safety efforts limits effectiveness

Speakers

– Safaa Sarayrah
– Audience (Lila from Brazil)

Arguments

Current efforts are fragmented, with organizations working only within their own countries rather than collaboratively across the Global South


Local organizations like the trans feminist network in Brazil provide training only in Portuguese for their country


Summary

Both speakers acknowledge that while local efforts exist, they are limited to individual countries and lack regional coordination, which reduces their overall impact and effectiveness.


Topics

Gender rights online | Capacity development | Multilingualism


Similar viewpoints

Both speakers recognize the gap between NGO work and government policy-making, with the need for better coordination to develop effective legislative frameworks for digital safety.

Speakers

– Safaa Sarayrah
– Audience (Senator Catherine Muma)

Arguments

NGOs lack formal channels with governments to develop comprehensive policy frameworks, working instead with specific groups and communities


Governments and parliaments should review penal laws to include internet and social media offenses for protecting women and children


Topics

Legal and regulatory | Human rights principles | Gender rights online


Both speakers understand the critical importance of providing digital safety resources and training in local languages, though they also recognize the limitations of country-specific approaches.

Speakers

– Safaa Sarayrah
– Audience (Lila from Brazil)

Arguments

Most digital safety resources are in English, creating barriers for women who speak local languages


Local organizations like the trans feminist network in Brazil provide training only in Portuguese for their country


Topics

Multilingualism | Gender rights online | Capacity development


Unexpected consensus

Cross-sector collaboration necessity

Speakers

– Safaa Sarayrah
– Audience (Senator Catherine Muma)

Arguments

A regional hub connecting governments, tech companies, and NGOs is needed to support women in the Global South


Governments and parliaments should review penal laws to include internet and social media offenses for protecting women and children


Explanation

It’s notable that both a grassroots digital safety trainer and a government senator independently recognize the need for multi-stakeholder collaboration, suggesting broad consensus across different levels of governance and civil society about the inadequacy of siloed approaches.


Topics

Gender rights online | Legal and regulatory | Data governance


Overall assessment

Summary

There is strong consensus among speakers on the severity of digital safety challenges facing women in the Global South, the inadequacy of current support systems, the critical importance of language accessibility, and the need for comprehensive multi-stakeholder solutions involving governments, tech companies, and NGOs.


Consensus level

High level of consensus with complementary perspectives – the speakers approach the issues from different angles (grassroots training, government policy, local implementation) but arrive at similar conclusions about problems and solutions. This suggests the issues are well-understood across different stakeholder groups and creates a strong foundation for collaborative action. The consensus spans technical, legal, linguistic, and institutional dimensions of the problem.


Differences

Different viewpoints

Scope of digital safety initiatives – national vs regional approach

Speakers

– Safaa Sarayrah
– Audience (Lila from Brazil)

Arguments

Current efforts are fragmented, with organizations working only within their own countries rather than collaboratively across the Global South


Local organizations like the trans feminist network in Brazil provide training only in Portuguese for their country


Summary

Safaa advocates for broader regional coordination across the Global South, while the Brazilian representative describes their work as intentionally limited to their own country with Portuguese-only resources. This represents different philosophies about whether digital safety work should be localized or regionalized.


Topics

Gender rights online | Capacity development | Multilingualism


Unexpected differences

Effectiveness of current localized approaches vs need for regional coordination

Speakers

– Safaa Sarayrah
– Audience (Lila from Brazil)

Arguments

Current efforts are fragmented, with organizations working only within their own countries rather than collaboratively across the Global South


Local organizations like the trans feminist network in Brazil provide training only in Portuguese for their country


Explanation

This disagreement is unexpected because both speakers work on the same issue (women’s digital safety) in Global South contexts, yet they have fundamentally different approaches. The Brazilian representative seems content with country-specific work, while Safaa sees this as a problematic fragmentation. This suggests a deeper philosophical divide about whether digital safety solutions should be localized or coordinated regionally.


Topics

Gender rights online | Capacity development | Multilingualism


Overall assessment

Summary

The discussion shows minimal direct disagreement, with most tension arising around implementation approaches rather than fundamental goals. The main areas of difference involve the scope of initiatives (national vs regional) and the pathways to policy development (grassroots vs institutional).


Disagreement level

Low to moderate disagreement level. The speakers largely agree on the problems and general solutions needed, but differ on implementation strategies and scope. This suggests that while there is consensus on the urgency of women’s digital safety issues in the Global South, there are legitimate debates about the most effective approaches to address them. The implications are positive – the shared understanding of problems provides a foundation for collaboration, while the different approaches could be complementary rather than contradictory.




Takeaways

Key takeaways

Women in the Global South face severe digital safety challenges including online violence, harassment, blackmail, and hate speech, with statistics showing 60% of women in MENA region and 80% in Lebanon experiencing online violence


Language barriers are a critical obstacle preventing women from accessing digital safety resources and reporting mechanisms, as most resources are only available in English


Digital threats in traditional communities can escalate to physical violence or death, making this a life-or-death issue rather than just an online concern


There is a significant knowledge gap regarding basic digital security practices like two-factor authentication, strong passwords, and safe link verification among women in the Global South


Current support systems are fragmented and inadequate, with no clear channels for women to seek help when facing digital threats


Cross-border collaboration between governments, tech companies, and NGOs is essential but currently lacking in addressing these challenges systematically


Resolutions and action items

Establish regional hubs that connect governments, tech companies, and NGOs to support women’s digital safety across the Global South


Create digital safety resources and platform guides in local languages to overcome language barriers


Develop suitable local laws in each country to help women understand their rights and access help


Encourage tech companies to provide better support for local languages on their platforms


Increase NGO focus and efforts on digital safety training and awareness programs


Conduct pre-assessments with communities to understand specific risks and create tailored policy frameworks


Unresolved issues

How to establish formal channels between NGOs and governments for policy development


How to overcome tech companies’ commercial priorities that limit their responsiveness to Global South needs


How to address the lack of existing platforms providing digital safety support in local languages


How to create comprehensive legislative frameworks that include internet and social media offenses


How to scale local efforts (like Brazil’s Portuguese-language training) to cover the diverse languages and cultures across the Global South


How to provide adequate support for women displaced by wars and conflicts who face additional vulnerabilities


Suggested compromises

Working with specific groups and communities to create tailored policy frameworks when broader governmental collaboration is not available


Continuing negotiations with tech companies despite limited success, maintaining dialogue even when commercial standards conflict with advocacy goals


Focusing on countries with common languages or traditions as a starting point for regional collaboration rather than attempting to address all Global South countries simultaneously


Thought provoking comments

For example, if someone take my like private photos or videos, maybe something it’s lead to killing this women in the global south, unfortunately. So it’s a life matter. It’s not like, for example, in the other countries.

Speaker

Safaa Sarayrah


Reason

This comment is profoundly insightful because it reframes digital safety from a convenience or privacy issue to a literal life-or-death matter. It highlights how cultural contexts in the Global South can escalate digital violations into physical violence, challenging Western-centric perspectives on online harassment that may view it as less severe.


Impact

This comment established the gravity and urgency of the entire discussion, shifting it from a technical problem to a human rights crisis. It provided the emotional and moral foundation that justified all subsequent calls for action and collaboration.


Most of the resources are in English, or in other language… They do not know how, for example, to report a harassment on the platform. They do not know.

Speaker

Safaa Sarayrah


Reason

This observation is thought-provoking because it exposes a fundamental barrier that is often overlooked in digital safety discussions – linguistic accessibility. It reveals how language barriers create a systemic exclusion that leaves vulnerable populations without basic protective tools.


Impact

This comment introduced a concrete, actionable dimension to the discussion, moving beyond abstract calls for help to specific solutions like multilingual platforms and localized resources. It provided a clear pathway for tech companies and NGOs to make immediate improvements.


I’m Lila from Brazil I’m part of trans feminist network of digital care so we do that kind of trainings and everything but in Portuguese for our country only yeah so it’s a local thing for a huge country but that’s it.

Speaker

Lila (Audience member)


Reason

This brief response is insightful because it perfectly illustrates the fragmentation problem Safaa described. It shows how even well-intentioned efforts remain isolated and limited in scope, unable to address the broader regional challenges.


Impact

Lila’s comment served as a crucial validation and real-world example of Safaa’s thesis. It transformed the discussion from theoretical to practical, demonstrating both that solutions exist and that they are insufficient in scale. This prompted Safaa to elaborate on the need for regional coordination.


I would want to know whether you have done any work that helps to define how policy would look like… have you and your organization thought through the kind of legislative frameworks and policy frameworks you would want to see governments and parliaments pass for the protection of women

Speaker

Catherine Muma (Senator from Kenya)


Reason

This question is particularly thought-provoking because it comes from someone with actual legislative power, shifting the conversation from advocacy to potential implementation. It challenges the NGO perspective by asking for concrete policy solutions rather than just problem identification.


Impact

Senator Muma’s question created a pivotal moment that exposed a critical gap – the lack of formal channels between civil society and government. Safaa’s response revealed that despite extensive grassroots work, there’s insufficient connection to policy-making processes. This highlighted the need for better advocacy strategies and government engagement.


Unfortunately we like do not have like open like a channel between us and the government’s and what we that missed unfortunately

Speaker

Safaa Sarayrah


Reason

This admission is remarkably candid and insightful because it acknowledges a fundamental weakness in the current approach to digital safety advocacy. It reveals how civil society organizations may be working in isolation from the very institutions that could implement systemic change.


Impact

This honest response shifted the discussion toward structural problems in advocacy and governance. It highlighted that technical solutions and training, while important, are insufficient without policy frameworks and government support. This comment underscored the need for more strategic approaches to engaging with power structures.


Overall assessment

These key comments collectively transformed what could have been a standard presentation about digital safety into a nuanced exploration of systemic barriers, cultural contexts, and structural challenges. Safaa’s framing of digital safety as a life-or-death issue established the moral urgency, while her observations about language barriers provided concrete solutions. The audience interventions – particularly from Lila and Senator Muma – served as reality checks that both validated and challenged Safaa’s perspective. Lila’s example demonstrated the limitations of current efforts, while Senator Muma’s policy-focused question exposed the gap between grassroots work and institutional change. Together, these comments created a comprehensive picture of both the problems and potential pathways forward, elevating the discussion from awareness-raising to strategic planning for systemic change.


Follow-up questions

Is there any platform that is treating digital safety risks in local languages for women in the Global South?

Speaker

Safaa Sarayrah


Explanation

This is crucial because language barriers prevent women from accessing digital safety resources and reporting harassment on platforms, leaving them vulnerable to online violence


Is there any regional hub in the Global South that works on women’s digital safety issues?

Speaker

Safaa Sarayrah


Explanation

Regional coordination is needed to address the fragmented support system where organizations work only within their own countries, leaving gaps in coverage across the Global South


How can policy frameworks be defined to review penal laws to include internet and social media offenses against women?

Speaker

Catherine Muma (Senator from Kenya)


Explanation

There is a need for legislative frameworks that specifically address digital violence against women and children, but current policy development lacks clear direction


What kind of legislative frameworks and policy frameworks should governments and parliaments pass for the protection of women online?

Speaker

Catherine Muma (Senator from Kenya)


Explanation

Specific policy recommendations are needed to guide lawmakers in creating effective legal protections for women facing digital violence


How can channels be opened between governments, tech companies, and NGOs to create systematic support for women in the Global South?

Speaker

Safaa Sarayrah


Explanation

The lack of formal collaboration between these key stakeholders is identified as a major barrier to addressing women’s digital safety comprehensively


How can tech companies be encouraged to provide support in local languages on their platforms?

Speaker

Safaa Sarayrah


Explanation

Most digital safety resources and platform support are only available in English or major languages, creating barriers for women who speak local languages in the Global South


How can regional hubs be created to connect countries with common languages or traditions in the Global South?

Speaker

Safaa Sarayrah


Explanation

Current efforts are fragmented by country, but regional coordination could provide more comprehensive support for women facing similar challenges across borders


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Day 0 Event #254 Spyware Accountability in Global South


Session at a glance

Summary

This roundtable discussion focused on surveillance and spyware accountability from Global South perspectives, examining how commercial cyber intrusion capabilities like Pegasus are deployed against civil society actors worldwide. The conversation was moderated by Nighat Dad and featured speakers from Mexico, India, Lebanon, the UK, and Meta, alongside a former UN Special Rapporteur on Freedom of Expression.


Speakers highlighted how surveillance in the Global South differs significantly from the Global North context, often occurring within environments characterized by weak legal safeguards, corruption, and authoritarian tendencies. Ana Gaitan from Mexico described how governments exploit security crises to justify surveillance powers, which are then systematically abused to target human rights defenders and journalists investigating military abuses. Apar Gupta from India detailed how Pegasus revelations exposed targeting of reporters, opposition leaders, and even Supreme Court judges, demonstrating threats to democratic institutions built on colonial-era telecommunications laws.


Mohammad Najem from Lebanon explained how the MENA region experienced a dramatic shift after the Arab Spring, with authoritarian regimes implementing extensive surveillance alongside restrictive cybercrime laws. He noted that Gulf countries are not only using surveillance tools but also developing and selling their own spyware as part of geopolitical strategies. The discussion revealed that surveillance has become a thriving business, with over 500 companies selling tools to approximately 65 governments globally.


Elizabeth Davies from the UK presented the Pall Mall Process, a multi-stakeholder initiative launched by the UK and France to address commercial cyber intrusion capabilities through international consensus-building and codes of practice for states. David Kaye emphasized the importance of moving from soft law to concrete implementation, highlighting successful litigation like Meta’s $168 million award against NSO Group. Rima Amin from Meta described the company’s efforts to investigate and disrupt over 20 surveillance-for-hire firms targeting people across 200 countries.


The discussion concluded with calls for enhanced export controls, human rights due diligence, victim notification systems, and meaningful inclusion of Global South voices in international accountability processes.


Keypoints

## Major Discussion Points:


– **Global South surveillance patterns and impacts**: Speakers from Mexico, India, and MENA regions described how surveillance technologies like Pegasus are systematically used to target journalists, human rights defenders, and civil society activists, often exploiting security crises to justify expanded surveillance powers while operating with complete impunity and lack of accountability.


– **Business and geopolitical dimensions of spyware**: The discussion revealed how surveillance has become a lucrative business model, with countries like UAE and Gulf states not only using spyware domestically but developing and selling their own surveillance technologies to gain geopolitical influence and generate revenue, including surveillance of allies and friends.


– **Legal and accountability gaps**: Multiple speakers highlighted the failure of domestic legal systems to provide remedies for spyware victims, with court cases stalling, expert committee findings remaining secret, and authorities claiming no documentation exists of surveillance programs, demonstrating systemic obstacles to justice in the Global South.


– **International initiatives and their limitations**: The Pall Mall Process led by UK and France was discussed as a potential solution, but speakers emphasized the need for meaningful Global South inclusion, moving beyond soft law to binding implementation, and ensuring that international standards address the specific contexts of corruption, weak institutions, and authoritarian tendencies in developing countries.


– **Technical capacity building and victim support**: The conversation addressed the urgent need for Global South civil society organizations to build their own technical capacity for investigating spyware, supporting victims, and conducting device forensics, as current expertise and resources are concentrated in Global North organizations.


## Overall Purpose:


The discussion aimed to center Global South perspectives in international debates about surveillance accountability, examining how spyware technologies manifest differently in developing regions compared to the Global North, and exploring how international initiatives like the Pall Mall Process can be made more inclusive and effective for addressing surveillance abuses in contexts characterized by weak legal safeguards and authoritarian governance.


## Overall Tone:


The tone was serious and urgent throughout, with speakers conveying deep concern about the expanding surveillance threat. While maintaining a professional academic discourse, there was an underlying frustration about the lack of accountability and the concentration of power in surveillance technologies. The tone became slightly more optimistic when discussing potential solutions and international cooperation, but remained soberly realistic about the significant challenges ahead, particularly regarding implementation and the need for sustained commitment to meaningful change.


Speakers

**Speakers from the provided list:**


– **Nighat Dad** – Runs Digital Rights Foundation, based in Pakistan, working in South Asia on surveillance and digital rights issues (appears to be the moderator)


– **Ana Gaitan** – From R3D Mexico, expert on surveillance and digital rights issues in Latin America


– **Apar Gupta** – From Internet Freedom Foundation India, working on surveillance accountability and digital rights in South Asia


– **Mohamad Najem** – Runs SMEX, an organization based in Lebanon, working on digital rights issues in the MENA region


– **Elizabeth Davies** – Policy lead at Pall Mall Process Policy, UK Foreign Commonwealth and Development Office Cyber Policy Department


– **David Kaye** – Law professor at the University of California, former UN Special Rapporteur on Freedom of Expression


– **Rima Amin** – Security policy manager at META, focused on community defense


– **Jennifer Brody** – From Freedom House (asked a question from the audience)


**Additional speakers:**


None identified beyond the provided list of speaker names.


Full session report

# Surveillance and Spyware Accountability: Global South Perspectives


## Discussion Summary


### Introduction and Context


This roundtable discussion, moderated by Nighat Dad from the Digital Rights Foundation in Pakistan, brought together experts from across the Global South and international stakeholders to examine surveillance and spyware accountability. The conversation featured speakers from Mexico, India, Lebanon, the United Kingdom, and Meta, alongside a former UN Special Rapporteur on Freedom of Expression.


The discussion emerged against the backdrop of revelations about widespread spyware abuse, with over 500 companies now selling surveillance tools to approximately 65 governments globally, creating what speakers described as a thriving surveillance business that poses fundamental threats to democratic institutions and human rights.


### Global South Surveillance Patterns and Impacts


#### Latin America: Security Narratives Masking Repression


Ana Gaitan from R3D Mexico revealed how governments exploit security crises to justify expanded surveillance powers while systematically targeting those who challenge state authority. She explained that “these narratives are actually being used to criminalise citizens in contexts usually represented by high rates of impunity, corruption, and collusion with organised crime.”


Gaitan described how surveillance powers in Latin America are abused to target human rights defenders and journalists, particularly in countries with “legacies of past military dictatorships and systemic human rights violations where the rule has been to control, repress, and censor all dissent.” She highlighted Mexico, where military control of surveillance systems enables targeting of human rights defenders investigating army abuses.


The accountability gap in Mexico is particularly striking. Despite clear evidence of Pegasus targeting victims including “undersecretary Encinas” and “Centro Pro,” criminal complaints are systematically obstructed by authorities who claim no documentation exists of surveillance programmes, creating a cycle of impunity.


#### South Asia: Colonial Legacies and Democratic Threats


Apar Gupta from the Internet Freedom Foundation India emphasized that spyware represents “not a hypothetical threat and it is not a threat which is individualised, but is a societal threat to already democratic systems which are under strain and rule of law which exists inconsistently.”


Gupta explained how post-colonial telecommunications laws enable secretive executive surveillance without proper judicial oversight. The Pegasus revelations in India exposed targeting of reporters, opposition leaders, and even Supreme Court judges. Despite a Supreme Court-ordered investigation, the expert committee’s findings remain secret even from petitioners whose devices were examined.


Moderator Nighat Dad provided crucial context about Pakistan’s surveillance infrastructure, revealing that telecommunications providers are required to ensure surveillance capabilities for “at least 2% of their customer base, which is around 4 million people, so 4 million people are under surveillance at any given time in this country.”


#### MENA Region: Post-Arab Spring Surveillance Expansion


Mohamad Najem from SMEX in Lebanon described how the Middle East and North Africa region experienced dramatic transformation following the Arab Spring, with authoritarian regimes implementing extensive surveillance alongside restrictive cybercrime laws. He noted that “this kind of regulation affected a lot the space, and we started seeing a lot of people going to jail for, like, 10 years, 15 years, for things they have said online.”


Najem revealed that Gulf countries are not only using surveillance tools domestically but are also developing and selling their own spyware. He explained that “a lot of these countries are making these softwares to make money,” and noted concerning examples such as the UAE providing surveillance software to the RSF, the Rapid Support Forces, in Sudan.


The scope of surveillance extends beyond traditional targets, with Najem noting that “they’re not doing surveillance on their enemies, on activists, but they’re also doing surveillance on other politicians, on their friends, on their cousins.”


### Legal and Accountability Gaps


A consistent theme was the systematic failure of domestic legal systems to provide remedies for spyware victims. Ana Gaitan described how Mexican authorities obstruct criminal complaints by claiming no documentation exists despite clear evidence of abuse. In India, Apar Gupta highlighted how even Supreme Court interventions prove inadequate, with institutional limitations preventing effective parliamentary or judicial remedies.


Mohamad Najem pointed to high-profile cases like the Khashoggi assassination, where despite known spyware use and international attention, “no accountability even for major crimes” has been achieved.


### International Initiatives: The Pall Mall Process


Elizabeth Davies from the UK Foreign Commonwealth and Development Office presented the Pall Mall Process, launched by the UK and France in February 2024, as a multi-stakeholder approach to establish international consensus on surveillance technologies. The process has achieved support from 24 other states for a code of practice.


Davies announced the UK’s Common Good Cyber Fund, developed with Canadian partners, to support civil society actors at high risk of digital transnational repression. However, she emphasized that “implementation is crucial next step,” acknowledging that soft law commitments must translate into concrete action.


David Kaye, former UN Special Rapporteur on Freedom of Expression, warned that “states are just kind of driving trucks through any small space that they can carve out to do what they wanna do.” He highlighted concerning developments like the European Union’s Media Freedom Act, which “actually carves out a little bit of space for the use of spyware against journalism.”


### Private Sector Role and Litigation


Rima Amin from Meta described the company’s efforts to investigate and disrupt over 20 surveillance-for-hire firms targeting people across 200 countries. The WhatsApp lawsuit against NSO Group resulted in a $168 million award, which David Kaye highlighted as demonstrating that “legal action is possible.”


However, Amin emphasized that “legal recourse must be accessible specifically for those targeted by surveillance technologies,” highlighting the need for more comprehensive victim support systems.


### Capacity Building and Knowledge Transfer


Nighat Dad explained that “transfer of that knowledge is happening at a very slow pace,” forcing Global South organisations to “build our own knowledge and capacity so that we are on ground, can provide the support to the victims and survivors as first responders.” She mentioned that DRF is building an emerging threat lab to provide device forensics capabilities.


Apar Gupta highlighted the need for victims to access device testing methodology, given the high barriers that exist in domestic jurisdictions.


### Key Recommendations and Proposals


Apar Gupta outlined three specific recommendations:


1. A moratorium on commercial spyware


2. Export control alignments between countries


3. Victim notification rights


The discussion revealed tension between restrictive approaches that prioritise preventing abuse and permissive approaches that seek to balance legitimate uses with human rights protections. Kaye advocated for narrowing the scope as much as possible, while Davies promoted a broader multi-stakeholder approach that acknowledges legitimate uses with proper safeguards.


### Conclusion


The discussion revealed surveillance and spyware accountability as a complex challenge requiring coordinated international action while respecting the specific contexts of Global South countries. Speakers demonstrated consensus around the systematic misuse of surveillance technologies against civil society actors and the urgent need for victim support systems and accessible legal remedies.


The conversation highlighted the inadequacy of current responses, particularly in addressing the structural conditions that enable abuse in developing countries, and emphasized the need for enhanced capacity building and genuine partnership between Global North and Global South stakeholders.


Session transcript

Nighat Dad: In 2021, the Pegasus Project, an investigation by Forbidden Stories and Amnesty International, shook the world. It revealed how Pegasus, a military-grade spyware developed by the Israeli firm NSO Group, had been used to target at least 189 journalists, 85 human rights defenders, and over 600 politicians and government officials globally, including cabinet ministers and diplomats. This was not just a moment of reckoning. It sparked a global demand for accountability. Since then, we have seen some movement: the U.S. blacklisted NSO Group and other surveillance firms. The U.K. and France launched the Pall Mall Process earlier this year to start a conversation around ethical oversight of such technologies. But despite these efforts, surveillance remains a booming, largely unregulated industry. Over 500 companies continue to market and sell these tools to around 65 governments worldwide, many of them in the Global South. While the discourse on ethical oversight and regulation is growing, it remains largely centered in the Global North. What’s missing are the perspectives, experiences, and context from the Global South, where surveillance not only thrives in silence, but often intersects with weak legal safeguards, authoritarian impulses, and shrinking civic spaces. And that’s why we are here. This roundtable is an attempt to bridge that gap, to center Global South voices in global surveillance and accountability debates. We want to ask: what does surveillance look like in regions like Latin America, South Asia, the MENA region, and Africa? What forms does it take, from sophisticated spyware like Pegasus to more traditional brick-and-mortar tactics still used by many actors around us? And critically, how do the solutions from the Global North apply, or not apply, in our context? Over the past five years, the surveillance tech industry has only expanded.
Even the most cautious individuals, journalists, human rights defenders, civil society actors, find themselves vulnerable. And it’s no longer just about states surveilling citizens. We are now seeing an ecosystem where private actors, outsourced contractors, and even foreign governments are deploying these tools to watch, track, and silence dissent. So I’ll just stop here with the introduction and introduce some of our panelists who are here and two panelists who are joining us online. We have Ana Gaitan from R3D Mexico. We have Apar Gupta from Internet Freedom Foundation India, Mohamad Najem from SMEX Lebanon, Elizabeth Davies, who is a policy lead at the Pall Mall Process Policy, UK Foreign Commonwealth and Development Office Cyber Policy Department, David Kaye, who is a law professor at the University of California and former UN Special Rapporteur on Freedom of Expression. And last but not least, Rima Amin, who is a security policy manager at Meta, focused on community defense. So I’ll start with my first question to Ana. If you can just tell us a little bit about what are the implications of state-sanctioned cyber intrusion on citizens in your region, but also in the Global South specifically, and in what ways the Latin American context differs from the North when it comes to surveillance and spyware technology?


Ana Gaitan: Sure. Thank you, Nighat. In many Latin American countries, governments have taken advantage of security crises experienced by their societies to make it appear like our only alternative to protect ourselves is to give up our privacy, implying that if we do not, we will only be helping criminals commit more crimes. However, the reality is that these narratives are actually being used to criminalize citizens in contexts usually represented by high rates of impunity, corruption, and collusion with organized crime. Thus, rather than giving us more security, surveillance powers in Latin America are abused to target human rights defenders and journalists in legacies of past military dictatorships and systemic human rights violations where the rule has been to control, repress, and censor all dissent. For example, in Mexico, abusive surveillance powers are exacerbated in a context where Mexico has led and maintained for more than 15 years a military approach to public security risks, granting powers to the military that are constitutionally prohibited. The army has systematically abused surveillance technologies to interfere with investigations carried out officially and by human rights defenders and journalists related to the army’s human rights abuses, such as extrajudicial killings and enforced disappearances. Many of the Pegasus infections occurred at times when the victims were carrying out work related to human rights violations committed by armed forces or police authorities. In fact, information that has been made public as a result of the hacking carried out by Colectivo Guacamaya confirms that the surveillance and monitoring activities carried out are mainly done against civil organizations, human rights defenders, activists, and journalists, where they are classified as pressure groups for their work in defense of human rights. 
For example, in Mexico, one of the victims, undersecretary Encinas, was in charge of the Truth Commission for the disappearance of 43 students from Ayotzinapa, in which army personnel participated. And another victim was Centro Pro, a human rights organization that represented the families of the victims in this case and represents many other victims of human rights violations by the army. In 2017, 2022, and 2023, victims of the Pegasus infections identified in Mexico, mainly human rights defenders and journalists, filed criminal complaints with the Special Prosecutor’s Office for Crimes Against Freedom of Expression for the crimes of illegal interception of private communications and illegal access to computer systems. However, despite multiple calls by national and international actors regarding the need to carry out a diligent investigation, justice and accountability have been obstructed by the authorities under scrutiny, who consistently claim that no database or formal documentation exists of the records regarding the persons or numbers targeted by Pegasus. Furthermore, in a context in which the army does not only control the federal security and intelligence apparatus but now controls ports, airports and roads, as well as operates trains, refineries, airlines, touristic resorts, banks and many other business interests, it is particularly problematic that it deploys surveillance technologies with complete opacity and impunity. And I think that in general terms, this is the context in which Latin American countries usually use and abuse surveillance powers, in which they try to stifle dissent and repress and censor human rights defenders and journalists without any type of redress or reparation, or victims’ access to justice through judicial or non-judicial remedies. So I would end with that.


Nighat Dad: Thank you so much, Ana. I’ll head to Apar Gupta, who is joining us online. And Apar, if you can build on what Ana basically described, what is happening in their region, how do you think surveillance has manifested in your or our part of the world? Are zero-click attacks a significant threat in the region or do other urgent issues take precedence? And what role has the Internet Freedom Foundation played in this regard?


Apar Gupta: Thank you so much, Nighat. Picking up from the remarks which were made by Ana, I think there is continuity as well as similarity in our experience in India. The Pegasus revelations of July 2021 included at least 38 reporters who were prominent in their criticism of the government, opposition leaders and activists, and even included a sitting Supreme Court judge. And this shows and demonstrates that the very functionaries who are vested with both official powers as well as roles and responsibilities in order to keep the state honest, in order to ensure that democracy is preserved, are themselves the victims of zero-click attacks. Therefore, it’s not a hypothetical threat and it is not a threat which is individualized, but is a societal threat to already democratic systems which are under strain and rule of law which exists inconsistently in countries in South Asia. And this brings into sharp focus how the underlying foundation of the telecommunication laws in many countries in South Asia comes from a post-colonial legacy in which the state had absolute control over the spectrum and the airwaves, and an extension of this has resulted in an opaque, secretive system in which there is no requirement of judicial sanction and there is no independent parliamentary oversight, in which most of the powers are centralized within the executive branch of the federal government itself. So, it is essentially a secretive procurement and then subsequent deployment of spyware technology which attacks the very roots of a democratic system across South Asia which builds off this colonial legacy. And is it still continuing? Or was it just a one-off instance in 2019 and then in 2021 in India? The notifications by Apple in October 2023 included scores of Indian MPs and reporters of a state-sponsored attack on iPhones, which had echoes of Pegasus. So this is a problem which is much wider than one specific company or one specific type of software. 
And our response to this has been, firstly, increasing the amount of public awareness around this issue, that this is not a conventional issue of surveillance in which information is being gathered in breach of the law. This is something which is a much deeper harm to a democratic system itself. So IFF launched a campaign around it. There was strategic litigation, which was also conducted in the Supreme Court, which remains pending. And there was a special committee constituted by the Indian Supreme Court, whose findings are not yet public. This also demonstrates the remedial gaps which exist in rule-of-law processes, where institutions may not strongly react to infections by spyware domestically. And subsequently, we have also filed one petition asking for a much more structural reform of India’s surveillance laws. Three short points before I end this intervention. The repeated instances of the use of spyware, the foundational deficiencies in our legal system, as well as institutional frameworks to enforce remedy, call into sharp focus the need for platforms, as well as multilateral and multi-stakeholder organizations and processes, to do the following. Possibly consider a moratorium on commercial spyware until its legality, necessity and proportionality principles can be set through multilateral frameworks; export control alignments which also have transparency requirements as to which countries and what kinds of technologies are being issued, and what are the standards under which such export controls actually apply. And specifically, till that happens, victim notification, which enables the right to remedy, should be maintained by most platforms when they do detect spyware infections. Thank you so much.


Nighat Dad: Yeah. Apar, thank you so much for giving us a detailed picture of what is happening in India, but beyond India, in South Asia. And the kind of actions that IFF has taken. I just wanted to ask you, the cases are still pending, or in any of those petitions, have you heard from the courts or any hope around those petitions?


Apar Gupta: So, there was a high amount of hope when the petitions were initially filed, given there was a vast amount of public interest, as well as the reporting activity which was being carried out around them. The petitions came to be filed shortly after the revelations were made, sometime in September 2021. And an expert committee was set up; however, its findings were not made public. And the case was then not posted for active hearing for a period of at least two years, from 2023 to 2025. Earlier in the year, the case did come up for hearing, but then again, it is not a case which is proceeding with any real pace. And the fight right now is to get the determinations by the expert committee as to the examination of the devices made public, or at least made available to the very petitioners who have approached the court, whose phones were submitted to the committee. Even they don’t have access to the report. Again, I will re-emphasize: while it is essential for a lot of people in South Asia, not only in India, to engage with their courts, with Parliament, and in the public sphere, there are limitations in these institutional processes as to their autonomy and ability to provide remedy to victims.


Nighat Dad: Thank you so much, Apar. I would just say that, especially in South Asia, keeping in mind geopolitics and the ongoing conflicts, I think it's becoming more and more difficult for civil society to raise issues around spyware and surveillance. It has become a taboo issue over the years: when you mention accountability for spyware technologies, you find yourself facing backlash from different segments of the state and other actors. I'll come to you, Mohamad Najem. You run the organization SMEX in MENA, and I wanted to ask if you can elaborate on some patterns that you have observed in how spyware is deployed in your region, in MENA, and what SMEX is actually doing to address that challenge.


Mohamad Najem: Thank you, Nighat. First I want to start by mentioning that the MENA region has some specificity to it, because in 2010, 2011, when we witnessed the Arab Spring, the communities and the societies in the region were going in one direction, and gradually we went in totally the opposite direction. So basically, the tech space was really open before the Arab Spring, or kind of open. When the Arab Spring happened, when we saw the results, all the authoritarian regimes, all the governments, from the Gulf, Egypt, everybody came together and they started closing the civic space slowly, slowly. And one of the big tactics that they use is surveillance. Of course, not only surveillance, but I'm just going to talk about some of these points. So, basically, at first there were almost zero regulations when it comes to cyberspace, to the online space. And suddenly, in 2015, there were dozens of laws: cybercrime laws, laws around freedom of expression. All these regulations came out, and all of them have one goal: to actually limit speech, limit what people are talking about online. So this kind of regulation affected the space a lot, and we started seeing a lot of people going to jail for 10 years, 15 years, for things they have said online. So regulation was really one of the big tools they have used. And then, when we want to talk about surveillance itself, of course, the Gulf countries have a lot of money. The relationship with Israel was kind of not known. Are they actually enemies, or are they friends? There was no public conversation about it. But then, later on, we discovered that the Gulf has been investing a lot with NSO, with Pegasus, and there were a lot of cases about it. I'm sure you all know about them. So Pegasus has been used heavily, and NSO, and so many other software tools and companies.
So this definitely affected a lot of what we've seen right now, especially with the case of Jamal Khashoggi, which everyone saw happening, and we have seen that there's actually no accountability. We're not only talking about spyware accountability, we're talking about accountability for a crime that happened. Of course, spyware has been used, but it's much bigger than spyware, and I'm sure David will touch upon this a little bit. So more and more self-censorship has started to be created because of this. There's no accountability, there's no transparency, so there is really a lot of self-censorship that has started to be created. And also, at the same time, because the relationship with Israel has been publicly known now, or let's say since 2015, 2016, 2017, we've seen these governments also start to develop their own software, their own local software. We've seen Egypt doing this, we've seen the UAE doing this; there are so many articles in the New York Times and so many other places. We've seen Morocco, we've seen Saudi Arabia. So we started seeing these countries basically looking at this from a business perspective. And I really think when we think about spyware, we really need to look at it from a business perspective for so many authoritarian regimes. What they have been doing for the last five years is not only producing their local solutions, I mean local spyware, but also trying to sell them. And there are geopolitical wars: we've seen, for example, the UAE giving their software, their surveillance tools, to the RSF, the Rapid Support Forces in Sudan. We have seen it happening in Egypt, we have seen it happening in so many other places. So it started to be part of the geopolitical game, mostly to gain more power. Sorry, I had five minutes, and that went very quickly, so I need to finish quickly. Where was I? OK.
So basically, the UAE were selling it to strengthen their geopolitical presence, but also to make money. A lot of these countries are making this software to make money. So we need to understand this as advocates of digital rights: it's a business decision as well for them. And one thing that we don't talk a lot about in our communities is that they're also doing surveillance on their friends and allies. And this is something really important, because we have discovered in so many cases that all these countries are doing surveillance on each other. They're not only doing surveillance on their enemies, on activists; they're also doing surveillance on other politicians, on their friends, on their cousins. They want to catch their secrets, they want to understand what they're doing. So this is happening. In terms of SMEX, I know I didn't answer the question yet, Nighat. So briefly, what we're trying to do is support civil society groups and LGBTQ groups. We know there is so much criminalization of LGBTQ people in our region, so these are the most vulnerable groups. Our team is distributed among the Arabic-speaking countries, and we're trying to provide some kind of support to mitigate some of these threats. Of course, the threats are much bigger. We need to do more collaboration among civil society groups. And yeah, I'm going to stop here. Sorry, I went past my time.


Nighat Dad: No, thank you so much, Najem. I think that was really important to get a good picture of the Gulf countries. I'll come to you, Elizabeth. We would love for you to speak briefly about how the Pall Mall process can support efforts in the Global South. Now you have heard three speakers from different regions, and the one pattern that is emerging is basically a lack of accountability and transparency on the part of governments. How do you think the Pall Mall process, or mechanisms like it, can be meaningfully inclusive of actors outside the Global North?


Elizabeth Davies: Yeah. Thank you very much. Hopefully, everyone can hear me all right. I'm sorry to not be able to join you in person this morning. So, I'll just do a brief overview of what the Pall Mall process is at the start, to avoid any confusion. It was launched in February 2024 by the UK and France as a multi-stakeholder international initiative to address the global threat posed by the rapidly growing market in commercial cyber intrusion capabilities, or what we call CCICs as a shorthand, including but not limited to commercial spyware like Pegasus, which has been mentioned this morning. This threat includes the serious harm caused by the irresponsible use of these tools to target human rights defenders, journalists, and others who play a vital role in protecting and promoting fundamental freedoms, much of which we only know about thanks to the brave work of many of you here today, as we have set out. But we also, as the UK, have concerns because of this rapidly growing market's impact on the cyber threat that we as states all face, by lowering the barrier to entry to advanced capabilities for a wider number of states and non-state actors, and by increasing the volume, variety, and severity of the threats we face, threatening our officials, our infrastructure, our businesses, and our citizens. And we know that this threat will become increasingly acute in the future as the market expands, diversifies, and specializes. So, while we're focusing today on commercial spyware in particular, I think it's important to emphasize that the Pall Mall process looks more broadly across the commercial cyber intrusion market, including elements of the supply chain that sit under these sophisticated capabilities, like the vulnerability and exploit marketplace, because we don't believe any one element can be tackled in isolation.
The UK government does believe there are legitimate uses of these tools, in defensive cybersecurity operations, for example, and for national security and law enforcement. But these should be limited, and they should only be used with appropriate oversight and safeguards in place, and never in ways that threaten human rights or contravene international law and norms. So through building international consensus, the aim of the Pall Mall process is, broadly, to ensure that access to the most advanced capabilities remains limited and controlled, that international norms and safeguards ensure the responsible development, use, and sale of cyber intrusion capabilities around the world, and that better transparency across the market makes it easier for states to enhance national resilience and take proactive action to tackle irresponsible activity. So what we're trying to do is set the rules of the road for different actors across the market, and think about shared responsibility for tackling this threat that impacts all of us. In terms of the action the Pall Mall process has taken so far, we started with states, as the major customers of this industry. As colleagues have set out, this is a business, so we're talking here about trying to shift market incentives. We used the findings of a multi-stakeholder consultation last autumn to draft a code of practice for states, which contained a series of detailed recommendations for states as responsible regulators, customers, and users of CCICs under the four pillars of the Pall Mall process, which are accountability, oversight, precision, and transparency. This final product was achieved through an extensive multi-stakeholder negotiation, so we're hugely grateful for the constructive engagement in this process of many of you here today. We have secured formal support from 24 other states for the code of practice so far, including Ghana, our first state supporter from the Global South.
And we hope that this number will continue to rise. But this is only a stepping stone, which I think is the really important thing to mention. We know that the code of practice will only have an impact if we implement the commitments we've made and continue to build on that momentum. So we anticipate the next steps for the Pall Mall process falling across three work streams. One is focused on implementation of the code of practice for states: supporting states to put these commitments into practice, whether that's through existing policy levers or creating new ones. There's no point in having this if it's not implemented properly and comprehensively. With industry, we are now turning our expectations to those working in this market themselves: we have to agree collectively what practices, internal processes, security measures, and other elements should be implemented as standard across the market to help curb irresponsible activity and misuse. And then there is accountability and tracking progress, and developing ways of formally doing this. We know this is needed to bring about this behavior change. Our approach so far has been to try to set a standard first, because you can't hold irresponsible actors accountable or shape the market until there is a credible standard to hold them accountable to. Well, that brings me to the huge importance of multi-stakeholder participation in the Pall Mall process. We need help from all of you in shaping the standard of behavior, convincing key players to join it, or at least a critical mass of them, enough to change this market, and then holding them accountable to it. There is the hugely valuable work all of you have done so far in highlighting that abuse, which we want to continue to support you in doing. But we know that our efforts in this haven't been inclusive enough so far. The Pall Mall process has always set out to be truly international from the beginning.
We stated that it was a global problem that required a global solution. So an initiative that just involves states and other stakeholders from the Global North is never going to move the dial. And we know there is huge importance in involving civil society stakeholders from the Global South. I'm sorry if some of you feel that that hasn't been the case so far; that's something we would like to change. We know you can bring your significant expertise to inform the process of developing some kind of code of practice for industry, by applying the knowledge you have of how these products and services have been developed, sold, and used in particular contexts, feeding that into the consultation process that we will be holding on this going forward, and helping to determine, like I say, what standards need to be baked into this industry, or what can never be included for a product to be responsibly used. That expertise is important, too, in our work supporting implementation of the code of practice for states. We want to know where you think there are global examples of best practice in this space, or where things have been tried in particular countries and haven't worked. And as I said, you're particularly vital partners when it comes to holding states and companies to account for irresponsible behavior and exposing where their actions don't meet their words, whether they have signed up to the code of practice yet or not. Because we want to support efforts by states to address this threat in the Global South, too. We want more states from the Global South participating in these discussions, if they are doing so in good faith, and hopefully signing up to the code of practice, committing to the actions within it, and, crucially, being held accountable to them. Like I say, we were delighted to have Ghana sign up to the code of practice and attend the conference in Paris in April.
And we hope that more countries from the Global South will follow. We've also worked to ensure that the issue is highlighted in the relevant multilateral fora so that the discussions are international. So we've consistently raised the issue at the UN's open-ended working group on ICTs, and we hope that language will be included in this year's final report. And I should say, last but very much not least, the UK wants to make sure that we are supporting those who are already victims of the irresponsible use of these capabilities, or are at high risk of becoming so, wherever they are in the world. That's why we were pleased to announce alongside our Canadian partners last week the Common Good Cyber Fund, aimed at supporting civil society actors at high risk of digital transnational repression in particular, including in the Global South. I'll stop there because I think I've probably already gone over time. But I'm really keen to hear from all of you what more we should be doing, and particularly how we can support your efforts, both from the UK side and the Pall Mall process.


Nighat Dad: Thank you so much, Elizabeth. I think what the UK- and French-led Pall Mall process is doing is very useful, and it is good to hear how you are trying to include Global South voices in this process. Professor David, I'll come to you. You have been in this space for so long; you are a former UN Special Rapporteur on freedom of expression, and I have seen you very actively engage with the accountability mechanisms around spyware technology in the US and in other processes. I would like to ask you very candidly how effective international law is, especially during these times, when Global South actors, especially civil society, are trying to hold big actors accountable around spyware technology, and also whether you think the Pall Mall process and other processes like it would be useful for the Global South.


David Kaye: Great. Thanks, Nighat, for pulling us all together here. I guess the way I would answer the question at the highest level is that international law is only effective when it's domesticated, when it's actually applied. So I was very happy to hear from Elizabeth that the Pall Mall process is a stepping stone. How we judge it is by its implementation. And so in thinking about implementation, about the strength or the power, the effectiveness of international law, let me just try to make three points, or divide this into three sections. First, on global export controls. Clearly, we need clarity and human rights standards to be a part of global export controls. And that means that any of the controls that we see applied by governments or by international organizations need to meet the three-part test of international law: legality, necessity, proportionality, and all of that. And those have to be implemented at the domestic level. I think one thing that we have seen in global export controls is that, particularly during the years of the Biden administration, the actual sanctioning of bad actors has an impact, right? So it's one thing to have high-level soft law, but we really need the actual sanctioning of the bad actors, because that has a very strong impact on the ability of those bad actors to continue to operate. And so as we're thinking about global export control, we need to be thinking about how we make that part real. The second part that I want to mention is litigation. Litigation is also a form of moving from the soft law, from the standards, to actual implementation. I think we saw in a very constructive way WhatsApp, or Meta, suing the NSO Group, which led just in recent months to an award against the NSO Group of $168 million in a U.S. court. Now, whether that holds remains to be seen, but that kind of pressure is absolutely essential.
Now it’s more difficult to do that kind of litigation against states because of sovereign immunity. There’s variation in different jurisdictions. It’s harder to do in the United States than in the UK, for example. But we continue to see that kind of litigation as a tool to impress upon the global community and particularly on the bad actors, that there are consequences for their actions. The third area that I want to talk about, and I’ll devote sort of the balance of my time to this, is on domestic constraints. Domestic and sometimes supranational constraints that can be imposed on law enforcement and on the intelligence community. So actually, this has been an area where there’s been quite a bit of positive movement in recent years. Just last December, the Venice Commission, of which I’m a member, issued a report on spyware that has very strict rules with respect to law enforcement’s use of spyware tools in domestic settings. I think it’s important to look at that, to look at the European Union and the European Parliament’s PEGA committee process on this, and to ask a couple of questions about these domestic constraints. The first one is, will they be applied? So the Venice Commission really details some very significant rules that should be applied by states. Will they be applied by states? We’ve seen significant use of spyware against human rights defenders, against journalists. Even over the last several weeks, we’ve seen the scandal of the use by the Italian government against journalists. Will these standards actually be applied? That’s really a very open question right now. And then the second question, I think that’s very important is, will these standards be used to apply to global export constraint? Because it’s one thing for European governments. 
and others to say these are the standards that are going to apply for use internally, but will those standards actually be used in the consideration of what kinds of tools can be exported to countries around the world? At the moment, I don't know that we actually have an answer to that, but I would certainly urge processes like the Pall Mall process to move quickly from the higher level of soft law to really looking at these standards at the domestic level, and to say that these are the standards that must also apply to global export control. I'm at five minutes, so I'm going to stop there. I'm sure in the conversation there'll be more that we can talk about.


Nighat Dad: Thank you so much, Professor David. I'll also encourage our audience, if they have questions, please prepare them. I'm not taking them right now; we have one more speaker, but I'll open the floor after Rima. So Rima, I'll come to you. You are working at Meta, and your role actually involves looking into issues like the ones we are discussing right now. Can you tell us how common it has been, over the years or even recently, to track individuals through online platforms? Recently NSO was ordered to pay damages of over $150 million to Meta for hacking WhatsApp in 2019. By the way, if you get that money, I think you should pass it on to the civil society organizations in the Global South who are working on spyware accountability, but I'm just joking. So what does this mean for online platforms tackling vulnerability issues against cyber tech companies such as NSO?


Rima Amin: Sure. Thank you so much. And also just to say, that is actually our plan: if we are awarded those damages, to contribute them to organisations who are supporting people targeted by surveillance-for-hire. In terms of what we've seen on Meta's platforms, we have investigated and disrupted operations from over 20 surveillance-for-hire firms targeting people across 200 countries. And I think many of the speakers have pointed out that many of those people are based in the Global South. These surveillance-for-hire firms claim that their technology is being used to target criminals and terrorists, but our investigations have shown, over and over again, regular targeting of dissidents, critics of authoritarian regimes, families of opposition figures, and human rights activists. Our teams are very focused on investigating these threats, disrupting them, notifying victims, and working with civil society organizations to find ways to support them. We also, where appropriate, do intelligence sharing, because we know that these types of threats cut across different platforms and places, so it's really important that we're able to share that intelligence. And then we also release information about these threats through adversarial threat reports. In terms of your question relating to the NSO Group, in the spirit of the stepping stones we're talking about today, I think what that lawsuit really showed was that legal action here is possible. And in terms of having some optimism there, we hope that this will provide a bit of deterrence against the manner in which some of these surveillance-for-hire companies are operating. Mohamad also spoke about these being businesses, and so, again, we hope that this lawsuit has provided insight for investors who may be thinking about investing in this type of technology as well. So there are a couple of things that I think we can be optimistic about.
There were also a couple of things that we learned through the lawsuit. Firstly, we learned about NSO’s actual role in the data retrieval and delivery of the technology, which was sort of almost every part of it, so that was an interesting insight. And we also learned that WhatsApp were far from being the only ones targeted by the NSO group, so they spent tens of millions of dollars on malware installation across things like instant messaging, browsers, and sort of operating systems. In terms of sort of what we see as being needed next, of course it’s a really important step that we were able to take with the litigation, but we really need, and I think a lot of speakers here spoke about this, we need legal recourse to be accessible and attainable specifically for those who are targeted by these technologies. Elizabeth spoke very well about some of the controls and guardrails that are really needed for this industry. I think that’s pretty key as well, because we need something for these firms to be sort of accountable towards, and we also need to prevent these technologies from being misused in the first place.


Nighat Dad: Great. Thank you so much, Rima. So I’ll open the floor for questions. If you have questions, can you raise your hand, we’ll take two, three questions from the floor, and then please specify which speaker you want to ask questions to. There was one hand that I saw, yeah. So if you have a question, you have to go to the mic, which is there. We need to put this right here, yeah, okay. Hello, can you hear me? Yes. Yeah, thank you for the excellent panel.


Jennifer Brody: My name's Jennifer Brody, I'm with Freedom House. I have a question really for the panel, but specifically for David Kaye. You mentioned the importance of export controls. In my work on this topic, what seems to be the next step, the lowest-hanging fruit, is to help governments create enhanced human rights due diligence guides, essentially. It's something civil society supports, governments in theory want to get behind, and the quote-unquote good actors in industry are also keen on this work. So curious, David, if you have any comments. Also directed at Elizabeth Davies with the UK FCDO. Thank you.


David Kaye: Great, Jen, thanks for that question. It's a great question. So I agree with that. I mean, I think going towards very specific due diligence approaches can concretize what we're talking about and give a kind of checklist for governments to determine what is and is not legitimate. And also, to the extent that that due diligence can be transparent and widely shared, it enables governments to share that kind of information for civil society and other stakeholders to engage with. Getting to the title of this panel, I think it's going to be extremely important for that kind of due diligence to be widely shared outside of, you know, the Global North. And there are efforts, there's an African regional spyware initiative right now, that can be one kind of vector for getting that kind of information out and building that kind of capacity outside of the North, which could be really valuable. But I think due diligence like that is definitely important, particularly given that we'd be talking about fundamental human rights standards that should be applying here.


Elizabeth Davies: Sure, yeah. Just to come in on that as well: fundamentally, we would agree. I think this is one of the most obvious areas to focus on when it comes to concrete implementation of the code of practice. So we are planning to set up some working groups focused on particular areas of implementation that we think we can work on over the next years, and one of them will be focused on export controls. Because the flip side of human rights due diligence is also ensuring that national export control authorities fully understand what many of these tools are capable of, and therefore that they are asking the right questions when it comes to those human rights due diligence questions as well. It's a complicated area, and the technicalities of it, I think, are sometimes what tie everybody up in knots. So improving that human rights due diligence, and also just improving the application and enforcement of export controls across this space, is something that we really want to look at quite closely. So yeah, we will be welcoming lots of input as to how all of this can be applied.


Nighat Dad: Do we have any other questions from the floor? Maybe a comment or an addition, if there is no question, if you want to add to this debate. If not, I would like to share some findings. So, I run an organization called Digital Rights Foundation; we are based in Pakistan, working in South Asia, and now looking at the region. We are working on a series of regional scoping studies that will be released in the coming months, and this study basically explores what surveillance looks like in South Asia, starting from Pakistan, India, Sri Lanka, and Bangladesh. Apar, who is also a speaker here, is contributing to this study. Our aim is to uncover which cyber intrusion capabilities are available in our contexts, what risks they pose to privacy and digital rights, and what gaps exist in transparency and accountability. I'll briefly mention a few findings from our report. In Pakistan, we found that the Lawful Intercept Management System, which we call LIMS, is central to state surveillance. It is managed by a regulator and funded by telecom providers, and the system facilitates real-time access to messages, call logs, metadata, and even audio-video content. Shockingly, telecom providers are required to ensure surveillance capability for at least 2% of their customer base, which is around 4 million people; so 4 million people can be under surveillance at any given time in this country. I'll go to Sri Lanka. Among the findings in our research are the use of backdoors and unmonitored data transmission in devices provided by companies like Huawei. The Telecommunication Act gives sweeping interception powers to ministers, while no judicial approval is required. The lack of clear SOPs makes the entire process opaque and vulnerable to abuse. In India, while the landmark Puttaswamy judgment recognized privacy as a fundamental right, the passage of the Digital Personal Data Protection Act in 2023 marked a worrying regression.
The law fails to meaningfully protect citizens from surveillance. In Bangladesh, surveillance involves both traditional forms, such as physical tailing and wiretapping, as well as more sophisticated and intrusive methods. Some police officers routinely have access to and use surveillance technologies, especially during protests or socially disruptive events, to collect real-time information. The National Regulatory Commission and the National Telecommunication Monitoring Center both have the authority to collect data from telecom providers. In one example, authorities deliberately slowed mobile and broadband internet access to force citizens in Dhaka to use traditional network communications, which are easier to trace. So these are some of the examples and findings. They might not sound very sophisticated, because this work in the Global South is just starting, at least by civil society, while the states and actors who have the capacity to acquire these technologies are more advanced, and more advanced not only in using these technologies but in resources. And I think that is the worrying trend: we are way behind in these conversations around accountability and transparency, and we really have no means of holding powerful actors accountable, of knowing how to hold them accountable, how we can use these international processes, and what these processes mean for us. There are several initiatives going on. Civil society is building its own capacity: SMEX and DRF are building emerging threat labs, building our own capacity to support victims and survivors who, when they find that they are being surveilled or that sophisticated spyware has been used against them, have nowhere else to turn. And then they come to civil society helplines or digital security help desks to seek guidance or support. And that's where our role comes in. But we also need support in terms of building our capacity.
So what we are trying to do is bridge the gap in knowledge between Global North and Global South organizations. Global North organizations doing these investigations have really advanced knowledge, but the transfer of that knowledge is happening at a very slow pace. So we are trying to build our own knowledge and capacity so that, on the ground, we can provide support to victims and survivors as first responders. I would also like to mention the Spyware Accountability Initiative, which is focused on supporting Global South organizations in holding spyware companies and governments accountable in their own contexts. It's a very interesting initiative in supporting civil society and other actors trying to make spyware accountability work possible in their own contexts. So please, anyone who wants to start.


Ana Gaitan: I can start. It's not going to be just one point, but I'm going to try to synthesize. I agree with what was mentioned about the necessity of having national legal frameworks for implementation, not just soft law, because that way it can be legally binding. But we also need to contextualize this for the Global South and, as we were discussing, actually look at what happens in countries of the Global South regarding corruption, collusion with organized crime, and impunity, and how these obstacles relate to our lack of access to justice, accountability, and transparency. I think that is very important for us to connect. And, as Nighat was mentioning, this is not only about the targeting of human rights defenders and journalists in public interest matters that affect democracies; we are also seeing a global trend where mass surveillance is happening everywhere. For example, in Latin American countries, there are now many centralized data registries being interconnected that allow everyone to be massively surveilled. So we have to establish that it is not only Pegasus and certain types of spyware: there are several surveillance technologies being implemented that are going to affect every citizen around the world. So that's it.


Nighat Dad: Rima, can you continue?


Rima Amin: Sure. I think this is a global threat, and it cuts across different spaces. So I think really working together to drive initiatives like the Pall Mall process is going to be super important in making sure that they expand. A couple of key areas: first, driving to make sure the controls and guardrails are in place, both for the companies and also for the customers themselves, and making sure that human rights due diligence and transparency are there too. The second piece is really ensuring that remediation for targets is possible, and that the ability to drive accountability and legal action as needed is there too. Because unless you have that second piece, the first piece around guardrails completely falls apart.


David Kaye: Sure, so I think there are maybe two things that we need to be thinking about here. I mean, there are a million things to be thinking about, but one is whether we've left open, or are leaving open, too many gaps. We live in an era where states are just driving trucks through any small space that they can carve out to do what they want to do. And so I'm particularly concerned about discussing the spyware industry and its legitimacy when in fact we're talking about an industry that is essentially performing governmental functions when it shouldn't be. So when we have things like the European Union's Media Freedom Act, which actually carves out a little bit of space for the use of spyware against journalism, that's a huge problem for us. And that's a huge problem not only for European journalists, but also for the message it sends to the rest of the world. So if I were looking generically at one thing that we should be focusing on, it's really narrowing as much as possible any scope for the use of these tools at all, if not banning them, which seems not to be particularly on the table right now, but ensuring that that space is really not available for the use of these tools by states.


Nighat Dad: Najem, you have 20 seconds, and then 20 seconds each to Elizabeth and Apar.


Mohamad Najem: Oh my god, my turn, okay. Briefly, I just want to say that, given what we've seen in the last few years in terms of war in my region, I really think we need to think about digital rights everywhere, human rights everywhere. In the opening ceremony today, I saw an interesting case from Ukraine. A gentleman was speaking about how successful it was to regain access to telecoms and how using Starlink helped them a lot with their communication. It's interesting, and I really admire this experience, but from the other angle, Starlink has not been used in Gaza, for example. So we really need to think about how we can ensure human rights everywhere and digital rights everywhere, and how we can treat everybody equally in having access to the same telecommunication tools.


Nighat Dad: Thank you. Elizabeth, you, and then Apar. We have only 25 seconds left.


Elizabeth Davies: Okay, I will be very quick. One, as we said, as a stepping stone, the actual comprehensive and thorough implementation of the code of practice is vital, and following through on that. But also, particularly in the spirit of this panel, I think it's vital to ensure that we don't become a Global North initiative that is only talking to companies and countries based in the Global North. Otherwise we're not going to have the global impact that we need.


Nighat Dad: Yeah, Apar, very quickly.


Apar Gupta: I think that notification is something which needs to be universalised across platforms, especially for people in the Global South. The second thing is the ability for victims to reach out to organisations to have their devices tested, and access to the methodology, given that there are very high barriers where evidence is tested in their domestic jurisdictions. That capacity and that safety need to be encouraged beyond the four or five organisations which currently do it.


Nighat Dad: Thank you so much. Thank you, everyone, and thank you to our speakers. I would like to give a shout-out to Jennifer Brody from Freedom House, who has been doing a lot of work on this. We have so many allies in the audience and among our speakers. Thank you so much. This is just the beginning of this debate. Please keep talking about this issue throughout the IGF and beyond. Thank you.


A

Ana Gaitan

Speech speed

140 words per minute

Speech length

775 words

Speech time

330 seconds

Security narratives used to justify surveillance while actually targeting dissidents in contexts of impunity and corruption

Explanation

Latin American governments exploit security crises to make citizens believe giving up privacy is necessary for protection from criminals. However, these narratives are actually used to criminalize citizens in contexts of high impunity, corruption, and collusion with organized crime.


Evidence

Rather than providing security, surveillance powers are abused to target human rights defenders and journalists in legacies of past military dictatorships and systemic human rights violations


Major discussion point

Global South Surveillance Patterns and Context


Topics

Human rights | Cybersecurity | Legal and regulatory


Agreed with

– David Kaye
– Elizabeth Davies

Agreed on

Implementation and enforcement are more important than soft law standards


Military control of surveillance in Mexico targeting human rights defenders investigating army abuses, with complete opacity

Explanation

Mexico has maintained a military approach to public security for over 15 years, granting constitutionally prohibited powers to the military. The army systematically abuses surveillance technologies to interfere with investigations of their own human rights abuses, operating with complete opacity and impunity.


Evidence

Many Pegasus infections occurred when victims were investigating human rights violations by armed forces. Undersecretary Encinas was targeted while leading the Truth Commission for 43 disappeared Ayotzinapa students, and Centro Pro was targeted for representing victims’ families. Guacamaya hacking revealed surveillance activities mainly target civil organizations, human rights defenders, activists, and journalists classified as ‘pressure groups’


Major discussion point

Global South Surveillance Patterns and Context


Topics

Human rights | Cybersecurity | Legal and regulatory


Criminal complaints in Mexico obstructed by authorities claiming no documentation of Pegasus targeting exists

Explanation

Despite multiple criminal complaints filed by Pegasus victims in 2017, 2022, and 2023 for illegal interception and computer system access, authorities consistently obstruct justice. They claim no database or formal documentation exists regarding persons targeted by Pegasus, preventing accountability.


Evidence

Surveilled victims, mainly human rights defenders and journalists, filed complaints with the Special Prosecutor’s Office for Crimes Against Freedom of Expression, but investigations have been obstructed by authorities under scrutiny


Major discussion point

Accountability and Legal Remedy Challenges


Topics

Human rights | Legal and regulatory | Cybersecurity


Agreed with

– Apar Gupta
– Mohamad Najem
– Nighat Dad

Agreed on

Lack of accountability and transparency mechanisms enables surveillance abuse


A

Apar Gupta

Speech speed

142 words per minute

Speech length

892 words

Speech time

375 seconds

Post-colonial telecommunications laws in South Asia enable secretive executive surveillance without judicial oversight

Explanation

South Asian telecommunications laws stem from post-colonial legacy where the state had absolute control over spectrum and airwaves. This has created an opaque, secretive system with no judicial sanction requirements and no independent parliamentary oversight, centralizing surveillance powers within the executive branch.


Evidence

The Pegasus revelations included 38 prominent journalists critical of government, opposition leaders, activists, and even a sitting Supreme Court judge, showing that democratic functionaries meant to keep the state honest are themselves victims


Major discussion point

Global South Surveillance Patterns and Context


Topics

Legal and regulatory | Human rights | Cybersecurity


Agreed with

– Ana Gaitan
– Mohamad Najem
– Rima Amin

Agreed on

Surveillance is used to target human rights defenders and journalists rather than legitimate security threats


Indian Supreme Court expert committee findings on Pegasus remain secret even from petitioners whose devices were examined

Explanation

While an expert committee was established by the Indian Supreme Court to examine Pegasus cases, its findings have not been made public. Even the petitioners who submitted their phones to the committee for examination do not have access to the report, demonstrating lack of transparency in judicial processes.


Evidence

The case was filed in September 2021 but was not posted for active hearing for two years (2023-2025). The fight now is to get the expert committee’s determinations made public or at least available to the petitioners whose phones were examined


Major discussion point

Accountability and Legal Remedy Challenges


Topics

Legal and regulatory | Human rights | Cybersecurity


Agreed with

– Ana Gaitan
– Mohamad Najem
– Nighat Dad

Agreed on

Lack of accountability and transparency mechanisms enables surveillance abuse


Institutional limitations in South Asian courts and parliaments prevent effective remedy for spyware victims

Explanation

While it’s essential for people in South Asia to engage with courts and Parliament, there are significant limitations in these institutional processes regarding their autonomy and ability to provide remedy to victims. The repeated instances of spyware use highlight foundational deficiencies in legal systems and institutional frameworks.


Evidence

Apple notifications in October 2023 included scores of Indian MPs and reporters of state-sponsored attacks on iPhones with echoes of Pegasus, showing the problem extends beyond one specific company or software


Major discussion point

Accountability and Legal Remedy Challenges


Topics

Legal and regulatory | Human rights | Cybersecurity


Need for victims to access device testing methodology given high barriers in domestic jurisdictions

Explanation

There are very high barriers for evidence testing in domestic jurisdictions, making it difficult for spyware victims to get their devices properly examined. The capacity and safety for device testing needs to be encouraged beyond the current four or five organizations that provide this service.


Major discussion point

Capacity Building and Knowledge Transfer


Topics

Cybersecurity | Human rights | Development


Victim notification by platforms should be universalized, especially for Global South users

Explanation

Platforms should universalize victim notification systems, particularly for people in the Global South who may have fewer resources and support systems when targeted by surveillance. This is essential for enabling the right to remedy for spyware victims.


Major discussion point

Private Sector Role and Litigation


Topics

Human rights | Cybersecurity | Legal and regulatory


Agreed with

– Nighat Dad
– Rima Amin

Agreed on

Need for victim notification and support systems


M

Mohamad Najem

Speech speed

151 words per minute

Speech length

1080 words

Speech time

427 seconds

Arab Spring backlash led to dozens of restrictive cybercrime laws and massive surveillance expansion across MENA region

Explanation

After the Arab Spring in 2010-2011, authoritarian regimes across the MENA region collaborated to close civic space. They moved from almost zero online regulations to dozens of cybercrime and freedom of expression laws by 2015, all designed to limit online speech and leading to people receiving 10-15 year prison sentences for online expression.


Evidence

The tech space was relatively open before the Arab Spring, but after seeing the results, governments from the Gulf, Egypt and others came together to systematically close civic space using surveillance as a key tactic


Major discussion point

Global South Surveillance Patterns and Context


Topics

Human rights | Legal and regulatory | Cybersecurity


Agreed with

– Ana Gaitan
– Apar Gupta
– Rima Amin

Agreed on

Surveillance is used to target human rights defenders and journalists rather than legitimate security threats


Gulf countries developing local surveillance software as both business venture and geopolitical tool

Explanation

Gulf countries have moved beyond purchasing surveillance tools to developing their own local software solutions. They view this not only as a business opportunity to make money but also as a way to enhance their geopolitical presence and power projection in the region.


Evidence

UAE has been providing their surveillance software to the Rapid Support Forces (RSF) in Sudan. Similar activities have been documented in Egypt and other countries, with extensive reporting by New York Times and other outlets on countries like Morocco and Saudi Arabia developing local solutions


Major discussion point

Global South Surveillance Patterns and Context


Topics

Cybersecurity | Economic | Legal and regulatory


Surveillance extends beyond enemies to friends and allies for intelligence gathering purposes

Explanation

Countries in the MENA region are conducting surveillance not only on activists and enemies but also on other politicians, friends, and allies. This demonstrates that surveillance is being used for broader intelligence gathering to uncover secrets and understand what others are doing, even within friendly relationships.


Evidence

Multiple cases have been discovered showing these countries doing surveillance on each other, not just on their enemies or activists, but on their friends and cousins to catch their secrets


Major discussion point

Global South Surveillance Patterns and Context


Topics

Cybersecurity | Human rights | Legal and regulatory


No accountability even for major crimes like Khashoggi case despite known spyware use

Explanation

The murder of Jamal Khashoggi, where spyware was used, demonstrates the complete lack of accountability in the region. This case shows that the problem extends far beyond spyware to encompass broader issues of impunity for serious crimes, creating an environment where surveillance abuse thrives.


Evidence

Everyone saw the Khashoggi case happen with documented spyware use, yet there was no accountability, showing the problem is much bigger than just spyware accountability


Major discussion point

Accountability and Legal Remedy Challenges


Topics

Human rights | Legal and regulatory | Cybersecurity


Agreed with

– Ana Gaitan
– Apar Gupta
– Nighat Dad

Agreed on

Lack of accountability and transparency mechanisms enables surveillance abuse


Self-censorship increases due to lack of transparency and accountability mechanisms

Explanation

The absence of accountability and transparency in surveillance practices has led to widespread self-censorship among citizens. People modify their behavior and limit their expression because they know they are being watched and that there are no consequences for those conducting surveillance.


Major discussion point

Accountability and Legal Remedy Challenges


Topics

Human rights | Sociocultural | Legal and regulatory


Digital rights must be considered universally and equally across all regions and conflicts

Explanation

There is a need to think about digital rights and human rights everywhere equally, without discrimination based on geography or political considerations. The differential treatment of telecommunications access in different conflict zones demonstrates the need for universal application of digital rights principles.


Evidence

Starlink was successfully used in Ukraine to regain telecom access and communication, which was admirable, but the same technology has not been made available in Gaza, showing unequal treatment


Major discussion point

Capacity Building and Knowledge Transfer


Topics

Human rights | Infrastructure | Development


N

Nighat Dad

Speech speed

141 words per minute

Speech length

2146 words

Speech time

907 seconds

Pakistan’s LIMS system enables surveillance of 4 million people simultaneously through telecom infrastructure

Explanation

Pakistan’s Lawful Intercept Management System (LIMS) is managed by regulators and funded by telecom providers, facilitating real-time access to messages, call logs, metadata, and audio-video content. The system requires telecom providers to ensure surveillance capacity for at least 2% of their customer base, meaning 4 million people can be under surveillance simultaneously.


Evidence

The LIMS system is central to state surveillance in Pakistan and provides comprehensive access to communications data and content


Major discussion point

Global South Surveillance Patterns and Context


Topics

Cybersecurity | Human rights | Infrastructure


Bangladesh authorities deliberately slowed internet to force citizens onto traceable traditional networks

Explanation

In Bangladesh, authorities strategically slowed mobile and broadband internet access to force citizens in Dhaka to use traditional network communications, which are easier to trace and monitor. This demonstrates how infrastructure manipulation can be used as a surveillance tactic.


Evidence

This tactic was used during protests or socially disruptive events to collect real-time information, with both the National Regulatory Commission and the National Telecommunication Monitoring Center having authority to collect data without cooperation from telecom providers


Major discussion point

Global South Surveillance Patterns and Context


Topics

Infrastructure | Cybersecurity | Human rights


Surveillance accountability becomes taboo issue in South Asia due to geopolitical tensions and state backlash

Explanation

In South Asia, especially given ongoing geopolitical conflicts, raising issues around spyware and surveillance accountability has become increasingly difficult for civil society. When organizations mention accountability for spyware technologies, they face backlashes from various segments of the state and other actors, making it a taboo subject.


Major discussion point

Accountability and Legal Remedy Challenges


Topics

Human rights | Legal and regulatory | Cybersecurity


Agreed with

– Ana Gaitan
– Apar Gupta
– Mohamad Najem

Agreed on

Lack of accountability and transparency mechanisms enables surveillance abuse


Global South organizations building emerging threat labs to support surveillance victims as first responders

Explanation

Organizations like SMEX and Digital Rights Foundation are building emerging threat labs to develop capacity for supporting victims and survivors of surveillance. When people discover they are being surveilled by sophisticated spyware, they often have nowhere to turn except civil society helplines and digital security help desks.


Evidence

These organizations serve as first responders when victims find they are being surveilled, as they are often at the mercy of no one else and come to civil society for guidance and support


Major discussion point

Capacity Building and Knowledge Transfer


Topics

Development | Cybersecurity | Human rights


Agreed with

– Apar Gupta
– Rima Amin

Agreed on

Need for victim notification and support systems


Knowledge transfer from Global North to Global South organizations happening at slow pace

Explanation

While Global North organizations have advanced knowledge and expertise in surveillance investigations, the transfer of this knowledge to Global South organizations is occurring very slowly. This creates a gap where Global South organizations need to build their own knowledge and capacity to provide ground-level support to victims.


Evidence

Global North organizations have really good advanced knowledge in conducting surveillance investigations, but the transfer of that knowledge is happening at a very slow pace


Major discussion point

Capacity Building and Knowledge Transfer


Topics

Development | Cybersecurity | Human rights


E

Elizabeth Davies

Speech speed

186 words per minute

Speech length

1793 words

Speech time

577 seconds

Pall Mall process aims to set rules for commercial cyber intrusion market through multi-stakeholder approach

Explanation

The Pall Mall process, launched by the UK and France in February 2024, is a multi-stakeholder international initiative addressing the global threat from the rapidly growing commercial cyber intrusion capabilities market. It aims to set rules of the road for different actors across the market through shared responsibility, focusing on accountability, oversight, precision, and transparency.


Evidence

The process looks broadly across the commercial cyber intrusion market, including supply chain elements like vulnerability and export marketplaces, because no one element can be tackled in isolation


Major discussion point

International Initiatives and Export Controls


Topics

Legal and regulatory | Cybersecurity | Human rights


Disagreed with

– David Kaye

Disagreed on

Scope of surveillance regulation – narrow vs. comprehensive approach


Code of practice for states achieved support from 24 countries but implementation is crucial next step

Explanation

The Pall Mall process developed a code of practice for states containing detailed recommendations under four pillars, achieved through extensive multi-stakeholder negotiation. While 24 states have formally supported it, including Ghana as the first Global South supporter, implementation of these commitments is the critical next step.


Evidence

The code of practice was developed using findings from multi-stakeholder consultation and covers accountability, oversight, precision, and transparency pillars


Major discussion point

International Initiatives and Export Controls


Topics

Legal and regulatory | Cybersecurity | Human rights


Agreed with

– Ana Gaitan
– David Kaye

Agreed on

Implementation and enforcement are more important than soft law standards


Export control authorities need better understanding of surveillance tool capabilities

Explanation

National export control authorities need to fully understand what surveillance tools are capable of so they can ask the right questions when conducting human rights due diligence. The technical complexities of these tools often tie authorities up in knots, making proper assessment difficult.


Major discussion point

International Initiatives and Export Controls


Topics

Legal and regulatory | Cybersecurity | Human rights


Comprehensive implementation of international standards vital to avoid becoming Global North-only initiative

Explanation

The Pall Mall process must ensure it doesn’t become a Global North initiative that only talks to companies and countries based in the Global North. Without global participation and implementation, the initiative won’t achieve the global impact needed to address this worldwide threat.


Major discussion point

Capacity Building and Knowledge Transfer


Topics

Legal and regulatory | Development | Human rights


J

Jennifer Brody

Speech speed

135 words per minute

Speech length

103 words

Speech time

45 seconds

Need for human rights due diligence in export controls as lowest hanging fruit for progress

Explanation

Enhanced human rights due diligence guides for governments represent the most achievable next step in surveillance accountability. This approach has support from civil society, government backing in theory, and interest from good actors in industry, making it a practical starting point for concrete progress.


Major discussion point

International Initiatives and Export Controls


Topics

Human rights | Legal and regulatory | Cybersecurity


R

Rima Amin

Speech speed

133 words per minute

Speech length

659 words

Speech time

297 seconds

Meta disrupted over 20 surveillance-for-hire firms targeting people across 200 countries

Explanation

Meta has investigated and disrupted operations from over 20 surveillance-for-hire firms that have targeted people across 200 countries, with many targets based in the Global South. Despite claims these tools target criminals and terrorists, investigations show regular targeting of dissidents, critics of authoritarian regimes, families of opposition, and human rights activists.


Evidence

Meta’s teams focus on investigating threats, disrupting them, notifying victims, working with civil society organizations, sharing intelligence across platforms, and releasing information through adversarial threat reports


Major discussion point

Private Sector Role and Litigation


Topics

Cybersecurity | Human rights | Legal and regulatory


Agreed with

– Ana Gaitan
– Apar Gupta
– Mohamad Najem

Agreed on

Surveillance is used to target human rights defenders and journalists rather than legitimate security threats


NSO lawsuit revealed company’s extensive role in data retrieval and tens of millions spent on malware

Explanation

The WhatsApp lawsuit against NSO Group revealed important insights about the company’s operations, including NSO’s actual role in almost every part of data retrieval and delivery of the technology. The lawsuit also showed that WhatsApp was far from the only target, with NSO spending tens of millions of dollars on malware installation across various platforms.


Evidence

NSO spent tens of millions of dollars on malware installation across instant messaging, browsers, and operating systems, showing the scope of their operations beyond just WhatsApp


Major discussion point

Private Sector Role and Litigation


Topics

Cybersecurity | Legal and regulatory | Human rights


Legal recourse must be accessible specifically for those targeted by surveillance technologies

Explanation

While the NSO lawsuit was an important step, legal recourse needs to be accessible and attainable specifically for those who are targeted by surveillance technologies. The current system makes it difficult for actual victims to seek justice, requiring reforms to make legal remedies more available to those most affected.


Major discussion point

Private Sector Role and Litigation


Topics

Legal and regulatory | Human rights | Cybersecurity


Agreed with

– Apar Gupta
– Nighat Dad

Agreed on

Need for victim notification and support systems


D

David Kaye

Speech speed

142 words per minute

Speech length

1173 words

Speech time

495 seconds

Sanctioning bad actors has real impact on their ability to operate effectively

Explanation

While high-level soft law is important, the actual sanctioning of bad actors has a very strong impact on their ability to continue operating. The Biden administration’s approach of sanctioning surveillance companies demonstrates that concrete enforcement actions are more effective than just establishing standards.


Major discussion point

International Initiatives and Export Controls


Topics

Legal and regulatory | Cybersecurity | Human rights


Agreed with

– Ana Gaitan
– Elizabeth Davies

Agreed on

Implementation and enforcement are more important than soft law standards


WhatsApp lawsuit against NSO resulted in $168 million award demonstrating legal action is possible

Explanation

The WhatsApp/Meta lawsuit against NSO Group resulted in a $168 million award in U.S. court, showing that litigation can be an effective tool for moving from soft law standards to actual implementation. While it’s harder to pursue litigation against states due to sovereign immunity, private company litigation provides important pressure on bad actors.


Evidence

There’s variation in different jurisdictions – it’s harder to do this kind of litigation in the United States than in the UK, for example, but the pressure from such litigation is absolutely essential


Major discussion point

Private Sector Role and Litigation


Topics

Legal and regulatory | Cybersecurity | Human rights


Venice Commission issued strict rules for law enforcement spyware use that should apply to export controls

Explanation

The Venice Commission issued a report in December with very strict rules for law enforcement’s use of spyware tools in domestic settings. These domestic standards should be used to apply to global export controls, ensuring that the same strict standards required internally are also applied when considering what tools can be exported globally.


Evidence

The Venice Commission report details significant rules that should be applied by states, and there has been significant movement in recent years including the European Parliament’s PEGA committee process


Major discussion point

International Initiatives and Export Controls


Topics

Legal and regulatory | Human rights | Cybersecurity


Standards must be narrowed to prevent states from exploiting gaps in regulations

Explanation

There’s concern about leaving too many gaps that states can exploit, as they tend to drive trucks through any small space carved out for surveillance use. The focus should be on narrowing as much as possible any scope for use of these tools, if not banning them entirely, to prevent abuse.


Evidence

The European Union’s Media Freedom Act carves out space for spyware use against journalism, which is problematic not only for European journalists but also for the message it sends to the rest of the world


Major discussion point

International Initiatives and Export Controls


Topics

Legal and regulatory | Human rights | Cybersecurity


Disagreed with

– Elizabeth Davies

Disagreed on

Legitimacy of surveillance tools usage


Agreements

Agreement points

Surveillance is used to target human rights defenders and journalists rather than legitimate security threats

Speakers

– Ana Gaitan
– Apar Gupta
– Mohamad Najem
– Rima Amin

Arguments

Security narratives used to justify surveillance while actually targeting dissidents in contexts of impunity and corruption


Post-colonial telecommunications laws in South Asia enable secretive executive surveillance without judicial oversight


Arab Spring backlash led to dozens of restrictive cybercrime laws and massive surveillance expansion across MENA region


Meta disrupted over 20 surveillance-for-hire firms targeting people across 200 countries


Summary

All speakers agree that surveillance technologies are systematically misused to target civil society actors, journalists, and human rights defenders under the guise of security, rather than being used for legitimate law enforcement purposes


Topics

Human rights | Cybersecurity | Legal and regulatory


Lack of accountability and transparency mechanisms enables surveillance abuse

Speakers

– Ana Gaitan
– Apar Gupta
– Mohamad Najem
– Nighat Dad

Arguments

Criminal complaints in Mexico obstructed by authorities claiming no documentation of Pegasus targeting exists


Indian Supreme Court expert committee findings on Pegasus remain secret even from petitioners whose devices were examined


No accountability even for major crimes like Khashoggi case despite known spyware use


Surveillance accountability becomes taboo issue in South Asia due to geopolitical tensions and state backlash


Summary

Speakers consistently highlight how authorities obstruct investigations, withhold information, and prevent accountability mechanisms from functioning effectively, creating an environment of impunity for surveillance abuse


Topics

Legal and regulatory | Human rights | Cybersecurity


Need for victim notification and support systems

Speakers

– Apar Gupta
– Nighat Dad
– Rima Amin

Arguments

Victim notification by platforms should be universalized, especially for Global South users


Global South organizations building emerging threat labs to support surveillance victims as first responders


Legal recourse must be accessible specifically for those targeted by surveillance technologies


Summary

There is consensus that victims of surveillance need better notification systems, support mechanisms, and accessible legal remedies, with particular emphasis on supporting Global South victims who have fewer resources


Topics

Human rights | Cybersecurity | Development


Implementation and enforcement are more important than soft law standards

Speakers

– Ana Gaitan
– David Kaye
– Elizabeth Davies

Arguments

Security narratives used to justify surveillance while actually targeting dissidents in contexts of impunity and corruption


Sanctioning bad actors has real impact on their ability to operate effectively


Code of practice for states achieved support from 24 countries but implementation is crucial next step


Summary

Speakers agree that while international standards and codes of practice are important, the critical challenge is ensuring proper implementation and enforcement rather than just creating more soft law instruments


Topics

Legal and regulatory | Human rights | Cybersecurity


Similar viewpoints

All three speakers describe how historical legacies (military dictatorships, colonial laws, authoritarian backlash) create structural conditions that enable surveillance abuse in their respective regions

Speakers

– Ana Gaitan
– Apar Gupta
– Mohamad Najem

Arguments

Military control of surveillance in Mexico targeting human rights defenders investigating army abuses, with complete opacity


Post-colonial telecommunications laws in South Asia enable secretive executive surveillance without judicial oversight


Arab Spring backlash led to dozens of restrictive cybercrime laws and massive surveillance expansion across MENA region


Topics

Legal and regulatory | Human rights | Cybersecurity


Both speakers view the WhatsApp/Meta lawsuit against NSO as a significant precedent demonstrating that legal action against surveillance companies can be effective and revealing important information about their operations

Speakers

– David Kaye
– Rima Amin

Arguments

WhatsApp lawsuit against NSO resulted in $168 million award demonstrating legal action is possible


NSO lawsuit revealed company’s extensive role in data retrieval and tens of millions spent on malware


Topics

Legal and regulatory | Cybersecurity | Human rights


Both speakers recognize the critical need to bridge the gap between Global North and Global South in surveillance accountability efforts, emphasizing the importance of inclusive approaches and knowledge sharing

Speakers

– Nighat Dad
– Elizabeth Davies

Arguments

Knowledge transfer from Global North to Global South organizations happening at slow pace


Comprehensive implementation of international standards vital to avoid becoming Global North-only initiative


Topics

Development | Human rights | Cybersecurity


Unexpected consensus

Surveillance as a business model requiring market-based solutions

Speakers

– Mohamad Najem
– Elizabeth Davies
– Rima Amin

Arguments

Gulf countries developing local surveillance software as both business venture and geopolitical tool


Pall Mall process aims to set rules for commercial cyber intrusion market through multi-stakeholder approach


Meta disrupted over 20 surveillance-for-hire firms targeting people across 200 countries


Explanation

There was unexpected consensus that surveillance should be understood and addressed as a commercial market with business incentives, requiring market-based interventions rather than just human rights approaches. This business perspective was shared across civil society, government, and private sector speakers


Topics

Economic | Cybersecurity | Legal and regulatory


Need for technical capacity building in Global South

Speakers

– Apar Gupta
– Nighat Dad
– Rima Amin

Arguments

Need for victims to access device testing methodology given high barriers in domestic jurisdictions


Global South organizations building emerging threat labs to support surveillance victims as first responders


Meta disrupted over 20 surveillance-for-hire firms targeting people across 200 countries


Explanation

Unexpectedly, there was strong consensus across civil society and private sector that technical capacity building for device testing and threat detection in the Global South is a priority, suggesting alignment between advocacy and industry perspectives on practical support needs


Topics

Development | Cybersecurity | Human rights


Overall assessment

Summary

Strong consensus exists on surveillance abuse patterns, accountability failures, and the need for victim support, with unexpected alignment on treating surveillance as a business requiring market interventions and technical capacity building priorities


Consensus level

High level of consensus with significant implications for coordinated action. The agreement spans civil society, government, and private sector perspectives, suggesting potential for unified approaches to surveillance accountability that combine human rights advocacy with market-based interventions and technical capacity building in the Global South


Differences

Different viewpoints

Scope of surveillance regulation – narrow vs. comprehensive approach

Speakers

– David Kaye
– Elizabeth Davies

Arguments

Standards must be narrowed to prevent states from exploiting gaps in regulations


Pall Mall process aims to set rules for commercial cyber intrusion market through multi-stakeholder approach


Summary

David Kaye advocates for narrowing scope as much as possible or banning surveillance tools entirely to prevent state abuse, while Elizabeth Davies promotes a broader multi-stakeholder approach that acknowledges legitimate uses with proper safeguards


Topics

Legal and regulatory | Human rights | Cybersecurity


Legitimacy of surveillance tools usage

Speakers

– David Kaye
– Elizabeth Davies

Arguments

Standards must be narrowed to prevent states from exploiting gaps in regulations


The UK government does believe there are legitimate uses of these tools, for example in defensive cybersecurity operations, national security, and law enforcement


Summary

David Kaye is concerned about any carve-outs for legitimate use as they create exploitable gaps, while Elizabeth Davies explicitly acknowledges legitimate uses for national security and law enforcement with proper oversight


Topics

Legal and regulatory | Human rights | Cybersecurity


Unexpected differences

Universal application of digital rights across conflicts

Speakers

– Mohamad Najem

Arguments

Digital rights must be considered universally and equally across all regions and conflicts


Explanation

Mohamad Najem’s critique of differential treatment of telecommunications access (Starlink in Ukraine vs. Gaza) was unexpected as it introduced geopolitical considerations that other speakers didn’t address, suggesting disagreement with selective application of digital rights principles


Topics

Human rights | Infrastructure | Development


Overall assessment

Summary

The discussion showed remarkable consensus among Global South speakers on surveillance abuse patterns and accountability challenges, with main disagreements occurring between Global North and Global South perspectives on regulatory approaches


Disagreement level

Low to moderate disagreement level. Most disagreements were about implementation methods rather than fundamental goals. The strongest disagreement was between David Kaye’s restrictive approach and Elizabeth Davies’ multi-stakeholder approach to surveillance regulation. Global South speakers showed strong alignment on problems but varied approaches to solutions, suggesting the need for diverse, context-specific strategies rather than one-size-fits-all solutions.



Takeaways

Key takeaways

Surveillance in the Global South operates in contexts of weak legal safeguards, corruption, and impunity, with states using security narratives to justify targeting human rights defenders and journalists rather than actual criminals


The surveillance industry has become a profitable business venture for authoritarian regimes, with Gulf countries developing local spyware capabilities for both domestic control and geopolitical influence


International accountability mechanisms face significant limitations in Global South contexts, with court cases stalled, evidence withheld, and institutional remedies proving inadequate


The Pall Mall process represents progress in establishing international standards but requires meaningful Global South participation and concrete implementation rather than just soft law commitments


Legal action against surveillance companies (like Meta’s $168 million award against NSO) demonstrates that accountability is possible and can create deterrent effects


Massive surveillance infrastructure affects entire populations, not just targeted individuals, with systems like Pakistan’s LIMS monitoring 4 million people simultaneously


Knowledge and capacity gaps between Global North and South organizations hinder effective response to surveillance threats, requiring enhanced cooperation and resource sharing


Resolutions and action items

Pall Mall process to establish working groups focused on export control implementation and human rights due diligence guidelines


Development of enhanced human rights due diligence guides for governments as immediate actionable step


UK announcement of Common Good Cyber Fund to support civil society actors at high risk of digital transnational repression


Global South organizations building emerging threat labs and first responder capabilities for surveillance victims


Spyware Accountability Initiative focusing on supporting Global South organizations and governments in their accountability efforts


Continued victim notification by platforms, especially for Global South users, and intelligence sharing across platforms


Implementation of Venice Commission’s strict rules for law enforcement spyware use in domestic and export control contexts


Unresolved issues

How to ensure meaningful Global South participation in international processes like Pall Mall beyond tokenistic inclusion


Whether domestic surveillance standards will actually be applied to global export controls by European and other governments


How to address the fundamental legitimacy question of whether private companies should perform governmental surveillance functions at all


How to overcome institutional limitations in Global South courts and parliaments that prevent effective remedy for surveillance victims


How to accelerate knowledge transfer from Global North to Global South organizations working on surveillance accountability


How to address the business incentives driving the surveillance industry while authoritarian regimes profit from both domestic use and international sales


How to ensure universal application of digital rights principles across different geopolitical contexts and conflicts


Suggested compromises

Treating Pall Mall process and similar initiatives as ‘stepping stones’ rather than final solutions, with emphasis on concrete implementation over high-level commitments


Focusing on ‘lowest hanging fruit’ like human rights due diligence guidelines that have support from civil society, governments, and responsible industry actors


Narrowing the scope for legitimate use of surveillance tools as much as possible, even if complete bans are not politically feasible


Combining multiple approaches including export controls, litigation, domestic constraints, and international standards rather than relying on any single mechanism


Balancing legitimate national security and law enforcement needs with strict oversight, judicial approval, and human rights safeguards


Supporting both international standard-setting processes and local capacity building for Global South organizations simultaneously


Thought provoking comments

However, the reality is that these narratives are actually being used to criminalize citizens in contexts usually characterized by high rates of impunity, corruption, and collusion with organized crime. Thus, rather than giving us more security, surveillance powers in Latin America are abused to target human rights defenders and journalists, in legacies of past military dictatorships and systemic human rights violations where the rule has been to control, repress, and censor all dissent.

Speaker

Ana Gaitan


Reason

This comment reframes the entire surveillance debate by exposing the false security-privacy trade-off narrative used by governments. It reveals how surveillance is weaponized against the very people it claims to protect, particularly in post-authoritarian contexts with weak institutions.


Impact

This established a critical framework that subsequent speakers built upon, shifting the discussion from technical aspects of spyware to the broader political and historical context of surveillance abuse in the Global South.


Therefore, it’s not a hypothetical threat, and it is not a threat which is individualized, but a societal threat to democratic systems which are already under strain and to rule of law which exists inconsistently in countries in South Asia.

Speaker

Apar Gupta


Reason

This comment elevates the discussion from individual privacy concerns to systemic democratic threats, emphasizing how spyware attacks the foundational institutions of democracy itself in fragile political systems.


Impact

This broadened the scope of the conversation to include institutional vulnerability and democratic backsliding, influencing later discussions about the need for structural reforms rather than just technical solutions.


So, this kind of regulation affected the space a lot, and we started seeing a lot of people going to jail for, like, 10 years, 15 years, for things they have said online… And one thing that we don’t talk a lot about in our communities is they’re also doing surveillance on their friends and allies… They’re not only doing surveillance on their enemies, on activists, but also on other politicians, on their friends, on their cousins.

Speaker

Mohamad Najem


Reason

This reveals the comprehensive nature of authoritarian surveillance that extends beyond traditional targets to include allies and family members, showing how surveillance creates a climate of total mistrust and control.


Impact

This comment introduced a new dimension to the discussion about the psychological and social impacts of surveillance, moving beyond the typical focus on journalists and activists to show how surveillance affects entire social networks.


We need to understand this as advocates of digital rights. It’s a business decision as well for them… A lot of these countries are making these softwares to make money.

Speaker

Mohamad Najem


Reason

This insight reframes surveillance from a purely political tool to a commercial enterprise, revealing how authoritarian regimes are monetizing oppression and creating new revenue streams from surveillance technology.


Impact

This business perspective influenced later speakers to discuss market incentives and economic deterrents, leading to conversations about litigation, sanctions, and financial accountability as tools for change.


Shockingly, telecom providers are required to ensure surveillance coverage of at least 2% of their customer base, which is around 4 million people, so 4 million people are under surveillance at any given time in this country.

Speaker

Nighat Dad


Reason

This specific statistic about Pakistan’s surveillance infrastructure provides concrete evidence of mass surveillance capabilities, moving the discussion from anecdotal cases to systematic, institutionalized surveillance.


Impact

This data point grounded the theoretical discussion in stark reality, prompting other speakers to acknowledge that the threat extends far beyond targeted spyware to encompass mass surveillance systems affecting millions.


I mean, we live in an era where states are just kind of driving trucks through any small space that they can carve out to do what they wanna do… when we have things like the European Union’s Media Freedom Act, which actually carves out a little bit of space for the use of spyware against journalism, that’s a huge problem for us.

Speaker

David Kaye


Reason

This comment critically examines how even well-intentioned regulations in the Global North can create dangerous precedents that authoritarian regimes exploit, highlighting the global interconnectedness of policy decisions.


Impact

This shifted the conversation toward examining the unintended consequences of Global North policies and the need for more restrictive rather than permissive approaches to surveillance regulation.


But we also need support in terms of building our capacity… transfer of that knowledge is happening at a very slow pace. So what we are doing is trying to build our own knowledge and capacity so that we, on the ground, can provide support to the victims and survivors as first responders.

Speaker

Nighat Dad


Reason

This highlights a critical gap in the global response to surveillance – the lack of technical capacity and knowledge transfer to Global South organizations who are often the first responders to surveillance victims.


Impact

This comment redirected the discussion toward practical capacity-building needs and the importance of supporting local organizations, influencing speakers to consider more concrete support mechanisms rather than just policy frameworks.


Overall assessment

These key comments fundamentally shaped the discussion by moving it beyond technical and legal frameworks to examine the deeper political, economic, and social dimensions of surveillance in the Global South. The conversation evolved from describing surveillance problems to analyzing their root causes in weak institutions, authoritarian legacies, and economic incentives. The speakers collectively built a narrative that surveillance is not just a privacy issue but a comprehensive threat to democratic systems, social trust, and human rights. The discussion also highlighted the inadequacy of Global North solutions when applied to Global South contexts, emphasizing the need for locally-informed approaches and genuine capacity building rather than top-down policy prescriptions.


Follow-up questions

Are the legal cases and petitions regarding Pegasus surveillance still pending in Indian courts, and is there any hope for progress?

Speaker

Nighat Dad


Explanation

This follow-up question seeks clarity on the current status of legal remedies and accountability mechanisms in India’s judicial system regarding spyware abuse.


Will the Venice Commission’s strict rules on spyware use actually be applied by states in practice?

Speaker

David Kaye


Explanation

This questions the gap between establishing international standards and their actual implementation by governments, which is crucial for effectiveness.


Will domestic spyware standards be used to apply global export controls?

Speaker

David Kaye


Explanation

This explores whether internal governance standards will translate into restrictions on exporting surveillance technology to other countries, particularly in the Global South.


How can the Pall Mall process become more meaningfully inclusive of Global South actors beyond current efforts?

Speaker

Nighat Dad


Explanation

This addresses the need for genuine participation from Global South stakeholders rather than tokenistic inclusion in international governance processes.


How can legal recourse be made accessible and attainable specifically for those targeted by surveillance technologies in the Global South?

Speaker

Rima Amin


Explanation

This highlights the need for practical remedies for surveillance victims who currently have limited access to justice mechanisms.


How can enhanced human rights due diligence guides for export controls be developed and implemented effectively?

Speaker

Jennifer Brody


Explanation

This focuses on creating practical tools that governments can use to assess human rights impacts before approving surveillance technology exports.


How can knowledge transfer between Global North and Global South organizations be accelerated to build local capacity for supporting surveillance victims?

Speaker

Nighat Dad


Explanation

This addresses the capacity gap where Global South organizations need technical expertise to serve as first responders for surveillance victims.


How can victim notification systems be universalized across platforms, especially for people in the Global South?

Speaker

Apar Gupta


Explanation

This seeks to ensure that surveillance victims worldwide receive timely warnings about attacks on their devices and accounts.


How can device testing methodology and capacity be expanded beyond the current few organizations that provide this service?

Speaker

Apar Gupta


Explanation

This addresses the limited availability of technical forensic services for surveillance victims who need evidence of attacks on their devices.


How can digital rights and telecommunications access be ensured equally across conflict zones and different geopolitical contexts?

Speaker

Mohamad Najem


Explanation

This raises questions about equitable access to communication tools and digital rights regardless of political circumstances or geographic location.


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.