WS #203 Protecting Children From Online Sexual Exploitation Including Livestreaming Spaces Technology Policy and Prevention

Session at a glance

Summary

This workshop focused on protecting children from online sexual exploitation and abuse (CSAM/CSEA), particularly in live streaming contexts, bringing together experts from civil society, academia, and the tech industry to discuss technology, policy, and prevention approaches. The discussion was structured around three main themes: integrating technology tools into trust and safety systems while protecting children’s rights, promoting cross-platform collaboration to prevent content spread, and strengthening policy frameworks to address emerging forms of abuse.

Speakers emphasized the importance of adopting a “safety by design” approach that incorporates protective technologies at both the front-end to prevent harm and back-end to detect violations. However, significant gaps remain in transparency reporting, with companies rarely providing detailed information about their moderation practices specific to live streaming. The discussion highlighted alarming trends, including that one-third of reported CSAM cases involve self-generated content, with 75% involving prepubescent children, suggesting widespread grooming and exploitation.

Cross-platform collaboration emerged as crucial since bad actors typically exploit multiple services across the tech ecosystem. The Tech Coalition’s Lantern project was presented as an example of successful signal-sharing between platforms, enabling companies to share information about violating accounts and activities while preserving privacy. Speakers stressed that technology solutions must be complemented by multi-stakeholder approaches involving human rights advocates, children themselves, parents, and law enforcement.

Policy recommendations included strengthening transparency requirements, improving age assurance mechanisms, addressing platform recidivism where banned users easily create new accounts, and enhancing complaint mechanisms designed from children’s perspectives. The discussion also emphasized the need for better law enforcement capacity and resources, as many countries lack adequate capabilities to investigate and prosecute these crimes. Participants called for equal focus on strengthening criminal justice frameworks alongside platform regulation, ensuring child-friendly court procedures, and involving financial institutions in detecting suspicious transactions related to commercial exploitation.

Key points

## Major Discussion Points:

– **Technology Solutions and Children’s Rights Integration**: The discussion explored how technologies to detect and prevent child sexual abuse material (CSAM) can be developed while protecting children’s rights, including safety-by-design approaches, AI-powered detection tools, and privacy-preserving methods like Apple’s on-device machine learning and metadata-based risk scoring systems.

– **Cross-Platform Collaboration and Information Sharing**: Speakers emphasized the need for coordinated efforts across platforms since perpetrators often exploit multiple services. The Tech Coalition’s “Lantern” project was highlighted as an example of secure signal-sharing between companies to identify and disrupt abuse networks more effectively.

– **Policy and Legal Framework Strengthening**: The conversation addressed gaps in national and international policies, including the need for better criminalization of emerging forms of abuse (like live streaming and self-generated content), improved law enforcement capacity, stronger transparency requirements for platforms, and enhanced complaint mechanisms accessible to children.

– **Multi-Stakeholder Approach and Youth Participation**: Multiple speakers stressed the importance of involving all stakeholders – including children themselves, civil society, academia, tech companies, and governments – in developing solutions, with particular emphasis on centering children’s voices in the design process.

– **Regional Challenges and Resource Constraints**: Participants from different regions (Africa, India, Brazil, Netherlands) highlighted varying challenges including digital literacy gaps, inadequate law enforcement resources, cultural contexts, and the growing scale of the problem versus declining resources to address it.

## Overall Purpose:

The discussion aimed to explore comprehensive approaches to protecting children from online sexual exploitation and abuse, particularly in live streaming contexts, by examining how technology, policy, and education can work together through multi-stakeholder collaboration to address this evolving global threat.

## Overall Tone:

The discussion maintained a serious, professional, and collaborative tone throughout, reflecting the gravity of the subject matter. While there was one moment of tension when an audience member criticized another participant’s comments about privacy and encryption, the moderators handled this diplomatically. The conversation was characterized by mutual respect among experts sharing knowledge, with speakers building upon each other’s points constructively. The tone remained solution-focused and forward-looking, emphasizing the urgent need for coordinated action while acknowledging the complexity of balancing child safety with privacy rights and other considerations.

Speakers

**Speakers from the provided list:**

– **Sabrina Vorbau** – Project manager at European SchoolNet, representing the InSafe network of European Safer Internet Centers, co-moderating the session

– **Deborah Vassallo** – Coordinator of the Safer Internet Center in Malta, supporting with online moderation

– **Robbert Hoving** – President of Off-Limits (the Safer Internet Center in the Netherlands), President of the InSafe board

– **Dhanaraj Thakur** – From the Center for Democracy and Technology, co-moderating the session

– **Lisa Robbins** – Online safety policy analyst at the OECD

– **Aidam Amenyah** – Executive director of Child Online Africa

– **Patricia Aurora** – CEO of Social Media Matters (working in India region)

– **Sean Litton** – President and chief executive officer at the Tech Coalition

– **Sabine Witting** – Assistant professor for Law and Digital Technologies at Leiden University, co-founder of TechLegality (consultancy firm specializing in human rights and tech)

– **Kate Ruane** – From the Center for Democracy and Technology

– **Jutta Croll** – From the Dynamic Coalition on Children’s Rights in the Digital Environment

– **Andrew Kempling** – Runs a tech and public policy consultancy, trustee of the Internet Watch Foundation

– **Sergio Tavares** – From SaferNet Brasil (safe internet center for Brazil)

– **Audience** – Various audience members who made interventions

**Additional speakers:**

– **Julien Ruffy** – Associate professor at the University of Paris 8, co-chair of the working group on internet governance

– **Shiva Bisasa** – From Trinidad and Tobago, speaking from experience with a victim of financial sextortion

– **Jameson Cruz** – Youth representative from Manaus, Brazil, part of the Brazilian Internet Young Governors Training Program since 2022

– **Cosima** – Works with the UK Government, has been with NAC for Internet Center for about six years working on digital literacy and policy

– **Raoul Plummer** – With the Electronic Frontier Finland

– **Unnamed speaker** – From the Finnish Green Party

Full session report

# Workshop Report: Protecting Children from Online Sexual Exploitation and Abuse in Live Streaming Contexts

## Executive Summary

This 90-minute workshop brought together international experts to address protecting children from online sexual exploitation and abuse (CSEA), with particular focus on live streaming contexts. The session was structured in three discussion rounds covering technology solutions, cross-platform collaboration, and policy frameworks. Participants included representatives from civil society organisations, academic institutions, technology companies, and government bodies, creating a multi-stakeholder dialogue on current challenges and potential solutions.

Key themes that emerged included the need for better age assurance mechanisms, the multi-platform nature of modern abuse, concerning trends in self-generated content, and significant gaps in law enforcement capacity globally. While participants agreed on the importance of multi-stakeholder collaboration and centering children’s voices, disagreements emerged around privacy-preserving approaches and policy priorities.

## Key Participants

The workshop was co-moderated by **Sabrina Vorbau** from European SchoolNet and **Dhanaraj Thakur** from the Center for Democracy and Technology, with **Deborah Vassallo** from Malta’s Safer Internet Centre providing online moderation support.

Participants included:

– **Robbert Hoving**, President of Off-Limits and the InSafe board

– **Aidam Amenyah** from Child Online Africa

– **Patricia Aurora** from Social Media Matters (India)

– **Sergio Tavares** from SaferNet Brasil

– **Sean Litton** from the Tech Coalition

– **Sabine Witting** from Leiden University

– **Lisa Robbins** from the OECD

– **Kate Ruane** from the Center for Democracy and Technology

– **Jutta Croll** from the Dynamic Coalition on Children’s Rights in the Digital Environment

– **Jameson Cruz**, youth representative from Brazil’s Internet Young Governors Training Programme

– **Andrew Kempling** from the Internet Watch Foundation

– **Shiva Bisasa** from Trinidad and Tobago

## Rights-Based Framework for Child Protection

**Lisa Robbins** established an important foundational perspective, arguing that “protecting children from CSEA should not just be done in a way that protects and promotes their rights. I think it’s really important to acknowledge that protecting children from CSEA is protecting and promoting their rights, most obviously freedom from violence, but also acknowledging that this CSEA can infringe upon dignity rights, privacy, and a safe online space is really important for enabling children to access a large number of rights in today’s reality, such as opinion, assembly, information, or education and health.”

This framing positioned child protection as inherently rights-affirming rather than creating a safety-versus-rights dichotomy.

## Technology Solutions and Current Gaps

### Age Assurance Challenges

**Lisa Robbins** revealed a significant gap in current practices: “Only two of 50 services systematically assure age on account creation.” This represents a fundamental vulnerability in child protection systems across platforms.

**Andrew Kempling** emphasized that “age verification is important to prevent adults from accessing child accounts and content,” while **Jutta Croll** noted that privacy-preserving age verification mechanisms exist without being intrusive.

### AI and Detection Technologies

**Aidam Amenyah** stressed that “AI-powered platforms should be sensitive enough to detect, prevent, and report live streaming situations affecting children,” noting that children in Africa encounter concerning content virtually every day.

**Sean Litton** described privacy-preserving detection methods, including how “session metadata and third-party signals can generate risk scores for broadcasts without analyzing actual content” and how “on-device machine learning can detect nudity while preserving privacy through local processing.”
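
As a purely illustrative sketch of the metadata-based approach described above: the feature names, weights, and threshold below are hypothetical inventions for this example, not any platform’s actual model. The point is only that a live broadcast can be triaged from session metadata and external signals, without the stream’s content ever being analyzed.

```python
# Illustrative only: scoring a live broadcast's risk from session metadata
# and third-party signals, without inspecting the stream itself.
# All feature names and weights here are hypothetical.

from dataclasses import dataclass

@dataclass
class BroadcastSession:
    account_age_days: int     # newly created accounts tend to be higher risk
    viewer_join_rate: float   # sudden influx of viewers per minute
    flagged_by_partner: bool  # external signal, e.g. shared via an industry program
    prior_reports: int        # prior user reports against this account

def risk_score(s: BroadcastSession) -> float:
    """Combine metadata signals into a 0..1 score; no content is analyzed."""
    score = 0.0
    if s.account_age_days < 7:
        score += 0.3
    if s.viewer_join_rate > 50.0:
        score += 0.2
    if s.flagged_by_partner:
        score += 0.4
    score += min(s.prior_reports * 0.05, 0.1)  # cap the report contribution
    return min(score, 1.0)

# A high-scoring session would be routed to human review, not auto-enforced.
session = BroadcastSession(account_age_days=2, viewer_join_rate=80.0,
                           flagged_by_partner=True, prior_reports=1)
print(risk_score(session))
```

In practice such scores would only prioritize streams for human review; automated enforcement on metadata alone would raise the false-positive concerns discussed elsewhere in the session.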

## Cross-Platform Nature of Abuse

**Sean Litton** provided crucial insight into modern exploitation patterns: “The bad actor might contact a child on a gaming platform, move them to a private messaging platform, and then perhaps use a live streaming platform down the road. So the abuse spans social media, gaming, live streaming, payment apps, and more. But the individual company is obviously unaware of what happened on the other platforms.”

This revelation highlighted why isolated company responses are insufficient and coordinated industry collaboration is essential.

### Project Lantern

**Sean Litton** presented the Tech Coalition’s Lantern project as an example of cross-platform collaboration, enabling secure signal sharing between companies about policy-violating accounts while preserving privacy. The project is piloting with payment providers to share signals on financial transactions, recognizing that financial extortion is a major component of crimes against children online.
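
A minimal sketch of how cross-platform signal sharing can avoid exchanging raw personal data: platforms share a keyed hash of a violating account’s identifier, so a partner can check for matches without learning unrelated identifiers. The scheme, key, and identifiers below are entirely hypothetical and are not Lantern’s actual design.

```python
# Hypothetical sketch of privacy-preserving signal sharing between platforms.
# Raw identifiers are never exchanged; only keyed hashes are.
import hashlib
import hmac

# Key agreed between participating platforms (hypothetical value).
SHARED_KEY = b"example-shared-key"

def signal_for(account_id: str) -> str:
    """Derive a keyed hash so the raw identifier is not shared directly."""
    return hmac.new(SHARED_KEY, account_id.encode(), hashlib.sha256).hexdigest()

# Platform A contributes a signal for an account that violated its policies.
shared_signals = {signal_for("violating-account@example.com")}

# Platform B checks its own accounts against the shared set.
print(signal_for("violating-account@example.com") in shared_signals)  # True
print(signal_for("unrelated-account@example.com") in shared_signals)  # False
```

Even a simple scheme like this illustrates the trade-off the speakers raised: matches are actionable across platforms, yet no participant gains a browsable list of users, which is why oversight of what counts as a shareable “signal” remains critical.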

However, **Sabine Witting** raised concerns that “Global South representation is often underrepresented in industry standard-making processes,” while **Kate Ruane** emphasized that cross-platform efforts “create significant risks for human rights and need multi-stakeholder engagement for transparency.”

## Alarming Statistics on Self-Generated Content

**Robbert Hoving** presented concerning data: “Of all the reports coming in, one third of those reports is self-generated, meaning young children making sexualized images themselves… And of those one third of reports, 75% is prepubescent, so implying very young children. This is not children going to middle school in their teenagers. This is really young children.”

**Sabine Witting** argued that self-generated content requires nuanced approaches considering voluntary versus coercive production, challenging simple criminalization approaches.

## Global Scale and Regional Perspectives

### Staggering Global Statistics

**Andrew Kempling** provided sobering context: “roughly 300 million victims of child sexual abuse and exploitation every year globally. That’s about 14% of the world’s children each year.”

### Regional Challenges

**Patricia Aurora** shared specific data from India: “9 million children out of the 36.2 million population of India have been targeted.” She noted that India lacks robust legal frameworks for child protection online and highlighted that live streaming platforms are being used for monetizing pre-recorded child abuse content.

**Aidam Amenyah** emphasized that Africa faces uneven digital literacy alongside rapid technology growth, and that laws are not effectively enforced while internet service providers lack accountability.

**Sergio Tavares** raised sustainability concerns, noting that Brazil faces growing numbers of reports while resources are declining globally.

## Law Enforcement and Criminal Justice Gaps

**Sabine Witting** made a critical observation about policy priorities: “I would really like to see the same effort from governments that they at the moment put into platform regulation, they should put into law enforcement and strengthening the criminal justice framework. Because there is a bit of a, I feel like an over-focus at the moment on the responsibility of platforms, while we know that if these cases then really reach the court system, most of them either A, fall through the cracks, or B, the children that are forced to go through the court system leave extremely traumatized.”

She also noted that criminal justice systems lack protective measures and trauma-informed approaches for child victims.

## Privacy and Encryption Debate

**Kate Ruane** defended end-to-end encryption as essential protection, arguing that “privacy and safety should be viewed as complementary rather than opposing values” and that “content-oblivious methods are more effective than content-aware methods for most harmful content.”

One participant argued that privacy was being “weaponized” as an excuse not to stop CSAM sharing on encrypted platforms. **Raoul Plummer** stated his disagreement with suggestions that privacy advocates were somehow complicit in child abuse.

## Personal Impact and Human Cost

**Shiva Bisasa** provided powerful personal testimony about the human impact: “I know of a victim of financial sextortion who took his life because of this.” This reminder of the real-world consequences underscored the urgency of the issues being discussed.

## Children’s Participation

Multiple speakers emphasized involving children in solution design. **Aidam Amenyah** noted that “children want involvement in designing protection solutions to ensure they are fit for purpose,” while the inclusion of **Jameson Cruz** as a youth representative demonstrated practical implementation of this principle.

## Financial Aspects

**Sean Litton** highlighted that “financial extortion is a major component of crimes against children online,” while **Sabine Witting** noted that “following the money is one of the most important leads for law enforcement in organized crime.”

## Transparency and Accountability

**Lisa Robbins** noted that “companies rarely provide detailed information about their moderation practices specific to live streaming,” making it difficult for policymakers to understand where targeted action is needed. She emphasized that “better and more granular transparency and information is really key for policymakers to be able to react and understand where targeted policy action and deployment of safeguarding technology is needed.”

## Areas of Agreement and Disagreement

Participants generally agreed on the need for multi-stakeholder collaboration, better law enforcement capacity, improved platform transparency, and centering children’s voices in solution design.

However, significant disagreements emerged around privacy-preserving approaches versus content scanning, whether policy focus should prioritize platform regulation or criminal justice strengthening, and how to implement cross-platform collaboration with appropriate oversight.

## Commitments and Next Steps

Several concrete commitments emerged from the discussion:

– The Tech Coalition committed to publishing results of their financial payment provider pilot

– The OECD agreed to share Financial Action Task Force research on disrupting financial flows related to live stream sexual abuse

– The Internet Watch Foundation offered to collaborate with tech companies to validate tool effectiveness

– Participants agreed to continue conversations at the IGF village

## Conclusion

The workshop revealed both the complexity of protecting children from online sexual exploitation and the potential for coordinated action across sectors. The alarming statistics about self-generated content involving very young children, the multi-platform nature of modern abuse, and significant gaps in law enforcement capacity underscore the urgency of coordinated action.

While disagreements remain about implementation approaches and policy priorities, the consistent emphasis on centering children’s voices and the recognition that no single entity can solve these problems alone provides a foundation for continued collaboration. The path forward requires sustained commitment from all stakeholders, adequate resource allocation, and continued innovation in developing solutions that protect children while preserving their rights and digital participation.

Session transcript

Sabrina Vorbau: Wow, good morning, everyone. It’s nice to see you all, and welcome to the workshop on protecting children from online sexual exploitation, including live streaming spaces, technology, policy, and prevention. My name is Sabrina Vorbau, I’m a project manager at European SchoolNet and representing here the InSafe network of European Safer Internet Centers. I will be co-moderating the session together with my colleague, Dhanaraj Thakur, from the Center for Democracy and Technology, and we are also joined by our colleague, Deborah Vassallo, who is the coordinator of the Safer Internet Center in Malta, and she will be supporting us with the online moderation. Welcome everyone here in the room and also online for joining our session and for your participation. The session aims at exploring how technology, policy, and education can work together to tackle the evolving threat of online child sexual exploitation and abuse, also known as CSAM or CSEA, particularly in the live streaming context, but not exclusively. We are really fortunate to be joined by an incredible group of experts from civil society, academia, and the tech industry who will share their insights from different regions and perspectives, truly in the multi-stakeholder approach of the IGF. We have structured our session, which will run for 90 minutes, in three parts where we will tackle different angles of the subject. Before we dive into the first round, let me briefly introduce our speakers. Here in the room with us we have next to me Robbert Hoving, who is president of Off-Limits, the Safer Internet Center in the Netherlands, and he’s also president of the InSafe board, and we have Kate Ruane with us. She is also from the Center for Democracy and Technology. Virtually and online, joining us from various parts of the world, we have Sean Litton, who is president and chief executive officer at the Tech Coalition. We have Dr. Sabine K. Witting, professor at Leiden University. 
Patricia Aurora, CEO of Social Media Matters, Aidam Amenyah, executive director of Child Online Africa, and Lisa Robbins, online safety policy analyst at the OECD. So lots of expertise on our side, and I see also loads of expertise in the room. So let’s begin by exploring how technology tools are currently being developed and implemented to detect and to prevent CSAM in live streaming spaces, but not exclusively, and how we can ensure that these tools align with children’s rights and safety. So we will turn a first question to our speakers, and after each round we will take a pause and invite you to share your interventions and your questions with us, and if you would like to do so, please just line up here in front of the microphone. So our first question: how can technologies to prevent CSAM be integrated into trust and safety systems in a way that protects and promotes children’s rights? And we will start with our first speaker joining us online. And I give the floor over to Lisa, whom we can see on the screen. Hi, Lisa. Welcome. Thanks for joining us. Hi, thanks so much. And hopefully you can hear me well. Wonderful. Yes. Great. So thank you so much.

Lisa Robbins: It’s really great to be here on this really rich panel for this super important conversation. So just for a tiny little bit of context, for those of you who don’t know the OECD, we’re a multilateral organization. We have 38 member countries, and we work multilaterally on a consensus basis to develop research and evidence-based policy recommendations on lots of issues, including digital policy and, within that, digital safety and specifically work on children in the digital environment. My first remark in response to the question is that I think it’s really important to acknowledge up front that protecting children from CSEA should not just be done in a way that protects and promotes their rights. I think it’s really important to acknowledge that protecting children from CSEA is protecting and promoting their rights, most obviously freedom from violence, but also acknowledging that CSEA can infringe upon dignity rights, privacy, and a safe online space is really important for enabling children to access a large number of rights in today’s reality, such as opinion, assembly, information, or education and health. So while it is really important to recognize where there are tensions between rights in our solutions and in policy solutions, which is the angle that I come from, it’s really also important to recognize that there is a direct child rights implication on a number of layers of CSEA itself. Secondly, just to break down the question of what it is that we mean by technologies. In the question, are we referring to companies themselves that provide live streaming, or to safety technologies, and how safety technologies could be integrated into those services to provide safeguards for children? Or are we talking about a combination of both? So in the interest of time, I’ll tackle the question really from the perspective of both. 
And I’m gonna highlight the importance of taking a safety by design approach. And I know that that’s not a technology in itself, but there are technologies that underpin safety by design. Here at the OECD, we’ve done a bit of work to really understand what this concept means, and last year we published a report that posits eight key components for digital safety by design. These roughly fall into three buckets. I’m not gonna talk about one of them, which is more about corporate social responsibility and an environment of safety. But the other two buckets, putting in technologies and tools at the front end to prevent harm from occurring, and putting in tools and technologies at the back end to detect harm should it occur, really do require incorporating certain technologies into service design and delivery. Two, we have also done some research here which can shed some light on how companies are actually doing in this space in relation to incorporating technologies to meet those two aspects, at the front end and also as a safety net. Firstly, we’ve done a series of reports looking at company practices for transparency reporting and other public-facing policies and governing documents relevant to child sexual exploitation and abuse. Now we look at a wide number of things such as how CSEA is defined on a platform, how they enforce their
I did a quick scan of the some 80 services we looked at relevant to live streaming. And now only two of those, taking aside those services who sold businesses live streaming, only two of those provide information on their moderation practices specific to live streaming. And again, only a few provide metrics that break down their live streaming incidences. And so really for policymakers to be able to react and understand where targeted policy action and deployment of safeguarding technology is needed, better and more granular transparency and information is really key. Now, I realize I’m about to run out of time, but we’ve also done a similar exercise relevant to age-related policies. This paper is out today. And just to say really briefly, it’s clear from the research we’ve done that age assurance, and I know this is a hot topic on a lot of people’s mind, but we’re not quite there yet in achieving age assurance. We identify this as a key component of safety by design because companies need to know who their users are to put in place child protective safeguards. But of the 50 services we looked at here, only two systematically assure age on account creation. And so there still is gaps to be done in relation to age assurance. And I’ll leave it there. Thank you so much, Lisa, for kicking us off and sharing already with us the incredible work the OSCD is doing at global level, mentioning a lot of publications, and we will also make sure to include this in our session report. And you mentioned very interesting areas. And I think that’s a very good kick-off for this first round. We will now turn to Avo, who is from Child Online Africa, also a very big advocate on children’s rights, and we will hear from him a little bit more about what he thinks about protection of children’s rights. So, Avo, thank you very much for being here. Thank you very much for having me.

Sabrina Vorbau: We will now turn to Avo, who is from Child Online Africa and also a very big advocate for children’s rights, and we will hear a little bit more from Avo about what is happening in the space, particularly in the Africa region. I hope, Avo, you can hear us. And the floor would be yours. Yes, I can hear you.

Aidam Amenyah: Good morning. Hello, everyone. Pleasure connecting with you from Accra, Ghana. I’m going to build on the background Lisa gave, because we don’t want to burn time; we want to build on what each other is presenting. So as much as possible, as you are all aware, Africa has a unique situation where we have an uneven situation of digital literacy, and the rapid growth of technology is helping to some extent, but it’s also exposing a number of incidents. And at Child Online Africa, our focus is to give children what they want and what they need in order to be online. So by so doing, we interact with these young people a lot to get things done. And one of the things they make us understand is that virtually almost every day, there’s something that they encounter in the space. So we’re asking, okay, so what can be done in order to safeguard your interests and your engagement in the space? And they made us understand that it’s important that, now that AI is there, the platforms are made in such a way that they are sensitive enough to detect, prevent, and also report live streaming situations that affect children. And the moment they mentioned AI, you ask them, okay, so what can you do yourself? And so, okay, we can try to code something, but we haven’t gotten there yet. But then we are of the view that, as a growing environment where young people are largely involved in the space, it will be important for us to look at protocols that allow laws to be enforced effectively, because one of the things they came up with is the fact that the laws are not biting and it looks like they are not working effectively as far as our country and our region are concerned. So they’re looking at the involvement of children in local communities to design solutions which affect them so that these will be foolproof. It wouldn’t be like adults sitting to come up with designs that do not really impact on children. 
And they also recommended that there should be human oversight of AI moderation to prevent false positives where possible. And again, there’s also the need, they said, to put a lot of responsibility on internet service providers, because they should be able to block some of the live streams of children, but they are not doing that enough, all because there are no laws holding them accountable. So they feel that if the service providers were made a lot more responsible, the system would be good and conducive enough for them. But their involvement is very key in ensuring that the design of the protection solution is fit for purpose and is intuitive for them to interact with. So by extension, they felt that it’s important for platforms to be faster in identifying incidents of abuse, rescuing young people if they fall victim, and prosecuting perpetrators, and also that designers should keep at the back of their minds that children are using the platform. Even though they don’t give money and they don’t buy anything, they are also consumers, and their interests should be taken on board and taken seriously. That’s what I can say for now, thank you.

Sabrina Vorbau: Thank you so much, Avo, and also for highlighting already youth participation in the whole process, so really making young people part of the development process, but also of policymaking and design, and definitely a call to action for more responsibility for the platforms. Thank you for now. We will now go to our next speaker, Patricia Aurora. We talked about platforms already. Patricia works for Social Media Matters, specifically in the India region. I hope, Patricia, you can hear us, and the floor is yours now. Thank you for joining us.

Patricia Aurora: Thank you, Sabrina, thank you for the opportunity. I'm so happy to be here, joining virtually, to give the context of our child safety work in India, what we have been doing, and how India is shaping its laws around child protection. Just a brief introduction for the audience: I'm Patricia Aurora, and I currently lead Social Media Matters, an organization working at the intersection of digital safety, child rights, and online harm prevention in India. I've been engaged in shaping conversations around child protection, digital literacy, and trust and safety frameworks. Hearing Lisa and Aidam set the context so far, I think this adds value when we talk about the initiatives that we have been running in India for over a decade now. This has given us a clear understanding that children need their voices to be heard by many. Right now in India, what we come across is that many children who face any form of abuse or harm do not know how to report the situation, even for a form of online harm like cyberbullying. This is one area that is talked about across India: cyberbullying among minors is a major issue here. And when we talk about technology and policies, I think the tech platforms have a major role. Why? Because we are giving children the liberty to access the internet, which is in a sense a right for them in the digital space. But sadly, what we are seeing is that tech companies are failing us, in that we do not have standard mechanisms to protect children. And these standard mechanisms are not well defined because we do not have a robust legal framework.
We are currently borrowing sections from various laws to protect children online, something that was not even contemplated until we realized how children are being targeted in online spaces. Coming to grooming, it has taken the lives of many young children because of these harms in the online space. Now, on live streaming: live streaming is one space where pre-recorded videos are being monetized for perpetrators' own benefit, and this is happening cross-platform as well. The use of child sexual abuse content is at its peak right now. Quoting some statistics from the reports released by NCMEC, the CyberTipline report indicated that around 9 million of the 36.2 million reports related to India. This gives us a very big number to think about, especially when we consider the cultural aspect: how we look at this content, from the live streaming perspective as well, and how we are able to add a lens of safety for children in the online space. Are we giving them adequate mechanisms so that they can freely access the internet, or is that freedom taking a bad shape for them? These are some points I want us to think about. Do we need to redefine these policies? Do we need to rethink them, reimagine what is happening, reframe how we are working? Do we need more collaborative efforts, where civil society organizations, tech companies, and regulators can come together and create a full package centred on child safety? At this juncture, when we are talking about legal frameworks, many of you must have heard about the Digital Personal Data Protection Act, which speaks very specifically about child safety in India.
There is also a parallel legal framework, the Digital India Act, which was meant to take forward the Information Technology Act and under which India is trying to define all sorts of online harms and protect digital users. We are hopeful that these two important acts will reshape the whole context of child safety in India, including live streaming spaces, and protect children from the situations they are encountering in online spaces at the given time.

Sabrina Vorbau: Thank you very much, Patricia, for your intervention, for bringing us a bit closer to the national context and the current situation in India, and for the very shocking numbers you have shared with us. We will come back to this in our second round, where we will talk a bit more about cross-platform and multi-stakeholder collaboration. Reporting is of course crucial: we need to make it more accessible to children and young people, and to the public in general, to build more awareness but also more confidence in young people to take the step of reporting. Before we come to our last speaker for this first round, a reminder to everyone: if you would like to make an intervention or ask a question here in the room, please line up at the microphone; online there is also the possibility to intervene by raising your hand. We will now conclude this first round by giving the floor to Sean Litton, President of the Tech Coalition. Sean, I hope you're with us, and the floor is yours now.

Sean Litton: By way of introduction, the Tech Coalition is a global association of leading tech companies. We have social media, search engines, and many live streaming platforms: for some, that's their primary business, and many other platforms have a live streaming component to their service. We are 100% focused on preventing and disrupting child sexual exploitation and abuse and on building our members' capacity to do this. As we heard from the other speakers, there are alarming trends with respect to online child sexual exploitation and abuse in a live streaming context. It is perhaps the number one way that children are now exploited commercially, in terms of what we would call sex trafficking. These trends underscore the need to develop and deploy technologies that prevent child sexual exploitation and abuse on live streaming platforms while also ensuring children's safety, well-being, and privacy. The Tech Coalition and our members are prioritizing ways to do this, and I'd like to talk about two examples today. First, we are working with one of our members, a major live streaming platform, to develop a tool to detect child sexual exploitation and abuse in a live streaming context. It uses session metadata and third-party signals to generate a risk score for a particular broadcast. Because this tool operates without analyzing the actual content of the live stream, privacy standards are preserved. Based on that score, the child safety team can take a closer look and decide whether to shut down or otherwise intervene in that broadcast. In practice, this approach relies on participant characteristics, like country of origin, the use of anonymization services, et cetera. By combining these metadata signals, the system indicates the likelihood of child sexual exploitation and abuse activity occurring within a given live stream session, for further investigation.
Now, development of this tool began in the fall of 2024, and this summer our members will test and formally evaluate the feasibility of this approach for broader industry adoption. Our goal is to advance detection methods that use behavioral signals and metadata rather than relying solely on content scanning, and so preserve the privacy of the conversation. Another example is Apple's Communication Safety feature; Apple is a member of the Tech Coalition. This uses on-device machine learning to detect nudity in photos and videos, and because the analysis happens on the device, neither Apple nor any other third party observes the content or is aware that nudity was shared. When nudity is detected, the image is blurred, and the child sees age-appropriate safety information and help resources. Recently, Apple announced the expansion of this feature to FaceTime calls. It is also available for free, via API, to all developers who build apps for iOS. This enables any app to check for nudity in a video stream, detecting the sensitive content either from the device's camera or from remote devices signed into the conference call. These are just two examples of how child sexual exploitation and abuse detection can combine children's safety and privacy to reduce abuse in live streaming environments.

Sabrina Vorbau: Thank you very much, Sean, again for giving us the perspective on the work that is going on among tech companies, which is certainly a lot. We will come back to Sean in the next round, where we will talk a little more about multi-stakeholder collaboration; Sean also already mentioned the promotion of well-being for children and young people when using technology. Before we get there, I see we have a few people lining up. Before you intervene or ask your question, please quickly introduce yourself.

Audience: Please go ahead. Hello, good morning. I'm Julien Ruffy, an associate professor at the University of Paris 8. I co-chair the working group on internet governance, and I've taken part in a few research projects on online youth safety. One of the things that comes up very frequently when we do focus groups in middle and high schools is that a lot of the time young people feel they can be compelled by adults and institutions to use digital tools. I'm not talking about live streaming specifically, but simply having to use a mobile phone and being online a lot, even when they might sometimes wish to disconnect. And when they are faced with this kind of content, or with cyberbullying problems, or any of the things you have talked about, the main question that comes up is: where do I find a safe space where adults can hear my problems and act on them? A lot of the proposals from the tech company side amount to saying, we are going to develop a technology that detects nudity and the like without infringing on privacy, which is something I would sometimes like to question. But what actually happens when some content is flagged, or something really bad happens, and the police, for example, don't act on it? So my question to you is: in your work advocating for children's rights in the online sphere, how much effort are you targeting towards law enforcement agencies and educational institutions, so that when something happens, they act on it? You can flag content as much as you like, and platforms can act as much as they'd like; if at some point there is no law enforcement follow-up, problems are not going to be solved. Thank you.

Sabrina Vorbau: Thank you so much for your question. We haven't heard from Robert yet, but maybe this could already be a question for you to briefly answer, and I know that later on you will give a bit more context on the work that Off Limits is doing.

Robbert Hoving: Yes, thank you very much. Good morning, everyone, and thank you for that question. I would like to bring in the perspective of the safer internet centres, the system we have in Europe, because that actually is more of a safe space, both for parents and caretakers and for people who are victims, to find help. A lot of the time that help can be simply having someone who listens to them. The centres can also take the content down, and if victims then want to take more action, for instance going to the police, they can help with that as well, and they can also refer them to the right helping parties. For me it would be really crucial to strengthen such a network, because these are spaces where people can go. A third aspect of their work is awareness raising: helping schools and giving schools material to ensure that it doesn't happen in the first place. You will always have people with bad intent, so you cannot stop everything, but I think we could step up much more on prevention, looking at the data that is actually coming in at the safer internet centres.

Sabrina Vorbau: And I think we will hear a little bit more about this later on in the session. Maybe for this round one more question or intervention. Jutta, please.

Jutta Croll: Thank you for giving me the floor, Sabrina. I am Jutta Croll from the Dynamic Coalition on Children's Rights in the Digital Environment. Since age assurance was already mentioned as a tool to protect children from online sexual exploitation, I wanted to intervene and turn your attention to the Global Age Assurance Standards Summit communiqué on age assurance, which takes a child rights-based approach to these issues. With age assurance instruments it is possible to make sure that adults cannot join spaces that are made for children, and the Global Age Assurance Standards Summit came to the conclusion that this can be done in a data-minimizing and privacy-preserving way, without being intrusive and gathering data from children, while providing a safe space for children to be able to join the community. That is what we are doing, and we are also working on the ISO standard 27566. So if anybody is interested in how that can be done, you will find the communiqué: we have brought some copies, so just come to the Dynamic Coalitions booth in the IGF village and we can talk about it. Thank you.

Sabrina Vorbau: Thank you very much, Jutta. I think we have covered a lot of topics, and thank you for the great work you and colleagues are doing on children's rights. We will move to the next round, but please stay with us; we will bring you in for interventions later. I will hand over now to Dhanaraj to guide us through the second round, looking more into cross-platform and multi-stakeholder collaboration. Dhanaraj, over to you.

Dhanaraj Thakur: Thank you, Sabrina, and thank you to all of you for being here. This second round focuses on the cross-platform spread of content related to child sexual exploitation and abuse. This is a significant problem and part of the reality of how this kind of exploitation and abuse occurs. So the question we want to raise with our experts, before we move into Q&A at the end of the round, is: how can we better promote cross-platform efforts, including on live streaming platforms, to prevent the spread of this kind of content? To do that, I'm going to start with you, Robert, to hear your thoughts first on this issue.

Robbert Hoving: All right, thank you very much. We are the Safer Internet Centre in the Netherlands, Off Limits. We have the hotline where child sexual abuse material can be reported, the helpline for other transgressive behavior, where caretakers can also call for help, and an initiative called Stop It Now, which is a prevention line for people actually watching this material. To bring that perspective in, starting with the perpetrator side: we did research and discovered that of the people calling Stop It Now, more than half are males under 26, and they watch this type of material as escalation behavior. For instance, they went online at too young an age, and we know how easy it is to get access to an adult website; we also know that looking at heavy material at a young age desensitizes and can lead people to look for ever heavier material. Abuse material can be that heavier material. When we look at the victims, for instance at the numbers of the hotline at Off Limits: of all the reports coming in, one third is self-generated, meaning young children making sexualized images themselves. They might be alone, they might be with other children, they might be with attributes, but they make the images themselves, through webcams, phones, et cetera. And of that one third of reports, 75% is prepubescent, implying very young children. These are not children going to middle school or in their teenage years; these are really young children. And then we look at platforms, and there are different types: live streaming platforms and also social media. There are a lot of good aspects to social media, and some social media are really geared towards sharing content, but there is also social media designed to meet and connect with people.

Now, going back to that 75% prepubescent: we know, and we are going to investigate this now together with the Ministry of Justice, that it might be grooming in online spaces, for instance. But it might also be risk behavior, looking for attention, because it stems from previous abuse, or because of things they have seen online at too young an age that they are enacting. Now, when you have these three ingredients: Ms. Croll already mentioned age verification, and I think it is very important, in certain online spaces, to have age verification to ensure that older people who want to pose as younger people, and young people who want to pose as older people, cannot connect. And to chime in and echo what Janice said at the session about deepfakes: cross-platform efforts should connect with education. I think companies could really step up to collaborate more with schools, giving them material and also helping build the curriculum, instead of just pushing their tools towards schools. Because we always say education, but when we put everything on schools, there should also be the capacity at schools to be able to do it. Combining that with the tech sector, as Janice mentioned, could actually be a very good idea. So that would be my reaction to your question.

Dhanaraj Thakur: Great, thank you for those points, particularly the statistics you mentioned around prepubescent self-generated content. Sean, I want to turn to you now, because given your perspective with the coalition, and therefore the opportunity to work across platforms and engage in cross-platform efforts, I'd be curious to hear your thoughts on cross-platform solutions.

Sean Litton: Yeah, thank you for the question. As the other speakers have noted, bad actors typically exploit multiple services across the tech ecosystem in their attempts to groom children, distribute CSAM, or engage in other harmful activities like financial extortion. For example, a bad actor might contact a child on a gaming platform, move them to a private messaging platform, and then perhaps use a live streaming platform down the road. The abuse spans social media, gaming, live streaming, payment apps, and more, but the individual company is obviously unaware of what happened on the other platforms. They don't have all the information, and without the complete picture it is difficult to adequately grasp what's going on and take action. That's why industry collaboration is essential. At the Tech Coalition, we recently launched a program called Lantern, the first cross-platform signal sharing program that helps companies strengthen enforcement of their child safety policies. We launched Lantern so companies could securely share signals with one another about accounts and activity that violate their own child sexual exploitation and abuse policies. Until Lantern, there was no consistent way for companies to share this information in a secure and privacy-preserving way; Lantern helps fill that gap by revealing a fuller picture of the harm. Working with Lantern, companies can increase their prevention and detection capabilities, speed up threat identification, build awareness of emerging threats and bad actor tactics, and strengthen their reporting out to hotlines and other authorities. And we know this approach works: last year, members shared hundreds of thousands of signals through Lantern.
This led to account actions, content removal, and the disruption of offender networks and CSAM circulation. Signals helped flag contact and trafficking cases that may not have been identified otherwise, which is really important. And crucially, these outcomes come in addition to the original action taken by the company that first detected the abuse, showing how Lantern enables a ripple effect of protection across the ecosystem. Together, initiatives like Lantern are helping close detection gaps and enabling faster action, proving that collaboration really does make a difference. It's not just possible; it's powerful.

Dhanaraj Thakur: Great, thank you, Sean. Very interesting to hear about Project Lantern, and I'm sure that might come up again in the Q&A. Next, I want to turn to one of our online speakers, Sabine Witting, based at Leiden University. Sabine, hopefully you can hear us. Since you're joining us now and speaking for the first time, if you want to say a bit more about yourself, please go ahead.

Sabine Witting: Yes. My name is Sabine Witting. I’m an assistant professor for Law and Digital Technologies at Leiden University, but I’m also the co-founder of TechLegality, which is a consultancy firm specializing in human rights and tech, especially on a lot of these hot potato topics that already came up today, such as age assurance. We do a lot of work on these topics across the world. So thanks so much for the question around cross-platform collaboration. And I think this kind of collaboration is essential because platforms all deal with the same human rights and children’s rights issues, especially issues around competing rights. And a lot of this really is essentially trying to square the circle, and this kind of collaboration can certainly assist with that. I think also that cross-platform collaboration alone, of course, is not enough. As important as it is that industry has their own space, for example, for the tech coalition to really collaborate, I think a multi-stakeholder effort is always crucial to have all people affected by technologies at the table, human rights advocates, child rights advocates, academia, but also parents and children themselves. And I also want to use an example of one of the technologies that’s often put forward as a solution to child sexual abuse in the digital space, but also especially for live streaming, which is age assurance, which has already come up a few times. And I think when we approach age assurance, for example, from a multi-stakeholder collaboration, one of the key gaps that’s always, always criticized about age assurance is the lack of clear evaluation criteria and how we can assess, for example, the effectiveness and the robustness of these technologies, and also to really understand better who is adversely impacted by these technologies. And at the moment, there are some standards, there’s some industry standards, which have played a very important role for quite a long time. 
But the problem is that a lot of the industry standards are not accessible and are not drafted in a multi-stakeholder way. They are drafted by industry, often by age assurance providers themselves. That, of course, begs the question: might these standards be biased to a certain extent? Have human rights concerns from across the world really been taken into consideration? And here I'm not only talking about the Global North, but especially the Global South. Representatives of the Global South are usually underrepresented in these industry standard-making processes, which is a huge problem because of the important role that industry standards play here. I want to point to a good practice example that I'm lucky enough to be part of, which is the current drafting of the IEEE standard on the prevention of CSEA in generative AI. This group really brings together all the actors that we need around the table: we have human rights advocates, we have industry, we have tech experts together with academia. It is really enriching to see how an industry standard can be developed with all of these different stakeholders at the table, and we also have strong representation from the Global South. And of course, as we've heard from Aidam and from Patricia, generative AI is a different story in India and in African countries than it is in the Global North. So it's really important to have not only cross-platform collaboration, but multi-stakeholder and regional representation, especially of vulnerable groups.

Dhanaraj Thakur: Great, thank you so much, Sabine, and thank you for raising this point about the relevance of standard setting, particularly around age assurance technology, and the utility of multi-stakeholder approaches there. I now want to turn to Kate Ruane, my colleague at the Center for Democracy and Technology. Kate, we heard from Sean about industry efforts around cross-platform collaboration, but we also heard from both Robert and Sabine about the importance of multi-stakeholder involvement in these efforts. I'm curious about your thoughts on how we can better address the cross-platform proliferation of this kind of content.

Kate Ruane: Thanks, Dhanaraj. I echo a lot of what's already been said, and I specifically want to pick up on the point that cross-platform efforts need multi-stakeholder engagement in order to work best, in part because cross-platform efforts are going to create even more significant risks for human rights like free expression and privacy. And that is actually a necessary thing: child sexual abuse and exploitation is such a large and clear harm, and it is a crime around the world for a reason. That means that efforts to restrict it are necessarily proportionate to that harm, and so, when mistakes get made, we can see really significant impacts on the lives of innocent people. To ensure that our responses to the crime of CSEA are proportionate, multi-stakeholder engagement can help ensure that harms do not extend beyond the criminal activity. Multi-stakeholder engagement can also be really helpful for ensuring that there are things like transparency and appeals processes. The Tech Coalition, for example, has done human rights impact assessments on its efforts to combat CSEA. That's really positive, and it would be helpful to have more spaces in which children themselves, survivors, civil society, technologists, especially privacy experts, and others can engage with the ways that platforms are developing their information sharing efforts, to ensure that human rights are respected throughout their development and execution. And I specifically want to call out a couple of things. The Tech Coalition has talked about its development of a tool to detect signals across platforms for child sexual abuse and exploitation. This is a really interesting and valuable tool, I think, and it would be helpful to have transparency into the tools themselves, not just for signals development, but also for things like content detection, especially in live streaming, where it is particularly difficult to execute.
Most content detection tools are designed to identify content at rest, whereas live streaming content is constantly in motion. The various tools that exist to try to identify child sexual abuse and exploitation within live streaming are currently significantly lacking in benchmarks. Yet we have a number of organizations marketing technologies that claim to be able to identify child sexual exploitation and abuse content in live streaming or in video content, and we don't really have a good way to determine whether these products actually work, or work sufficiently, or whether there are sufficient safeguards to address potential errors in the content detection tools. Multi-stakeholder engagement and transparency can help with both of those things: they can help us identify and improve these types of tools, help us understand how they work going forward, and help us deploy better transparency and accountability tools for tech companies themselves and for governments engaged in proper enforcement of their laws. I also wanted to turn back to Robert's point about media literacy and ensuring that we are investing in people's understanding of how to engage in combating CSEA at the local, person-to-person, and user level. I think we definitely need significantly more engagement on that front, and we also need tech companies, both in a cross-platform and a transparency way, to talk to each other about how their reporting processes work. One of the things we see is that your enforcement efforts are sometimes only as good as your reporting processes.
If it is difficult to find the button, the simple design feature of how to report harm and ensure that a report moves forward, that is another simple thing platforms can share information and transparency reporting about, to make sure we are consistently getting better at combating these harms while also protecting free expression and privacy in the process.

Dhanaraj Thakur: Thank you, Kate. We're going to continue the format that Sabrina laid out, so we can now take a break for questions and interventions from the audience and move to the Q&A portion of the session. Deborah, I'm going to turn it over to you.

Deborah Vassallo: Thank you. We'll take questions and interventions from the audience, and again I would ask people to line up by the mic. We do have a few minutes, so we'll take as many as we can. Yes, sir, please start, and if you can, give a short introduction before you share your question or intervention.

Andrew Kempling: Sure. Good morning, everyone. My name is Andrew Kempling. I run a tech and public policy consultancy, and I'm also a trustee of the Internet Watch Foundation. Since we're talking about CSAM, no one has actually given a number, and I think we need one: it's estimated there are roughly 300 million victims of child sexual abuse and exploitation every year globally. That's about 14% of the world's children each year. So, just to put some scale on this, this is a non-trivial problem. To add to the list that Lisa started with at the beginning: a couple of people have mentioned age estimation and verification, and I just wanted to reiterate that, because in a few sessions earlier in the week people asserted it is simply a mechanism to let social media platforms gather even more data about their users. As has rightly been said, there are privacy-preserving mechanisms for age verification and estimation, which are important tools to keep children away from adult content, but importantly also to keep adults off child sites and away from children's accounts, so we need to make better use of them. What hasn't really been said is that research shows us that end-to-end encrypted messaging platforms are widely used to share child sexual abuse material once it's been captured, including video, and privacy is used as an excuse not to stop that; people have weaponised privacy. There are well-known privacy-preserving techniques to block known CSAM from being shared on these platforms, and I'd love to hear the panel's views on why they're not being used. And finally, since tech standards were covered, I think by Sabine: they are changing, to the extent that a lot of existing parental controls and content filtering will stop working because metadata is increasingly being encrypted.
That’s a major problem, not least of which because a lot of the policy community don’t take part in the way that tech standards are defined. So we need to find a better way for getting multi-stakeholder engagement, otherwise we see the problem getting bigger, not smaller, in the community. And then finally, Kate talked about those tools not being known whether they work or not. The IWF has data, we can test tools, so maybe let’s get together, perhaps with Sean’s members afterwards, and we can do some validation whether they actually are effective or not. Thank you.

Dhanaraj Thakur: Great, thank you. Thank you for the offer as well. So maybe we can have some quick reactions to this. First was the point about encryption. Kate, if you had any quick thoughts on that, and then the point about tech standards as well, maybe Lisa or Sabine, you might want to jump in there.

Kate Ruane: Sure, I would love to start with the end-to-end encryption question. I think that oftentimes privacy and safety are placed in tension with each other, and I find that framing a little bit difficult, because I think privacy and safety are very much in line with one another. Folks often point to the distribution of child sexual abuse material through end-to-end encrypted platforms as a reason to create an encryption backdoor, or a justification for no longer using or relying on encrypted technologies. I think that can obscure many of the benefits of encrypted technologies, which are particularly salient for human rights defenders, for journalists, and for people who just want to keep their data private between themselves and the intended recipients, and keep it outside the view not just of governments and other bad actors, but also of tech companies themselves. As tech companies continue to hoover up so much more data about all of us, encrypted services are one of the few, and potentially the only, places where they cannot see the content of the communication, and that becomes more and more important as we see the world changing in front of our eyes. But I want to point to particular research done by Riana Pfefferkorn at Stanford, where she looked at the effectiveness of content-oblivious versus content-aware methods of content moderation. What she found is that, for the vast majority of harmful content, content-oblivious methods of detecting these types of abuse are far more effective than content-scanning or content-aware methods.
Now, CSAM is a very particular type of harm that is probably best detected via things like content matching. But when you weigh the ability to detect it on every single surface across the entire internet against the value of having a safe and effective place to communicate, for national security purposes, for the purposes of journalism,

for the purposes of ensuring privacy from tech companies and from governments that would otherwise harm people, and for so many other reasons, I think there are many other ways to detect CSAM on encrypted services, including user reporting, which has been a very effective way to address that content. We should continue to have encrypted services, and continue to think about privacy and safety as complementary to one another and not necessarily at odds.

Dhanaraj Thakur: All right, thank you. In the interest of time, I'm going to ask the next two people in the line to very briefly offer your intervention or question, and then I'll bring it back to the panel before we move to the next round. So please go ahead, and, as before, please introduce yourself.

Audience: Sure, Shiva Bisasa, Trinidad and Tobago. I'm speaking from the experience of knowing a victim of financial sextortion who ultimately took his life, and of what we've had to go through in the follow-up to that, in investigating and reporting, coming from a developing nation. I see a big gap between some of the things I'm hearing about on stages here and the reality that exists in the developing world. Reporting definitely needs to be improved, and investigative methods need to be improved. If there were the ability for victims to report directly to platforms, that would be good, because we have material we can put forward to the platforms, and we need some outlet to give it to the competent authorities to deal with the matter. The first contributor talked about law enforcement's ability to do things, and I think that is something that needs to be developed further: assistance with developing law enforcement capacity in developing states, as well as general awareness within the general population of some of the online harms that exist out there. That needs developing in the developing world. Sean brought up the issue of financial sextortion, and he also brought up that there's an exit point, a payment aspect to it. A direct question to Sean would be: does the Lantern project also contemplate signals from the financial transactions side? Are you matching or searching within financial transactions to find payments to perpetrators or payments from victims?

Sabrina Vorbau: Thank you. Thank you. And thank you also for sharing your experiences. The next, yes, please go ahead.

Sergio Tavares: Hi, my name is Sérgio Tavares. I'm from SaferNet Brasil, which is the Safer Internet Center for Brazil. I come from a country with almost 200 million internet users; one in three are children and teenagers below 18 years old. Live streaming has also become prevalent in Brazil, and we are seeing a growing number of reports, not only in my country but everywhere. If you look at the U.S., you can see the NCMEC numbers. If you look globally, you can see the INHOPE numbers. The number of reports, the number of cases, is growing everywhere in the world. On the other hand, resources are declining. And my question is: how do we solve this dilemma? Because we need resources to create technology, and we need resources to develop policies and prevention programs. But government resources are declining, and private and industry resources, if not declining, are stable at a very low level. And the problem is growing everywhere. How do we solve this dilemma?

Dhanaraj Thakur: Thank you. Great. Thank you. Okay. So we’ll have some quick reactions from our speakers before moving to the next round. So Sean, there was a question directed to you. Maybe you have some thoughts you can briefly share with us.

Sean Litton: Yeah, thank you. There is a financial component to a lot of crimes against children online. With respect to Lantern specifically, we are piloting with two major global payment providers sharing signals on Lantern to determine the effectiveness of those signals. At this point they are only ingesting signals from the social media, gaming, and other companies, and we'll have a report out later this summer on the effectiveness of that pilot. If it's effective, then we'll scale it up and bring other financial companies onto the platform. So you're right. And I'm very sorry for what happened to your friend; there have been a number of cases of suicide related to financial sextortion. It is a really difficult issue, and law enforcement is a big challenge there, because the perpetrators of the abuse tend to thrive in countries where there's lower law enforcement capacity, while the victims may be in a different country. So even if the report gets to the country where the crime originated, law enforcement may not have sufficient capacity to act on it. This leaves everyone in a bind. But we are piloting with financial companies, and we hope to share those results later this summer.

Dhanaraj Thakur: Thank you. Okay, so we do have another round of questions. I know we have gotten a lot of important feedback from the audience, but I do have to turn it back to Sabrina so we can have another round, and then maybe we can save some time for the Q&A later on as well. Definitely, please stay in line; we will bring you in towards the end of the session.

Sabrina Vorbau: In our final round we want to turn the attention to policy. The need for stronger standard mechanisms, and for a stronger role for law enforcement as CSAM threats evolve, including the rise of self-generated content, was brought up a couple of times already. We heard global numbers and regional numbers, and we will now look more into how national and international policy frameworks should respond. So for our final round, the question to the speakers is: how can national and international policy be strengthened to address emerging forms of abuse, such as self-generated content? Lisa is already on screen, so we will start this final round with you.

Lisa Robbins: Thanks so much, Sabrina. I do of course have some thoughts in relation to the question, but I just wanted to mention quickly two things relevant to the discussion we just had. Firstly, not my part of the OECD, but there has been research done by a part of the OECD called the Financial Action Task Force, which has done specific research on disrupting financial flows relevant to live-streamed sexual abuse and sexual extortion. I'll share that report with Sabrina so it can be shared with the group as well. And in the paper I just mentioned in relation to CSEA, we developed an extensive list of services that facilitate CSEA. That really reflected what was said about cross-platforming and off-platforming from larger services to smaller services, and the need for some scrutiny of the smaller services as well, which I think takes me into the policy question. When we talk about national and international policies and how to strengthen them, certainly the position the OECD advocates is to take a tech-neutral, multi-layered approach, which can range from awareness raising with children and digital literacy, through industry action, to stronger sticks such as industry regulation and law enforcement. That approach is multi-stakeholder, and we should engage in good international collaboration so we don't end up with a fragmented regulatory space in this truly global area. But I think what might be useful for this conversation is to think about specific policy actions that maybe aren't getting the same scrutiny as the broader, overarching ones. A couple that have already been touched on and are getting attention are transparency reporting practices on platforms in a number of areas, where we've talked about transparency not just of what's happening on platform but of how companies are actually using tools, and age assurance, which we've talked about a lot.
I think there are two other areas that would be interesting to focus on for a targeted policy response when we're looking at the safety of children from CSEA, not just in self-generated content and online coercion, but in all its different manifestations. And I'm really pleased that the two I was concerned about and wanted to raise have already been an important part of the discussion today. The first is recidivism on platforms. It has been noted already that law enforcement is overwhelmed, and I, as with my colleagues on the panel, express my sincere apologies and empathy to the gentleman who mentioned the terrible tragedy of his friend. By recidivism on platforms I'm not talking about criminal recidivism, but about bad actors who are banned from an account and are then able to recreate an account without consequences or scrutiny. Now, Sean has already mentioned the Lantern project, and I will obviously let him speak to that. But we do know there are problems with recidivism. Research from Australia's eSafety Commissioner under its transparency reporting program shows that companies have very few safeguards against recidivism, and practices vary; there is limited information sharing across services. eSafety looked at both CSEA and terrorist and violent extremist material and found that bad actors were able to open an account, have it shut down for a violation, and easily open new ones, with really little oversight, within and across platforms, of how the creation of new accounts is managed. That is one area that could be focused on. The second is complaint mechanisms, and I'm really happy that has been mentioned a lot today as well, with a focus on children themselves
and their capacity to make complaints. Kate mentioned this, and Ava mentioned this already today: really listening to kids and understanding complaints from a kid's perspective. Again and again, when we talk to kids about what they want from platforms, they want better complaint mechanisms. They want to understand what it actually means to file a complaint or a report, what the consequences are, and they want responses back so they understand what happened with their complaint. I'm happy to comment on other things, and to mention the work the OECD has done on transparency reporting, age assurance, broader legislation, and international cooperation, but I would posit those two as important areas where policy action could be focused.

Sabrina Vorbau: Thank you so much, Lisa. You mentioned the multi-layered approach, which comes out very strongly from the discussion we're having today, and also the really crucial, strong collaboration needed with law enforcement. I want to bring in Robbert, based on the work that Off Limits is doing, and specifically the hotlines.

Robbert Hoving: Yeah, when I look at policies, I think a lot is already there. In the Netherlands, a lot is there. On the European level, for instance with the DSA now coming into effect, a lot is there. We have enforcement, we have Safer Internet Centers, so we have policies for how to deal with this. Sometimes I think the authorities could go a bit quicker when a party is not cooperating. For instance, Telegram: we had troubles with them in the Netherlands. You could also decide to take them out of the App Store, because we know there's a lot of CSAM there, weapons being sold, et cetera, and I think that directly going after these companies is a very good solution, rather than, for instance, only looking at privacy rights. Coming to the buzzwords of the IGF, and this is my first IGF, so maybe there are more buzzwords I didn't hear, I heard a lot of "multi-stakeholder approach" and "public-private cooperation", but I think that is what we need more of. How we're sitting in this room, what we're doing at the IGF, it's like gold: you need to brush it and it starts to shine. Because the dominant theme with online abuse, and with new forms of online abuse, is content.
That means to me that you should have an integral approach to how you deal with content online, because another theme that will be dominant in new forms of abuse is that it will start as harmful but lawful. It can still have tremendous effects on people offline, on people of the LGBTQ+ community; it can be very racist, for instance with memes. But it is very clear that the dominant theme is content, and a lot of the time it's lawful, it's allowed to post these things, while we know that the people who are victims need help. By working together in a public-private collaboration, you can pick up signals together, you can do triage, and from those signals you can see that maybe there is a trend online that, as a society, we really don't accept anymore. Then you might start to change your legislation, because out of all those signals you decide you want to change the law. That's something we did, for instance, in the Netherlands with doxxing: collecting data to intimidate someone, and spreading that data with addresses and so on, has been criminalized since the 1st of January 2024. So I think picking up these signals and working integrally on the content that's online is what we should do more. As mentioned, I think we have a perfect example in the Netherlands, the ECP, over there in the room. The way we did that in the Netherlands, that multi-stakeholder, public-private working together, actually building something and looking at how we should approach this content, and being curious, curious as a company, as education, as civil society, and, for instance, as ministries, is much more the way forward, because a lot of the policies and legislation, in my opinion, is already there.

Sabrina Vorbau: Thank you, and thank you for sharing that best-practice example. If colleagues are interested, we are also here in the IGF village with Better Internet for Kids and others, to continue the conversation on a more personal level later on. Before we conclude this round, I would like to bring in two more colleagues, turning now to Kate for her points.

Kate Ruane: Yes, very quickly, because I would like to get to questions as well. On policy I can think of two specific things I would like to see happen. First, and I have mentioned this already, I would like to see more transparency from companies regarding how they are identifying and removing child sexual exploitation and abuse. We don't actually know the prevalence of this type of abuse on many platforms, because we do not have enough data from the platforms regarding how much they encounter. Second, it would be interesting to see the effectiveness of the tools they are using. Right now there are, generally speaking, I think, three categories of tools being used to prevent, identify, and remove child sexual abuse material. The first is design-based features: for example, in order to live stream, you need to have a certain number of followers, or an account that has existed for a certain amount of time, or you need to go through an age assurance process to show you are of a particular age before you can begin to live stream. It would be good to know the degree to which those types of tools are reducing the number of live streams or the amount of problematic content shared. It would also be good to know the data on which content detection tools are trained. Content detection tools like PhotoDNA, we know, are trained on known CSAM, but they are specifically designed to identify particular images at rest. If we are talking about content in live streaming, content that is moving, content that is live, it would be helpful to understand, A, the data sets being used to train these types of tools, B, how the data is being sourced and whether that is being done ethically, and C, whether there has been consent to the use of the training data, especially if it is being trained on existing child sexual abuse material.
Currently, we are not aware, or at least not sufficiently aware, of how these tools are being created and trained, and yet they are being marketed as tools to detect content in live streaming. And then the last thing we need to know is how well signal-detection tools are working: tools that, as the Tech Coalition talked about, look at where your IP address is coming from, or at signals regarding the content associated with a live stream, to estimate the likelihood that child sexual abuse is happening within specific content or within a live stream. Data on how successful those are, and on what mitigation efforts look like when content is misidentified as CSAM, would, I think, be helpful from a policy perspective. The last thing I wanted to talk about is law enforcement. In the United States, one of the biggest problems we have is under-resourced law enforcement. We are actually doing a relatively decent job of identifying CSAM, particularly when it is at rest, and reporting it to NCMEC. But the capacity of law enforcement to address that identified harm is under-resourced. And another tool that is going to become more and more necessary is the ability to separate synthetic, AI-generated CSAM from real CSAM that has been created using an actual child, because that is going to be essential to helping law enforcement identify children in harm's way, so that they can engage in enforcement efforts more efficiently.

Sabrina Vorbau: Thank you. Thank you for your points, and also for concluding a bit on the transparency aspect, and how to hold companies more accountable. As Robbert mentioned already, in the EU we have the Digital Services Act, which came into force recently with a specific article on the protection of minors. This is also a good example where, for instance, our colleagues from the Safer Internet Centers are very active, transmitting this transparency through education to children and young people, but also to parents and schools, to make policymaking in general more accessible. One more intervention from our speakers for this final round: we have Sabine on the screen. Last points from your side; the floor is yours.

Sabine Witting: Thanks so much. If you allow me to approach the question from the legal angle, I think there are various areas of law that still require strengthening, at both the international and the national level. There is a bit of a misconception that technology-facilitated child sexual abuse and exploitation is equally criminalized across the world, and that we all have the same understanding of what that means. That's certainly not the case. Much more work needs to be done to ensure strong national legislation, especially in areas such as live streaming: we still have a lot of countries that do not criminalize the mere accessing of child sexual abuse material as a separate criminal offense. That's because the main focus of international law, for example the Optional Protocol to the CRC, was always on possession, because that was the prevalent issue in the 90s when these conventions were drafted. So there's still quite a push needed for emerging issues such as live streaming to be addressed in national criminal law. The same goes for self-generated content. Self-generated content is often considered a homogeneous category that we simply need to criminalize. However, it is a much more complex issue from a children's rights perspective, because there is content that is produced voluntarily and consensually by adolescents above the age of consent to sexual activity, and these issues also need to be approached with the same children's-rights-based care, within the context in which the content is produced. And I want to go back to one of the points that was mentioned a few times, the question of law enforcement. I would really like to see governments put the same effort into law enforcement and strengthening the criminal justice framework that they currently put into platform regulation.
Because there is, I feel, an over-focus at the moment on the responsibility of platforms, while we know that when these cases really reach the court system, most of them either fall through the cracks, or the children who are forced to go through the court system leave extremely traumatized. This is for various reasons. First of all, we still lack protective measures for children in the criminal justice system, for example protection from cross-examination. There is insufficient court preparation for children, and the presiding magistrates and the prosecutors who interview and examine children are not trained to do so in an age-appropriate and trauma-informed way. I think this is a huge, huge gap, and I would really like to see an equal effort to strengthen that system. The same applies especially for victims of technology-facilitated child sexual abuse and exploitation: a lot of measures typically used in a child-friendly justice process, for example the use of CCTV cameras, can be quite traumatizing for a child who is a victim of live-streamed child sexual abuse and exploitation, because you put the child in front of a camera again in the criminal justice system, even though a camera played a crucial role in the abuse and exploitation. Much more consideration needs to be paid to the use of technology in the criminal justice sector, and to how it might impact children who have experienced technology-facilitated abuse and exploitation. My last point is on the role of the financial sector, because I think that also came up quite a few times, especially in the context of live streaming of child sexual abuse and exploitation done in a commercial way. The financial sector can really play a very crucial role in flagging suspicious payments.
I think mandatory reporting is really essential to make sure that financial institutions report suspicious transactions that might be linked to live-streamed child sexual abuse and exploitation, and one way to do that is to consider these kinds of offenses predicate offenses under anti-money-laundering laws. That would oblige financial institutions to file suspicious transaction reports. And as we know, especially when it comes to organized crime, which includes a lot of cases of live-streamed CSEA, "follow the money" is one of the most important leads for law enforcement, one of the most important starting points. So when we talk about industry responsibility, let's not leave out the financial sector, especially in the context of commercial sexual exploitation and live streaming. Thank you.

Sabrina Vorbau: Thank you very much, Sabine, for closing us out with a lot of action points, and for very clearly reminding us that in all of this we should not forget the well-being of the child, which really needs to be, and should be, at the center of the action. Thank you so much. For the last couple of minutes, we would again like to give the audience, here in the room and online, the opportunity for interventions and questions. Please take the microphone and briefly introduce yourself.

Audience: So my name is Jameson Cruz. I live in Manaus, one of the biggest Brazilian cities, located in the middle of the Amazon forest. I have proudly represented the youth delegation of the Brazilian Internet Young Governors Training Program since 2022, and during the Internet Forum in Brazil, the largest IGF event in the world, I proposed a workshop focused on the protection of children and adolescents in the online environment. While addressing issues such as sexual exploitation is crucial, we must also emphasize the growing need to educate and communicate with people about the dangers of overexposure online. This is a challenge that has intensified in recent years. Violence, discrimination, and the sexualization of minors have expanded beyond the physical and now permeate virtual space. The urgency to act is clear. We need platform regulation, public policies, education, and digital literacy to ensure that the internet is a safe space for all, especially for our youngest users. Thank you.

Sabrina Vorbau: Thank you so much. It is wonderful to have you here, along with many other youth from all over the world, especially the Global South, participating in the IGF. Programs like the Youth IGF are crucial and important. And as you rightly said, there is a need to put more youth voices into the different sessions within the IGF, and definitely also to put more spotlight on children's rights.

Sabrina Vorbau: Thank you. We'll turn to our next speaker here in the room.

Audience: Hi, my name is Cosima and I work with the UK Government. I've been working on digital literacy and policy, including with the Safer Internet Centre, for about six years, and I think digital literacy is an incredibly powerful tool. Since there's not lots of time, I would just like your insight on how we can frame digital literacy initiatives sensitively, so that when we implement them, children can understand what we're talking about, while we're also not breaching a line. And then just quickly on policy: I really appreciated your insights. I wanted to flag that, beyond the EU, several countries are showing apprehension toward regulating AI. So I think it's important to see how we can navigate technology-facilitated gender-based violence, and more specifically CSAM, because if we're outlawing regulation of AI, that also includes regulation of AI in the context of CSAM. Thank you.

Sabrina Vorbau: Thank you so much, and thank you for your participation as well. Kate, I'm going to turn it over to you.

Kate Ruane: Sure. I'm not aware of every single policy effort around the world right now, but the U.S. is currently considering a policy which would prevent U.S. states from regulating AI to some extent. Our organization, the Center for Democracy and Technology, has identified this particular provision as deeply concerning, and I think it links to your point: the generation of non-consensual intimate images or synthetic CSAM is certainly one problem that might arise. One thing to think about, though, is whether existing laws, without naming AI specifically, already cover that particular type of abuse. Hopefully there are laws around the world that, while they might not say AI, nonetheless encompass AI-generated child sexual abuse material. Again, I don't know enough about every jurisdiction, but from our perspective, preventing the regulation of AI whole cloth, without considering the human rights impacts of that type of action, is deeply problematic.

Sabrina Vorbau: Thank you. We have two last interventions and questions in the room, and then we will close with Dhanaraj and some takeaways from the session. So maybe we can take both questions and then respond to them. Please go ahead. Thanks.

Audience: My name is Raoul Plummer. I’m with Electronic Frontier Finland. My first IGF was in João Pessoa, and I’ve been to quite a few of these. And this is the first time I actually have to take a little space to commend something [This portion has been removed from the record for violating the IGF Code of Conduct, particularly the stipulation to “focus discussion or remarks on issues rather than on particular actors, whether they be individuals, groups, organizations or governments, and refrain from personal or ad-hominem attacks”]. And this is a very polarized issue as it is. It’s very tough and complicated. I totally appreciate saying that privacy and children’s rights can be completely aligned. But this kind of polarization [This portion has been removed from the record for violating the IGF Code of Conduct, particularly the stipulation to “focus discussion or remarks on issues rather than on particular actors, whether they be individuals, groups, organizations or governments, and refrain from personal or ad-hominem attacks”]. Thanks.

Sabrina Vorbau: Thank you. We will give the opportunity for follow-up after the session. Last speaker, please.

Audience: I’m from the Finnish Green Party. My question mostly relates to the issue that, even this week, we have heard a lot from different international actors and organizations about efforts to take down this type of content. But quite often the conversation ends with washing their hands: yes, we took down this content, we don’t have jurisdiction to go further; the investigation and everything else needs to be left to law enforcement, which differs severely between countries, as does legislation. And quite often law enforcement does not have the resources, in personnel, skills, or understanding of current technologies, to actually do anything about it.
And thus we get those repeat offenders and recidivism on those platforms as well as larger crime syndicates which operate through this or earn some part of their income through perpetuating this material or disseminating it. And it just seems still very confusing that this is such a long-lived issue, but there is no proper official body to unite these platforms and law enforcement. I hope to see some…

Sabrina Vorbau: I think we will all continue trying our best to collaborate in the way we have been doing here today. This session was just a glimpse of the conversation, opening it up, and hopefully continuing it in mutual respect of everyone’s opinions and the work we are doing. We are up against time, but I would like to give a final minute to Dhanaraj to take us through some of the takeaways from our session.

Dhanaraj Thakur: Yes, thank you, Sabrina. We are at time, so I’ll be very brief. We are having a very important conversation here, and I think we all recognize that this is a very serious problem with significant, harmful, life-threatening impacts on children, their families, and communities. The speakers, as well as the audience members, highlighted many important paths forward, particularly different kinds of recommendations. These included, for example, a point made at the very start: centering children in the design of solutions and in addressing the problem, from the design of technologies right through to the criminal justice system, centering their views as well as their well-being. Many speakers mentioned the relevance and importance of multistakeholder approaches, and more specifically how that relates not just to cross-platform and public-private approaches, but also to standard setting and to improving coordination with law enforcement. We also talked about technical solutions, and about where those of us developing policy require more information and transparency around these kinds of technologies. So transparency is a theme that came up several times as well, both regarding the efficacy of technologies and in trying to disaggregate, for example, emerging problems around synthetic CSAM versus actual CSAM. There is a lot that was discussed, and given the line of participants that came up, we didn’t even get to every point. We will be sharing a summary report of this afterwards as well. What I would like to end on, of course, is just to thank all of our speakers.
We really appreciate everyone who was able to join from many different places, here on stage with Robert and Kate, but also online with Patricia, Ayo, Sabine, and Sean as well. And then thanks, of course, to Sabrina and Deborah,

Sabrina Vorbau: our co-organizers on this. Thank you, everyone. Thank you to all of the organizers, and thank you all for your participation.

L

Lisa Robbins

Speech speed

170 words per minute

Speech length

1767 words

Speech time

623 seconds

Safety by design approach with front-end prevention and back-end detection technologies is essential

Explanation

Lisa argues that protecting children from CSEA requires incorporating technologies at both the front end to prevent harm from occurring and at the backend to detect harm should it occur otherwise. This approach involves putting in place certain technologies into service design and delivery as part of a comprehensive safety by design framework.

Evidence

OECD published a report that posits eight key components for digital safety by design, with technologies falling into buckets of front-end prevention tools and backend detection tools

Major discussion point

Technology Tools and Safety by Design for CSAM Prevention

Topics

Cybersecurity | Human rights

Age assurance is a key component but only two of 50 services systematically assure age on account creation

Explanation

Lisa identifies age assurance as a crucial component of safety by design because companies need to know who their users are to put in place child protective safeguards. However, research shows that very few services actually implement systematic age assurance, revealing significant gaps in current practices.

Evidence

OECD research of 50 services showed only two systematically assure age on account creation; paper on age-related policies was released the day of the session

Major discussion point

Technology Tools and Safety by Design for CSAM Prevention

Topics

Human rights | Legal and regulatory

Companies rarely provide detailed information on moderation practices specific to live streaming

Explanation

Lisa’s research reveals that companies provide very little transparency about their moderation practices, particularly for live streaming content. This lack of detailed information makes it difficult for policymakers to understand where targeted policy action and deployment of safeguarding technology is needed.

Evidence

OECD report examining 80 services found only two provided information on moderation practices specific to live streaming, and only a few provide metrics breaking down live streaming incidents

Major discussion point

Transparency and Accountability in Platform Practices

Topics

Legal and regulatory | Human rights

Agreed with

Agreed on

Transparency from platforms about their practices is insufficient

Protecting children from CSEA is directly protecting their rights to freedom from violence and dignity

Explanation

Lisa emphasizes that protecting children from child sexual exploitation and abuse should not just be done in a way that protects their rights, but that this protection IS protecting their rights. She argues that CSEA directly infringes upon children’s dignity, privacy, and their ability to access other rights online.

Evidence

CSEA infringes upon dignity rights, privacy, and safe online spaces enable children to access rights such as opinion, assembly, information, education and health

Major discussion point

Children’s Rights and Participation in Solutions

Topics

Human rights

Better transparency is needed regarding how companies identify and remove CSEA content

Explanation

Lisa advocates for more transparency from companies about their content detection and removal processes. This includes understanding the prevalence of abuse on platforms, the effectiveness of tools being used, and how well different detection methods are working.

Evidence

Research shows companies have very few safeguards against recidivism and limited information sharing across services; Australia’s eSafety Commissioner research on transparency reporting

Major discussion point

Transparency and Accountability in Platform Practices

Topics

Legal and regulatory | Human rights

Agreed with

Agreed on

Cross-platform collaboration is necessary due to the nature of online abuse

Disagreed with

Disagreed on

Focus on Platform Regulation vs. Law Enforcement

Children need better complaint mechanisms with clear understanding of consequences and responses

Explanation

Lisa highlights that research consistently shows children want better complaint mechanisms from platforms. They want to understand what filing a complaint means, what the consequences are, and they want responses back to understand what happened with their complaint.

Evidence

Research on children shows that when talking to kids about what they want from platforms, they consistently ask for better complaint mechanisms and clearer communication about the complaint process

Major discussion point

Children’s Rights and Participation in Solutions

Topics

Human rights | Legal and regulatory

Agreed with

Agreed on

Children’s voices and participation must be central to solution design

A

Aidam Amenyah

Speech speed

139 words per minute

Speech length

560 words

Speech time

241 seconds

AI-powered platforms should be sensitive to detect, prevent, and report live streaming situations affecting children

Explanation

Aidam argues that as AI technology becomes available, platforms should be designed to be sensitive enough to detect, prevent, and report live streaming situations that affect children. This technological solution should be developed with input from young people themselves to ensure effectiveness.

Evidence

Young people in Africa encounter harmful content virtually every day and specifically mentioned AI as a solution they want to see implemented

Major discussion point

Technology Tools and Safety by Design for CSAM Prevention

Topics

Cybersecurity | Human rights

Laws are not effectively enforced and internet service providers lack accountability

Explanation

Aidam points out that young people in Africa observe that existing laws are not effectively enforced and appear not to be working in their region. He argues that internet service providers should have more responsibility and be held accountable for blocking harmful live streams involving children.

Evidence

Young people reported that laws are not biting and not working effectively in their country/region; ISPs are not blocking live streams of children adequately because there are no laws holding them accountable

Major discussion point

Law Enforcement and Legal Framework Challenges

Topics

Legal and regulatory | Human rights

Agreed with

Agreed on

Law enforcement capacity and resources are inadequate globally

Children want involvement in designing protection solutions to ensure they are fit for purpose

Explanation

Aidam emphasizes that young people want to be directly involved in designing solutions that affect them, rather than having adults create designs without their input. They also want human oversight on AI moderation to prevent false positives and ensure solutions are intuitive for children to use.

Evidence

Young people recommended involvement of children in local communities to design solutions, human oversight on AI moderation, and emphasized that adult-designed solutions may not really impact children effectively

Major discussion point

Children’s Rights and Participation in Solutions

Topics

Human rights

Agreed with

Agreed on

Children’s voices and participation must be central to solution design

Africa faces uneven digital literacy and rapid technology growth exposing children to incidents

Explanation

Aidam describes the unique situation in Africa where there is uneven digital literacy combined with rapid technological growth. While technology helps to some extent, it also exposes children to numerous incidents, with young people reporting they encounter harmful content almost daily.

Evidence

Child Online Africa interacts with young people who report encountering harmful content virtually every day in online spaces

Major discussion point

Regional Perspectives and Challenges

Topics

Development | Human rights

P

Patricia Aurora

Speech speed

145 words per minute

Speech length

766 words

Speech time

316 seconds

Tech platforms need standard mechanisms and robust legal frameworks to protect children effectively

Explanation

Patricia argues that tech platforms are failing to provide standard mechanisms to protect children, partly because there isn’t a robust legal framework in place. Currently, India borrows sections from various laws that weren’t originally designed to address online child protection, creating gaps in protection.

Evidence

India currently borrows sections from various laws to protect children online, which were not originally designed for online child protection; children facing abuse don’t know how to report situations

Major discussion point

Technology Tools and Safety by Design for CSAM Prevention

Topics

Legal and regulatory | Human rights

Agreed with

Agreed on

Transparency from platforms about their practices is insufficient

Live streaming platforms are used for monetizing pre-recorded child abuse content

Explanation

Patricia highlights that live streaming has become a platform where pre-recorded videos containing child abuse are being monetized for profit. This cross-platform use of child-abusive content has reached peak levels and represents a significant threat to children’s safety.

Evidence

NCMEC CyberTipline reports show 9 million children out of 36.2 million population in India have been targeted; pre-recorded videos are being monetized on live streaming platforms

Major discussion point

Financial Aspects and Commercial Exploitation

Topics

Cybersecurity | Human rights

India lacks robust legal frameworks and borrows sections from various laws for child protection

Explanation

Patricia explains that India doesn’t have comprehensive legal frameworks specifically designed for online child protection. Instead, the country relies on borrowing sections from various existing laws that weren’t originally intended to address online child safety issues, creating gaps in protection.

Evidence

Digital Personal Data Protection Act and Digital India Act are being developed to address these gaps and reshape child safety in India

Major discussion point

Regional Perspectives and Challenges

Topics

Legal and regulatory | Human rights

Cultural contexts must be considered when addressing child safety in different regions

Explanation

Patricia emphasizes the importance of considering cultural aspects when addressing child safety online, particularly in the context of live streaming. She argues that solutions need to account for how different cultures view and approach child safety in online spaces.

Evidence

Discussion of cultural lens needed for safety in online spaces and how freedom of internet access can turn problematic without adequate cultural considerations

Major discussion point

Regional Perspectives and Challenges

Topics

Sociocultural | Human rights

S

Sean Litton

Speech speed

136 words per minute

Speech length

1141 words

Speech time

503 seconds

Session metadata and third-party signals can generate risk scores for broadcasts without analyzing actual content

Explanation

Sean describes a tool being developed with a major live streaming platform that uses session metadata and third-party signals to generate risk scores for broadcasts. This approach preserves privacy by not analyzing the actual content of live streams while still enabling child safety teams to identify potentially harmful broadcasts.

Evidence

Tool uses participant characteristics like country of origin and use of anonymization services; development began fall 2024 with testing planned for summer 2025

Major discussion point

Technology Tools and Safety by Design for CSAM Prevention

Topics

Cybersecurity | Human rights
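The metadata-based risk-scoring approach described above can be illustrated with a toy sketch. Everything here is hypothetical: the feature names, weights, and country codes are invented for illustration, since the actual tool discussed in the session is not public. The key point the sketch captures is that the score is derived solely from session metadata, never from the broadcast content itself.

```python
# Toy sketch of metadata-only risk scoring for a live broadcast.
# All features and weights are hypothetical, for illustration only.

ANONYMIZATION_WEIGHT = 0.4   # broadcast routed through an anonymization service
HIGH_RISK_GEO_WEIGHT = 0.3   # participant geography flagged by safety teams
THIRD_PARTY_SIGNAL_WEIGHT = 0.3  # e.g. a cross-platform signal about the account

HIGH_RISK_REGIONS = {"XX", "YY"}  # placeholder country codes

def broadcast_risk_score(session: dict) -> float:
    """Combine session metadata into a 0..1 risk score for triage.

    Note: the function never inspects the stream's audio or video,
    only metadata about the session, which preserves content privacy.
    """
    score = 0.0
    if session.get("uses_anonymization_service"):
        score += ANONYMIZATION_WEIGHT
    if session.get("country_code") in HIGH_RISK_REGIONS:
        score += HIGH_RISK_GEO_WEIGHT
    if session.get("third_party_flag"):
        score += THIRD_PARTY_SIGNAL_WEIGHT
    return min(score, 1.0)
```

A child safety team could then review only broadcasts whose score exceeds a threshold, rather than monitoring content directly.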

On-device machine learning can detect nudity while preserving privacy through local processing

Explanation

Sean highlights Apple’s communication safety feature as an example of privacy-preserving technology that uses on-device machine learning to detect nudity. Because analysis happens on the device, neither Apple nor third parties can observe the content, and the feature has been expanded to FaceTime calls and made available to all iOS developers.

Evidence

Apple’s feature blurs detected images and shows age-appropriate safety information; recently expanded to FaceTime calls and available by API for free to all iOS developers

Major discussion point

Technology Tools and Safety by Design for CSAM Prevention

Topics

Cybersecurity | Human rights

Bad actors exploit multiple services across the tech ecosystem, requiring industry collaboration

Explanation

Sean explains that perpetrators typically use multiple platforms in their abuse activities – they might contact a child on a gaming platform, move them to private messaging, and then use live streaming platforms. Individual companies lack the complete picture of this cross-platform abuse, making collaboration essential.

Evidence

Example of abuse spanning social media, gaming, live streaming, and payment apps; individual companies are unaware of activities on other platforms

Major discussion point

Cross-Platform Collaboration and Multi-Stakeholder Approaches

Topics

Cybersecurity | Legal and regulatory

Agreed with

Agreed on

Cross-platform collaboration is necessary due to the nature of online abuse

Lantern program enables secure signal sharing between companies about policy-violating accounts and activities

Explanation

Sean describes Lantern as the first cross-platform signal sharing program that helps companies strengthen enforcement of their child safety policies. It allows companies to securely share signals about accounts and activities that violate child sexual exploitation and abuse policies, revealing a fuller picture of harm.

Evidence

Members shared hundreds of thousands of signals through Lantern last year, leading to account actions, content removal, and disruption of offender networks; helped flag contact and trafficking cases

Major discussion point

Cross-Platform Collaboration and Multi-Stakeholder Approaches

Topics

Cybersecurity | Legal and regulatory
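One common way cross-platform signal sharing is made privacy-preserving is to share hashed identifiers rather than raw values, so a partner platform can match a signal against accounts it already knows without learning new personal data. The sketch below assumes such a hash-matching scheme purely for illustration; Lantern’s actual protocol and data formats are not described in the session.

```python
# Illustrative sketch of hash-based signal sharing between platforms.
# The scheme shown (normalized SHA-256 of an identifier) is an assumption
# for illustration, not Lantern's actual protocol.

import hashlib

def _hash_identifier(identifier: str) -> str:
    """Normalize and hash an identifier (e.g. an email address)."""
    return hashlib.sha256(identifier.strip().lower().encode("utf-8")).hexdigest()

def share_signal(account_identifier: str, violation_type: str) -> dict:
    """Produce a shareable signal: the hash plus a policy-violation label."""
    return {"signal": _hash_identifier(account_identifier), "type": violation_type}

def match_local_accounts(shared: dict, local_identifiers: list[str]) -> list[str]:
    """Return local accounts whose hash matches the shared signal."""
    return [
        ident for ident in local_identifiers
        if _hash_identifier(ident) == shared["signal"]
    ]
```

A receiving platform that finds a match can then review the flagged account under its own policies, which is consistent with the session’s point that signals inform enforcement rather than automate it.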

Financial extortion is a major component of crimes against children online

Explanation

Sean acknowledges that there is a significant financial component to many crimes against children online, including cases that have led to suicides. He notes that perpetrators often operate from countries with lower law enforcement capacity while victims may be in different countries, creating enforcement challenges.

Evidence

Lantern is piloting with two major global payment providers; there have been cases of suicides related to financial extortion; perpetrators often operate from countries with lower law enforcement capacity

Major discussion point

Financial Aspects and Commercial Exploitation

Topics

Cybersecurity | Economic

Law enforcement capacity varies significantly between countries, creating enforcement gaps

Explanation

Sean points out that even when reports reach countries where crimes originate, law enforcement may not have sufficient capacity to act on the reports. This creates a challenging situation where everyone is left in a bind, as perpetrators can exploit jurisdictions with weaker enforcement capabilities.

Evidence

Perpetrators tend to operate in countries with lower law enforcement capacity while victims may be in different countries; law enforcement may lack sufficient capacity to act on reports

Major discussion point

Law Enforcement and Legal Framework Challenges

Topics

Legal and regulatory | Development

Agreed with

Agreed on

Law enforcement capacity and resources are inadequate globally

R

Robbert Hoving

Speech speed

180 words per minute

Speech length

1383 words

Speech time

459 seconds

One-third of hotline reports involve self-generated content, with 75% being prepubescent children

Explanation

Robbert shares alarming statistics from Off-Limits hotline data showing that one-third of all reports involve self-generated content – meaning young children making sexualized images themselves. Most concerning is that 75% of these self-generated reports involve prepubescent children, not teenagers, indicating very young children are involved.

Evidence

Data from Off-Limits hotline shows 1/3 of reports are self-generated content; 75% of self-generated content involves prepubescent children; content made through webcams, phones, etc.

Major discussion point

Self-Generated Content and Emerging Threats

Topics

Human rights | Cybersecurity

Companies should collaborate more with schools on curriculum development rather than just pushing tools

Explanation

Robbert argues that instead of simply providing tools to schools, tech companies should help build up educational curriculum in collaboration with schools. He emphasizes that if everything is placed on schools for education, there should also be adequate capacity at schools to handle these responsibilities.

Evidence

Combining education efforts with the tech sector could be beneficial; schools need capacity building to handle educational responsibilities around online safety

Major discussion point

Cross-Platform Collaboration and Multi-Stakeholder Approaches

Topics

Sociocultural | Human rights

Agreed with

Agreed on

Multi-stakeholder collaboration is essential for effective child protection solutions

Safer Internet Centers provide safe spaces for victims and help with content removal

Explanation

Robbert explains that Safer Internet Centers serve as safe spaces for both parents and victims, providing someone to listen to them, helping with content removal, and assisting with further action like going to police. They also provide awareness raising to help schools prevent incidents from occurring.

Evidence

Off-Limits operates hotline for child abuse material, helpline for other transgressive behavior, and Stop It Now prevention line; they help with content takedown and connecting victims to appropriate help

Major discussion point

Education and Prevention Strategies

Topics

Human rights | Sociocultural

Grooming in online spaces and risk behavior from previous abuse contribute to self-generation

Explanation

Robbert explains that the high percentage of prepubescent self-generated content may result from grooming in online spaces, risk behavior seeking attention due to previous abuse, or children enacting behaviors they’ve seen online at too young an age. This highlights the complex factors behind self-generated content.

Evidence

Research with Ministry of Justice planned to investigate causes; perpetrators calling Stop It Now are more than half males under 26 who escalated from accessing adult websites at young age

Major discussion point

Self-Generated Content and Emerging Threats

Topics

Human rights | Cybersecurity

Prevention through awareness raising and school materials is crucial alongside enforcement

Explanation

Robbert emphasizes that while there will always be people with bad intent that cannot be completely stopped, much more can be done in prevention through awareness raising and providing schools with materials. He argues for strengthening prevention efforts based on data coming into Safer Internet Centers.

Evidence

Safer Internet Centers provide awareness raising and help schools with materials; data from centers shows prevention opportunities

Major discussion point

Education and Prevention Strategies

Topics

Sociocultural | Human rights

K

Kate Ruane

Speech speed

147 words per minute

Speech length

1863 words

Speech time

755 seconds

Cross-platform efforts create significant risks for human rights and need multi-stakeholder engagement for transparency

Explanation

Kate argues that while cross-platform efforts are necessary to address CSEA, they create even more significant risks for human rights like free expression and privacy. Multi-stakeholder engagement is essential to ensure that responses remain proportionate and that mistakes don’t disproportionately impact innocent people.

Evidence

Tech Coalition has done human rights impact assessments on its CSEA efforts; need for transparency and appeals processes in cross-platform information sharing

Major discussion point

Cross-Platform Collaboration and Multi-Stakeholder Approaches

Topics

Human rights | Legal and regulatory

Agreed with

Agreed on

Cross-platform collaboration is necessary due to the nature of online abuse

Companies need to share information about reporting process effectiveness and design features

Explanation

Kate emphasizes that enforcement efforts are only as good as reporting processes, and simple design features like how easy it is to find the report button matter significantly. She argues that platforms should share transparency reporting about their reporting processes to consistently improve how they combat harm while protecting rights.

Evidence

Simple design features like report button placement affect effectiveness; platforms should share information about how their reporting processes work

Major discussion point

Transparency and Accountability in Platform Practices

Topics

Human rights | Legal and regulatory

Agreed with

Agreed on

Transparency from platforms about their practices is insufficient

Disagreed with

Disagreed on

Focus on Platform Regulation vs. Law Enforcement

Privacy and safety should be viewed as complementary rather than opposing values

Explanation

Kate argues against framing privacy and safety as being in tension with each other, stating they are very much aligned. She emphasizes that encrypted technologies provide essential protection for human rights defenders, journalists, and people wanting to keep their data private from both governments and tech companies.

Evidence

Research by Riana Pfefferkorn at Stanford showed content-oblivious methods are more effective than content-aware methods for most harmful content; encrypted services are one of few places where tech companies cannot see communication content

Major discussion point

Privacy and Encryption Considerations

Topics

Human rights | Cybersecurity

Disagreed with

Disagreed on

Privacy vs. Safety in Encryption

End-to-end encryption provides essential protection for human rights defenders and journalists

Explanation

Kate defends encrypted technologies as particularly important for human rights defenders, journalists, and people who want privacy from governments and tech companies. She argues that as tech companies collect more data, encrypted services become one of the few places where they cannot see communication content.

Evidence

Encrypted technologies are salient for human rights defenders and journalists; tech companies continue to collect vast amounts of data, making encrypted services increasingly important

Major discussion point

Privacy and Encryption Considerations

Topics

Human rights | Cybersecurity

Under-resourced law enforcement struggles with identification and prosecution of perpetrators

Explanation

Kate identifies under-resourced law enforcement as one of the biggest problems in the United States. While the country does relatively well at identifying and reporting CSAM to NCMEC, the ability to address identified harm through law enforcement action is significantly under-resourced.

Evidence

U.S. does decent job identifying CSAM and reporting to NCMEC but law enforcement response to identified harm is under-resourced

Major discussion point

Law Enforcement and Legal Framework Challenges

Topics

Legal and regulatory | Development

Agreed with

Agreed on

Law enforcement capacity and resources are inadequate globally

AI-generated synthetic CSAM needs to be separated from real CSAM for effective law enforcement

Explanation

Kate argues that it will become increasingly necessary to separate synthetic, AI-generated CSAM from real CSAM created using actual children. This separation is essential to help law enforcement identify children who are actually in harm and engage in enforcement efforts more efficiently.

Evidence

Tool needed to distinguish between synthetic and real CSAM to help law enforcement identify actual children in harm

Major discussion point

Self-Generated Content and Emerging Threats

Topics

Cybersecurity | Legal and regulatory

S

Sabine Witting

Speech speed

169 words per minute

Speech length

1377 words

Speech time

487 seconds

Multi-stakeholder collaboration is essential beyond just cross-platform efforts, including affected communities

Explanation

Sabine argues that while cross-platform collaboration is important, it’s not sufficient alone. Multi-stakeholder efforts must include all people affected by technologies – human rights advocates, child rights advocates, academia, parents, and children themselves – to effectively address competing rights issues.

Evidence

Example of IEEE standard on prevention of CSEA in generative AI being developed with human rights advocates, industry, and tech experts with strong Global South representation

Major discussion point

Cross-Platform Collaboration and Multi-Stakeholder Approaches

Topics

Human rights | Legal and regulatory

Agreed with

Agreed on

Cross-platform collaboration is necessary due to the nature of online abuse

Global South representation is often underrepresented in industry standard-making processes

Explanation

Sabine points out that industry standards are often not accessible and are drafted by industry or age assurance providers themselves, potentially creating bias. Representatives from the Global South are usually underrepresented in these processes, which is problematic given the important role industry standards play.

Evidence

Industry standards are often drafted by age assurance providers themselves; Global South underrepresented in standard-making; generative AI issues differ between Global South and Global North

Major discussion point

Cross-Platform Collaboration and Multi-Stakeholder Approaches

Topics

Development | Legal and regulatory

Technology-facilitated child sexual abuse is not equally criminalized across the world

Explanation

Sabine explains that there’s a misconception that technology-facilitated child sexual abuse is equally criminalized globally with the same understanding. Many countries still don’t criminalize mere accessing of child sexual abuse material as a separate offense, and issues like live streaming need to be addressed in national criminal law.

Evidence

Many countries lack criminalization of accessing CSAM as separate offense; international law like optional protocol to CRC focused on possession due to 90s context; live streaming issues need addressing in national criminal law

Major discussion point

Law Enforcement and Legal Framework Challenges

Topics

Legal and regulatory | Human rights

Criminal justice systems lack protective measures and trauma-informed approaches for child victims

Explanation

Sabine argues that governments should put equal effort into strengthening criminal justice frameworks as they do into platform regulation. She highlights that most cases either fall through cracks or leave children traumatized due to lack of protective measures, insufficient court preparation, and inadequately trained personnel.

Evidence

Lack of protection from cross-examination; insufficient court preparation; prosecutors and magistrates not trained in age-appropriate, trauma-informed approaches; CCTV cameras can be traumatizing for live stream abuse victims

Major discussion point

Law Enforcement and Legal Framework Challenges

Topics

Legal and regulatory | Human rights

Agreed with

Agreed on

Law enforcement capacity and resources are inadequate globally

Disagreed with

Disagreed on

Focus on Platform Regulation vs. Law Enforcement

Self-generated content requires complex approach considering voluntary versus coercive production

Explanation

Sabine explains that self-generated content is often treated as a homogeneous group that should simply be criminalized, but it’s much more complex from a children’s rights perspective. Some content is produced voluntarily by adolescents above the age of consent, requiring careful consideration of the context in which content is produced.

Evidence

Content produced voluntarily, consensually by adolescents above age of consent requires different approach; context of content production must be considered from children’s rights perspective

Major discussion point

Self-Generated Content and Emerging Threats

Topics

Human rights | Legal and regulatory

Financial sector can play crucial role in flagging suspicious payments related to CSEA

Explanation

Sabine emphasizes that the financial sector can play a crucial role in flagging suspicious payments, especially in commercial live streaming child sexual abuse and exploitation. She argues for mandatory reporting and considering these offenses as predicate offenses under anti-money laundering laws.

Evidence

Financial institutions should report suspicious transactions; considering CSEA as predicate offenses under anti-money laundering laws would oblige suspicious transaction reports; ‘follow the money’ is important lead for law enforcement in organized crime

Major discussion point

Financial Aspects and Commercial Exploitation

Topics

Economic | Legal and regulatory

A

Andrew Kempling

Speech speed

135 words per minute

Speech length

410 words

Speech time

181 seconds

Age verification is important to prevent adults from accessing child accounts and content

Explanation

Andrew argues that age estimation and verification are important tools not just to keep children off adult content, but importantly to keep adults off child sites and from accessing child accounts. He emphasizes that there are privacy-preserving mechanisms available to accomplish this.

Evidence

Estimated 300 million victims of child sexual abuse globally each year (14% of world’s children); privacy-preserving age verification mechanisms exist

Major discussion point

Law Enforcement and Legal Framework Challenges

Topics

Human rights | Legal and regulatory

Disagreed with

Disagreed on

Privacy vs. Safety in Encryption

J

Jutta Croll

Speech speed

174 words per minute

Speech length

230 words

Speech time

79 seconds

Privacy-preserving age verification mechanisms exist without being intrusive

Explanation

Jutta explains that age assurance instruments can prevent adults from joining spaces made for children while being data minimizing and privacy preserving. The Global Age Assurance Standard Summit concluded this can be done without being intrusive or gathering excessive data from children.

Evidence

Global Age Assurance Standard Summit communique on child rights-based approach; working on ISO standard 27566; can provide safe spaces for children without intrusive data gathering

Major discussion point

Privacy and Encryption Considerations

Topics

Human rights | Legal and regulatory

S

Sergio Tavares

Speech speed

131 words per minute

Speech length

184 words

Speech time

84 seconds

Brazil faces growing numbers of reports while resources are declining globally

Explanation

Sergio highlights the dilemma facing countries like Brazil, with almost 200 million internet users (one-third of them children), where live streaming problems are growing and reports are increasing globally, while resources from governments, the private sector, and industry are either declining or remaining at very low levels.

Evidence

Brazil has almost 200 million internet users, one-third of them children under 18; NCMEC and INHOPE numbers show growing reports globally; government and industry resources are declining or stable at low levels

Major discussion point

Regional Perspectives and Challenges

Topics

Development | Human rights

D

Dhanaraj Thakur

Speech speed

162 words per minute

Speech length

1029 words

Speech time

380 seconds

Children should be centered in the design of solutions from technology to criminal justice systems

Explanation

Dhanaraj summarizes that centering children in the design of solutions is a key takeaway from the session, emphasizing that this should apply from the initial design of technologies all the way through to the criminal justice system. This includes centering both their views and their well-being throughout the process.

Evidence

Multiple speakers and audience members highlighted the importance of centering children’s views and well-being in solution design

Major discussion point

Children’s Rights and Participation in Solutions

Topics

Human rights

Agreed with

Agreed on

Children’s voices and participation must be central to solution design

S

Sabrina Vorbau

Speech speed

146 words per minute

Speech length

1884 words

Speech time

769 seconds

Youth participation in development processes and policymaking is crucial for effective solutions

Explanation

Sabrina emphasizes throughout the session that youth participation is crucial in the development process, policymaking, and design of solutions. She highlights the importance of making young people part of the entire process rather than just recipients of adult-designed solutions.

Evidence

Multiple references to youth participation throughout session; emphasis on including youth voices in IGF sessions and putting spotlight on children’s rights

Major discussion point

Children’s Rights and Participation in Solutions

Topics

Human rights

Agreed with

Agreed on

Children’s voices and participation must be central to solution design

A

Audience

Speech speed

148 words per minute

Speech length

1472 words

Speech time

595 seconds

Developing nations need capacity building in law enforcement and general population awareness

Explanation

An audience member from Trinidad and Tobago shared experiences of dealing with a financial sextortion case that led to suicide, highlighting the gap between discussions at international forums and reality in developing nations. They emphasized the need for improved reporting, investigative methods, and capacity building in law enforcement.

Evidence

Personal experience with financial sextortion case leading to suicide; challenges in reporting and investigation in developing nations; need for direct victim reporting to platforms

Major discussion point

Regional Perspectives and Challenges

Topics

Development | Legal and regulatory

Agreed with

Agreed on

Law enforcement capacity and resources are inadequate globally

Digital literacy and media literacy are powerful tools requiring sensitive implementation

Explanation

An audience member emphasized that digital literacy is incredibly powerful but needs to be framed sensitively so people can understand the concepts without breaching appropriate boundaries. They also raised concerns about policies that would bar AI regulation altogether, which could in turn affect CSAM regulation.

Evidence

Need for sensitive framing of literacy initiatives; concerns about policies preventing AI regulation that could affect CSAM regulation

Major discussion point

Education and Prevention Strategies

Topics

Sociocultural | Legal and regulatory

Overexposure online and digital literacy education are growing needs

Explanation

A youth representative from Brazil emphasized the growing need to educate people about the dangers of overexposure online, noting that violence, discrimination, and sexualization of minors have expanded from physical spaces into virtual spaces, requiring urgent action through regulation, policies, education, and digital literacy.

Evidence

Violence and discrimination have expanded from physical to virtual spaces; need for platform regulations, public policies, education, and digital literacy to ensure safe internet for youngest users

Major discussion point

Education and Prevention Strategies

Topics

Sociocultural | Human rights

D

Deborah Vassallo

Speech speed

168 words per minute

Speech length

84 words

Speech time

30 seconds

Online moderation support is essential for managing virtual participation in child safety discussions

Explanation

Deborah serves as the coordinator of the Safer Internet Center in Malta and provides online moderation support for the workshop session. Her role demonstrates the importance of having dedicated online safety expertise to facilitate discussions about child protection in digital spaces.

Evidence

Introduced as coordinator of Safer Internet Center in Malta supporting online moderation for the session

Major discussion point

Cross-Platform Collaboration and Multi-Stakeholder Approaches

Topics

Human rights | Sociocultural

Agreements

Agreement points

Multi-stakeholder collaboration is essential for effective child protection solutions

Better and more granular transparency and information are key for policymakers to be able to react and to understand where targeted policy action and deployment of safeguarding technology are needed

Multi-stakeholder collaboration is essential beyond just cross-platform efforts, including affected communities

Cross-platform efforts create significant risks for human rights and need multi-stakeholder engagement for transparency

Companies should collaborate more with schools on curriculum development rather than just pushing tools

Children should be centered in the design of solutions from technology to criminal justice systems

All speakers agreed that addressing CSAM requires comprehensive multi-stakeholder approaches involving governments, tech companies, civil society, academia, and importantly, children themselves. They emphasized that no single entity can solve this problem alone.

Human rights | Legal and regulatory

Children’s voices and participation must be central to solution design

Children want involvement in designing protection solutions to ensure they are fit for purpose

Tech platforms need standard mechanisms and robust legal frameworks to protect children effectively

Children need better complaint mechanisms with clear understanding of consequences and responses

Children should be centered in the design of solutions from technology to criminal justice systems

Youth participation in development processes and policymaking is crucial for effective solutions

There was strong consensus that children and young people should not be passive recipients of adult-designed solutions but active participants in designing, implementing, and evaluating child protection measures.

Human rights

Cross-platform collaboration is necessary due to the nature of online abuse

Bad actors exploit multiple services across the tech ecosystem, requiring industry collaboration

Cross-platform efforts create significant risks for human rights and need multi-stakeholder engagement for transparency

Multi-stakeholder collaboration is essential beyond just cross-platform efforts, including affected communities

Better transparency is needed regarding how companies identify and remove CSEA content

Speakers agreed that because perpetrators use multiple platforms in their abuse activities, individual companies cannot address the problem in isolation and must collaborate while maintaining appropriate safeguards.

Cybersecurity | Legal and regulatory

Law enforcement capacity and resources are inadequate globally

Law enforcement capacity varies significantly between countries, creating enforcement gaps

Under-resourced law enforcement struggles with identification and prosecution of perpetrators

Laws are not effectively enforced and internet service providers lack accountability

Criminal justice systems lack protective measures and trauma-informed approaches for child victims

Developing nations need capacity building in law enforcement and general population awareness

There was unanimous agreement that law enforcement globally lacks sufficient resources, training, and capacity to effectively address online child sexual exploitation, creating significant gaps in protection.

Legal and regulatory | Development

Transparency from platforms about their practices is insufficient

Companies rarely provide detailed information on moderation practices specific to live streaming

Companies need to share information about reporting process effectiveness and design features

Tech platforms need standard mechanisms and robust legal frameworks to protect children effectively

Speakers agreed that technology platforms provide inadequate transparency about their content moderation practices, detection tools, and reporting mechanisms, making it difficult to assess effectiveness and improve policies.

Legal and regulatory | Human rights

Similar viewpoints

Both speakers emphasized that age assurance is crucial for child protection and that privacy-preserving methods exist, though implementation remains limited across platforms.

Age assurance is a key component but only two of 50 services systematically assure age on account creation

Privacy-preserving age verification mechanisms exist without being intrusive

Human rights | Legal and regulatory

Both speakers recognized the significant financial component of online child exploitation and the important role financial institutions can play in detection and prevention.

Financial extortion is a major component of crimes against children online

Financial sector can play crucial role in flagging suspicious payments related to CSEA

Economic | Legal and regulatory

Both speakers highlighted the complexity of emerging technologies in child exploitation and the need for more sophisticated legal and technical responses to address these evolving threats.

AI-generated synthetic CSAM needs to be separated from real CSAM for effective law enforcement

Technology-facilitated child sexual abuse is not equally criminalized across the world

Cybersecurity | Legal and regulatory

Both emphasized the critical importance of education and prevention strategies, particularly in school settings, as essential components of comprehensive child protection approaches.

Prevention through awareness raising and school materials is crucial alongside enforcement

Digital literacy and media literacy are powerful tools requiring sensitive implementation

Sociocultural | Human rights

Unexpected consensus

Privacy and safety as complementary rather than opposing values

Privacy and safety should be viewed as complementary rather than opposing values

Session metadata and third-party signals can generate risk scores for broadcasts without analyzing actual content

Despite coming from different perspectives (civil society advocacy vs. tech industry), both speakers agreed that privacy and child safety don’t have to be in tension, and that technical solutions can preserve privacy while enhancing protection. This consensus was unexpected given the typical framing of privacy vs. safety debates.

Human rights | Cybersecurity

Self-generated content requires nuanced approaches beyond simple criminalization

One-third of hotline reports involve self-generated content, with 75% being prepubescent children

Self-generated content requires complex approach considering voluntary versus coercive production

Both speakers, from different professional backgrounds (hotline operations vs. legal academia), agreed that self-generated content cannot be addressed through simple criminalization and requires understanding the complex circumstances behind its creation. This nuanced view was unexpected in a field often characterized by zero-tolerance approaches.

Human rights | Legal and regulatory

Global South perspectives are systematically underrepresented in standard-setting

Global South representation is often underrepresented in industry standard-making processes

Africa faces uneven digital literacy and rapid technology growth exposing children to incidents

Cultural contexts must be considered when addressing child safety in different regions

There was unexpected consensus across speakers from different regions and roles about the systematic exclusion of Global South perspectives in developing child protection standards and solutions, highlighting a significant gap in current approaches.

Development | Legal and regulatory

Overall assessment

Summary

The discussion revealed remarkably high levels of consensus across diverse stakeholders on fundamental principles: the need for multi-stakeholder collaboration, centering children’s voices, addressing cross-platform abuse, improving law enforcement capacity, and increasing platform transparency. There was also unexpected agreement on nuanced issues like privacy-safety compatibility and the complexity of self-generated content.

Consensus level

High consensus with significant implications – the broad agreement across civil society, academia, industry, and government representatives suggests strong foundation for coordinated action. However, the consensus also highlighted significant implementation gaps, particularly in law enforcement capacity, Global South representation, and platform accountability. This suggests that while there is agreement on what needs to be done, substantial work remains in building the resources and mechanisms to achieve these shared goals.

Differences

Different viewpoints

Privacy vs. Safety in Encryption

Privacy and safety should be viewed as complementary rather than opposing values

Age verification is important to prevent adults from accessing child accounts and content

Kate argues that privacy and safety are aligned and defends end-to-end encryption as essential for human rights defenders and journalists, while Andrew suggests that privacy is being weaponized as an excuse not to stop CSAM sharing on encrypted platforms and advocates for privacy-preserving techniques to block known CSAM.

Human rights | Cybersecurity

Focus on Platform Regulation vs. Law Enforcement

Criminal justice systems lack protective measures and trauma-informed approaches for child victims

Companies need to share information about reporting process effectiveness and design features

Better transparency is needed regarding how companies identify and remove CSEA content

Sabine argues for equal focus on strengthening criminal justice frameworks as platform regulation, emphasizing that governments over-focus on platform responsibility while neglecting law enforcement capacity. Kate and Lisa focus more on improving platform transparency and accountability mechanisms.

Legal and regulatory | Human rights

Unexpected differences

Self-Generated Content Criminalization Approach

One-third of hotline reports involve self-generated content, with 75% being prepubescent children

Self-generated content requires complex approach considering voluntary versus coercive production

While both speakers acknowledge the serious issue of self-generated content, they have different perspectives on how to address it. Robbert presents alarming statistics about prepubescent self-generation and focuses on prevention through education and addressing root causes like grooming. Sabine argues for a more nuanced legal approach that considers the context and voluntary nature of some adolescent content, challenging the assumption that all self-generated content should be treated the same way legally.

Human rights | Legal and regulatory

Overall assessment

Summary

The main areas of disagreement center around the balance between privacy and safety measures, the appropriate focus between platform regulation versus law enforcement strengthening, implementation approaches for age verification, and the scope of multi-stakeholder collaboration in cross-platform efforts.

Disagreement level

The disagreement level is moderate but significant for policy implications. While speakers largely agree on the severity of the problem and the need for comprehensive solutions, their different emphases on technical solutions versus legal frameworks, privacy preservation versus detection capabilities, and industry-led versus multi-stakeholder approaches could lead to very different policy outcomes. These disagreements reflect broader tensions in internet governance between security, privacy, and human rights that require careful balancing rather than simple resolution.


Takeaways

Key takeaways

Child sexual exploitation and abuse (CSEA) in live streaming contexts requires a multi-layered, technology-neutral approach combining awareness raising, industry action, regulation, and law enforcement

Safety by design principles should be integrated into platforms from the outset, incorporating both front-end prevention and back-end detection technologies while preserving privacy

Cross-platform collaboration is essential as bad actors exploit multiple services across the tech ecosystem, requiring secure signal sharing between companies

Children’s voices and participation must be centered in designing protection solutions, from technology development to criminal justice processes

Self-generated content represents a significant portion of CSEA cases (one-third of reports, with 75% involving prepubescent children), requiring nuanced approaches that consider voluntary versus coercive production

Multi-stakeholder approaches involving civil society, academia, tech industry, law enforcement, and affected communities are crucial for effective solutions

Transparency in platform practices, moderation techniques, and tool effectiveness is essential for policy development and accountability

Age assurance technologies can be implemented in privacy-preserving ways but are currently underutilized across platforms

Regional differences in legal frameworks, enforcement capacity, and cultural contexts require tailored approaches while maintaining international cooperation

Privacy and safety should be viewed as complementary rather than opposing values in developing solutions

Resolutions and action items

Tech Coalition’s Lantern program will publish results of financial payment provider pilot later in summer 2024

OECD will share Financial Action Task Force research on disrupting financial flows related to live stream sexual abuse

Internet Watch Foundation offered to collaborate with tech companies to validate tool effectiveness using their data

Session organizers committed to producing a summary report of the discussion

Participants agreed to continue conversations at the IGF village and through ongoing multi-stakeholder collaboration

Need to strengthen Safer Internet Center networks globally to provide safe spaces for reporting and support

Companies should improve transparency reporting on moderation practices, especially for live streaming content

Development of privacy-preserving age assurance mechanisms should be prioritized across platforms

Unresolved issues

How to address the resource gap where CSEA cases are growing globally while government and industry resources are declining or stable at low levels

Lack of standardized evaluation criteria for assessing effectiveness of age assurance and content detection technologies

Insufficient law enforcement capacity and training, particularly in developing countries, to handle cross-border CSEA cases

Gap between content detection/reporting and actual prosecution, with many cases falling through criminal justice system cracks

How to balance AI regulation policies with the need to address AI-generated synthetic CSAM

Recidivism on platforms where bad actors can easily recreate accounts after being banned

Limited information sharing across platforms and insufficient safeguards against repeat offenders

Trauma-informed approaches needed in criminal justice systems for child victims of technology-facilitated abuse

How to make reporting mechanisms more accessible and understandable for children

Addressing the weaponization of privacy arguments versus legitimate privacy protection needs

Suggested compromises

Using session metadata and behavioral signals rather than content scanning to preserve privacy while detecting abuse

Implementing on-device machine learning for content detection that doesn’t require third-party access to content

Developing content-oblivious rather than content-aware moderation methods for most harmful content

Creating multi-stakeholder standard-setting processes that include Global South representation and human rights perspectives

Balancing platform responsibility with strengthened law enforcement and criminal justice frameworks

Combining industry self-regulation with government oversight through frameworks like the EU Digital Services Act

Using privacy-preserving age assurance technologies that minimize data collection while protecting children

Implementing graduated responses from design-based features to content detection to human oversight

Developing collaborative approaches between tech companies and educational institutions for curriculum development rather than just tool deployment

Thought provoking comments

I think it’s really important to acknowledge up front that protecting children from CSEA should not just be done in a way that protects and promotes their rights. I think it’s really important to acknowledge that protecting children from CSEA is protecting and promoting their rights, most obviously freedom from violence, but also acknowledging that CSEA can infringe upon dignity rights and privacy, and that a safe online space is really important for enabling children to access a large number of rights in today’s reality, such as opinion, assembly, information, or education and health.

Speaker

Lisa Robbins

Reason

This reframes the entire discussion by establishing that child protection isn’t separate from rights promotion but IS rights promotion. It challenges the common framing that positions safety and rights as competing interests.

Impact

This foundational comment set the tone for the entire session, establishing a rights-based framework that other speakers consistently referenced. It prevented the discussion from falling into the typical ‘safety vs. rights’ dichotomy and instead positioned child protection as inherently rights-affirming.

When we look at the victims, for instance, at the numbers of the hotline at Off Limits, of all the reports coming in, one-third of those reports are self-generated, meaning young children making sexualized images themselves… And of that one-third of reports, 75% are prepubescent, so implying very young children. These are not children going to middle school in their teens. These are really young children.

Speaker

Robert Hoving

Reason

This statistic fundamentally challenges assumptions about online child exploitation by revealing that a significant portion involves very young children creating content themselves, suggesting complex issues around grooming, risk behavior, and early exposure rather than just external predation.

Impact

This revelation shifted the conversation from focusing primarily on external threats to recognizing the complexity of self-generated content and the need for different intervention approaches. It influenced subsequent discussions about age verification, education, and the nuanced nature of child-generated material.

So for example, the bad actor might contact a child on a gaming platform, move them to a private messaging platform, and then perhaps use a live streaming platform down the road. So the abuse spans social media, gaming, live streaming, payment apps, and more. But the individual company is obviously unaware of what happened on the other platforms.

Speaker

Sean Litton

Reason

This comment illuminates the sophisticated, multi-platform nature of modern child exploitation, demonstrating why isolated platform responses are insufficient and why cross-platform collaboration is essential.

Impact

This insight fundamentally changed how participants viewed the problem scope, leading to extensive discussion about cross-platform solutions like Project Lantern and the need for industry-wide collaboration rather than individual platform responses.

I think that oftentimes privacy and safety are placed in tension with each other, and I find that framing to be a little bit difficult because I think privacy and safety are very much in line with one another… encrypted services are one of the few and potentially actually only place where they cannot see the content of the communication, and that becomes more and more important as we see the world changing in front of our eyes.

Speaker

Kate Ruane

Reason

This challenges the dominant narrative that encryption and child safety are inherently opposed, arguing instead that privacy technologies can enhance safety for vulnerable populations including children.

Impact

This comment sparked significant debate and pushback from audience members, creating one of the most contentious moments in the discussion. It forced participants to grapple with the complexity of balancing different safety needs and challenged simplistic solutions that would weaken encryption.

I would really like to see the same effort from governments that they at the moment put into platform regulation, they should put into law enforcement and strengthening the criminal justice framework. Because there is a bit of a, I feel like an over-focus at the moment on the responsibility of platforms, while we know that if these cases then really reach the court system, most of them either A, fall through the cracks, or B, the children that are forced to go through the court system leave extremely traumatized.

Speaker

Sabine Witting

Reason

This comment critically examines the policy focus, arguing that the emphasis on platform regulation may be misplaced when the criminal justice system itself is failing children. It highlights systemic gaps in law enforcement and court procedures.

Impact

This shifted the policy discussion from focusing primarily on platform responsibilities to examining the broader ecosystem of child protection, including law enforcement capacity and child-friendly justice procedures. It influenced later discussions about resource allocation and multi-stakeholder approaches.

One of the things they make us understand is that virtually almost every day, there’s something that they encounter in the space… they felt that it’s important for platforms to be faster in identifying incidences of abuse, rescuing young people who fall victim, prosecuting perpetrators, and keeping at the back of designers’ minds that children are using the platform.

Speaker

Aidam Amenyah

Reason

This brings the authentic voice of children from Africa into the discussion, emphasizing that young people want to be involved in designing solutions and that they experience these harms daily, not as rare occurrences.

Impact

This comment reinforced the importance of youth participation throughout the session and provided concrete evidence of the scale and frequency of the problem from children’s perspectives. It influenced discussions about design processes and the need to center children’s voices in solution development.

Overall assessment

These key comments fundamentally shaped the discussion by establishing a rights-based framework, revealing the complexity of modern child exploitation, and challenging conventional wisdom about technology solutions. Lisa Robbins’ opening reframing prevented the session from falling into typical safety-versus-rights debates, while Robert Hoving’s statistics about self-generated content forced participants to confront uncomfortable realities about very young children. Sean Litton’s cross-platform analysis shifted focus from individual platform solutions to ecosystem-wide approaches, leading to substantive discussion about industry collaboration. Kate Ruane’s defense of encryption created the session’s most contentious moment, forcing nuanced consideration of competing safety needs. Sabine Witting’s critique of policy priorities challenged the focus on platform regulation over criminal justice reform. Together, these comments elevated the discussion beyond surface-level solutions to examine systemic issues, power dynamics, and the need for comprehensive, rights-respecting approaches to child protection online.

Follow-up questions

How can we better address the gap between technology solutions discussed at conferences and the reality in developing countries, particularly regarding law enforcement capacity and investigative methods?

Speaker

Shiva Bisasa (Trinidad and Tobago)

Explanation

There’s a significant disconnect between advanced solutions being discussed and the practical challenges faced in developing nations where law enforcement lacks resources, skills, and understanding of current technologies to effectively investigate and prosecute cases.

Does Project Lantern contemplate signals from financial transactions to match perpetrator payments or payments from victims?

Speaker

Shiva Bisasa (Trinidad and Tobago)

Explanation

Understanding how financial transaction data can be integrated into cross-platform detection systems could help identify and disrupt the economic aspects of child exploitation networks.

How can we solve the dilemma of growing CSAM cases worldwide while resources (government, private, industry) are declining or remaining stable at low levels?

Speaker

Sérgio Tavares (SaferNet Brasil)

Explanation

This addresses a fundamental sustainability challenge in combating online child exploitation – the mismatch between increasing problem scale and insufficient resource allocation.

How can we develop clear evaluation criteria to assess the effectiveness and robustness of age assurance technologies, and understand who is adversely impacted by these technologies?

Speaker

Sabine Witting (Leiden University)

Explanation

Current industry standards for age assurance lack transparency and multi-stakeholder input, particularly from Global South perspectives, making it difficult to evaluate their true effectiveness and potential harms.

How can we establish better benchmarks for content detection tools in live streaming, particularly to verify whether marketed technologies actually work effectively?

Speaker

Kate Ruane (Center for Democracy and Technology) and Andrew Kempling (Internet Watch Foundation)

Explanation

Many organizations market live streaming detection technologies without sufficient validation of their effectiveness, and there’s a need for independent testing and benchmarking.

How can we better understand the data sets used to train content detection tools for live streaming, including how data is sourced ethically and whether consent exists for training data use?

Speaker

Kate Ruane (Center for Democracy and Technology)

Explanation

Transparency is needed regarding how AI tools for detecting CSAM in live streams are trained, particularly concerning the ethical sourcing of training data and consent issues.

How can we develop better tools and processes to separate synthetic/AI-generated CSAM from real CSAM to help law enforcement prioritize cases involving actual children in harm?

Speaker

Kate Ruane (Center for Democracy and Technology)

Explanation

As AI-generated content becomes more prevalent, law enforcement needs efficient ways to distinguish between synthetic and real abuse material to focus resources on rescuing actual victims.

How can we frame digital literacy initiatives to be sensitive and understandable while not breaching appropriate boundaries when discussing CSAM with children?

Speaker

Cosima (UK Government)

Explanation

There’s a need to develop age-appropriate educational approaches that inform children about online safety without exposing them to inappropriate content or concepts.

How can we navigate the tension between AI regulation restrictions and the need to regulate AI-generated CSAM?

Speaker

Cosima (UK Government)

Explanation

Some jurisdictions are considering broad restrictions on AI regulation, which could inadvertently prevent regulation of AI-generated child sexual abuse material.

How can we establish a proper official body to unite platforms and law enforcement to address the jurisdictional gaps and resource limitations that allow repeat offenders and crime syndicates to operate?

Speaker

Audience member (Finnish Green Party)

Explanation

Current fragmented approaches between platforms and law enforcement across different jurisdictions create gaps that allow perpetrators to continue operating, suggesting need for better coordinated international response mechanisms.

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Open Forum #53 AI for Sustainable Development Country Insights and Strategies


Session at a glance

Summary

This IGF Open Forum session focused on leveraging artificial intelligence for sustainable development, examining country-level strategies and challenges in achieving the UN Sustainable Development Goals through AI implementation. The discussion was moderated by Yu Ping Chan from UNDP’s Digital AI and Innovation Hub, with panelists representing diverse stakeholders including academia, private sector, civil society, and government perspectives from organizations like Carnegie Endowment, Co-Creation Hub Africa, Intel, and the Indian government.


The panelists identified several key challenges in the current AI landscape for sustainable development. These include significant digital divides between the Global North and South, with Africa accounting for only 0.1% of global computing capacity, and concentration of AI power among a few multinational players. Energy consumption emerged as a critical concern, with modern AI systems requiring enormous computational resources that could undermine climate sustainability goals. The discussion highlighted the tension between AI’s potential benefits and its environmental costs, emphasizing the need for more efficient, locally-relevant AI solutions.


A central theme was the importance of “local AI” – systems developed by and for local communities rather than imposed from external sources. Speakers stressed that effective AI for development must involve affected communities in design and governance, addressing linguistic diversity and cultural contexts. The Indian government representative shared their DPI (Digital Public Infrastructure) model as an example of successful public-private partnership, making AI tools and datasets accessible at low cost while maintaining responsible AI principles.


Funding challenges were extensively discussed, with traditional donor models proving inadequate for the scale needed. New collaborative funding approaches are emerging, but they require more localized, non-extractive models that can become self-sustaining. The session concluded with cautious optimism about AI’s potential for sustainable development, contingent on addressing equity gaps, building local capacity, and ensuring inclusive governance structures.


Keypoints

## Major Discussion Points:


– **AI Equity Gap and Digital Divides**: The discussion extensively covered the growing divide between Global North and Global South in AI access, with Africa accounting for only 0.1% of world computing capacity and significant disparities in funding, infrastructure, and technical capacity.


– **Localization and Community Engagement in AI Development**: Panelists emphasized the critical need for AI solutions to be developed locally by and with communities, rather than being “helicoptered in from afar,” including considerations of linguistic diversity, cultural context, and user-centered design.


– **Sustainable AI vs. AI for Sustainability**: The conversation distinguished between making AI itself more sustainable (addressing energy consumption, environmental impact) and leveraging AI to achieve broader sustainability goals and SDGs.


– **Funding Models and Governance Challenges**: Discussion of how current funding paradigms often don’t align with local needs, the emergence of collaborative pooled funding efforts, and the importance of multi-stakeholder governance approaches in AI ecosystem development.


– **Evidence-Based Implementation and Capacity Building**: Strong emphasis on the need for concrete evidence of AI impact, moving from “hype to hope to truth,” and prioritizing capacity building and skills development as fundamental requirements for inclusive AI adoption.


## Overall Purpose:


The discussion aimed to examine how AI can be leveraged at the country level to achieve sustainable development goals, with particular focus on addressing challenges of bias, exclusion, capacity gaps, and infrastructure limitations. The session sought to translate global AI discussions into practical, in-country impact strategies while fostering international cooperation and multi-stakeholder collaboration.


## Overall Tone:


The discussion began with cautious optimism tempered by realism, as evidenced by the audience’s initial 5.0 rating on AI optimism for sustainable development. The tone remained constructively critical throughout, with panelists acknowledging both significant challenges and promising opportunities. Speakers demonstrated practical experience-based perspectives rather than theoretical enthusiasm, emphasizing the need for patient, evidence-based approaches. The conversation maintained a collaborative, solution-oriented atmosphere while honestly addressing systemic barriers and inequities in the current AI landscape.


Speakers

**Speakers from the provided list:**


– **Yu Ping Chan** – Head of Digital Partnerships and Engagements at the United Nations Development Program’s Digital AI and Innovation Hub; Session moderator


– **Armando Guio Espanol** – Representative of the global network of internet and society centers (Network of Centers); Academic network representative working on AI impact analysis and evidence gathering


– **Oluwaseun Adepoju** – Managing Director of the Co-Creation Hub Africa; Pan-African Innovation Enabler focusing on AI solutions for societal issues


– **Aubra Anthony** – Non-Resident Scholar at the Carnegie Endowment for International Peace; Researcher focusing on AI funding models and governance in the Global South


– **Anshul Sonak** – Principal Engineer and Global Director at Intel Digital Readiness Programs; Private sector representative working on digital readiness and AI capacity building


– **Participant** – (Multiple instances, appears to be the same person) Additional Secretary Abhishek from the Ministry of Electronics and Information Technology of the Government of India; Government representative discussing India’s DPI model and AI initiatives


– **Audience** – (Multiple instances) Various attendees asking questions, including Jasmine Khoo from Hong Kong


**Additional speakers:**


– **Abhishek** – Additional Secretary, Ministry of Electronics and Information Technology, Government of India (mentioned by name in later parts of the transcript, same person as “Participant”)


Full session report

# Leveraging Artificial Intelligence for Sustainable Development: A Multi-Stakeholder Dialogue on Country-Level Strategies


## Executive Summary


This IGF Open Forum session, jointly organized by UNDP, Carnegie Endowment for International Peace, and Co-Creation Hub Africa, examined practical strategies for leveraging artificial intelligence to achieve the UN Sustainable Development Goals. Moderated by Yu Ping Chan from UNDP’s Digital AI and Innovation Hub, the interactive session brought together perspectives from academia, private sector, civil society, and government to address challenges and opportunities in AI for sustainable development.


The discussion revealed significant challenges including digital divides, resource disparities, and energy consumption concerns, while highlighting promising approaches through community-centered development, evidence-based implementation, and innovative funding models. Audience polling showed moderate optimism (5.0 average on a 1-10 scale) with priorities focused on AI regulation, inclusion, and capacity building.


## Opening Context and Audience Engagement


Yu Ping Chan opened by noting that UNDP works in over 170 countries and territories, with 130+ countries engaged on digital transformation and AI for SDGs. The session used interactive Slido polling to gauge audience perspectives, revealing that participants rated their optimism about AI for sustainable development at an average of 5.0 out of 10, with many rating it at 3, indicating cautious optimism.


Priority polling showed audience focus on AI regulation and governance, inclusion and equity, and capacity building as top concerns. The interactive format was designed to complement another IGF session on international cooperation for AI.


## Evidence-Based Approaches and Information Gaps


Armando Guio Español from the Network of Centers emphasized the critical need for evidence-based decision making in AI development, highlighting significant information asymmetries between stakeholders. He noted that while there is considerable discussion about AI’s potential impact on employment, “Instead of replacement of jobs, for example, what we are seeing right now is augmentation, actually improvement in the work some workers around the world are developing.”


Guio Español stressed the importance of rigorous analysis to understand what AI technologies can actually deliver versus theoretical promises, advocating for methodological approaches developed with MIT colleagues to bridge the gap between AI enthusiasm and practical implementation realities.


## Global AI Equity and Resource Disparities


Aubra Anthony from the Carnegie Endowment for International Peace provided stark evidence of global AI inequities, revealing that “Africa currently accounts for only 0.1% of the world’s computing capacity, and just 5% of the AI talent in Africa has access to the compute power it needs.” This data highlighted how AI development is concentrated among a few multinational players, creating systemic barriers for Global South countries.


However, Anthony reframed these constraints as potential innovation opportunities, suggesting that resource limitations could drive more efficient and locally relevant AI solutions. She advocated for “collaborative pooled funding efforts” and emphasized that AI development must be “non-extractive, self-sustaining, and involve communities impacted by AI.”


## Local AI Development and Community Engagement


Oluwaseun Adepoju from Co-Creation Hub Africa introduced the concept of transitioning “gradually from hype to hope” in AI development, acknowledging that “before we transition from hope to truth, we’re going to make a lot of mistakes, we’re going to have a lot of losses, and we’re also going to see a lot of success at the end of the day.”


Adepoju emphasized that “AI is local and must be built by local people,” explaining that effective local AI development requires substantial community engagement: “Forced to build anything in like a six month project, we spent the first two or three months just engaging the people, co-creating the problem statement with them.” He used the example of plantain classification to illustrate how local context shapes AI applications, noting that what constitutes “ripe” plantain varies significantly across different communities and use cases.


## India’s Digital Public Infrastructure Model


Abhishek, Additional Secretary from India’s Ministry of Electronics and Information Technology, presented India’s Digital Public Infrastructure (DPI) approach to AI development. He explained how India is applying successful DPI principles to AI, providing basic building blocks including compute access, datasets, and testing tools.


The Indian model includes making 35,000 GPUs available at low cost ($1 per GPU per hour) for AI developers, startups, researchers, and students. Abhishek emphasized that this infrastructure-first approach focuses on solving practical problems such as healthcare diagnosis, education personalization, and agricultural advisory services, ensuring AI applications address real societal needs.


India committed to making AI-based healthcare applications available through an AI use case repository for Global South countries, particularly Africa, at the upcoming AI Impact Summit in February.


## Environmental Sustainability and Energy Considerations


A crucial dimension addressed energy consumption concerns. Abhishek noted that “when we build compute systems for AI applications and models, the amount of energy that is needed for powering these systems is very, very high,” mentioning that an H200 GPU consumes power equivalent to one U.S. home. He emphasized the need to balance AI productivity gains with renewable energy objectives and carbon footprint reduction.


This environmental perspective introduced important constraints often overlooked in discussions of AI’s potential benefits, with Abhishek suggesting the need to “prioritise which are the tasks which AI should do, which are the tasks that AI need not do.”


## Personal Productivity and Skills Development


Anshul Sonak from Intel highlighted research showing AI can save “15 hours per week” in personal productivity tasks, emphasizing that “bringing AI skills to everyone should be a national priority.” He advocated for systematic approaches to capacity building that extend beyond technical training to include diverse expertise areas including AI ethicists and security experts.


Sonak stressed the importance of sustainable AI development and multi-stakeholder education approaches that recognize successful AI implementation requires diverse skills and perspectives rather than purely technical capabilities.


## Funding Models and Commercial Viability


The discussion revealed tensions around funding approaches. Anthony identified that “historical donor-led funding approaches are insufficient” and advocated for collaborative pooled funding that better aligns with local needs. Adepoju supported a “patient capital approach needed to avoid commercialisation pressure that compromises safety and equity,” suggesting AI innovations should have at least one year to prove safety and equity before facing commercialization pressure.


However, the Indian government representative emphasized that “AI applications must address real problem statements to be commercially viable and publicly fundable,” highlighting ongoing debates about balancing innovation patience with practical sustainability requirements.


## UNDP’s AI Hub Initiative


Chan announced that UNDP launched an AI Hub for Sustainable Development earlier this month with Italy as part of the G7 Presidency, aimed at accelerating AI adoption in Africa through international cooperation and resource sharing. This represents institutional commitment to supporting AI development in regions with limited resources.


## Audience Questions and Measurement Challenges


Audience questions highlighted ongoing challenges in measuring and tracking performance of locally-built AI systems across different contexts. Questions about efficient measurement approaches and performance tracking indicate gaps in developing appropriate evaluation frameworks for diverse AI applications.


The discussion of different levels of “local” in AI development – from language and cultural adaptation to local problem-solving – revealed the complexity of implementing truly community-centered AI solutions.


## Key Commitments and Next Steps


Several concrete commitments emerged:


– India’s continued provision of low-cost GPU access and healthcare AI applications for Global South countries


– Carnegie Endowment’s commitment to publishing research on funding needs and market inefficiencies in AI ecosystem development


– Co-Creation Hub’s maintenance of patient capital approaches prioritizing safety and equity


– UNDP’s AI Hub for Sustainable Development to accelerate AI adoption in Africa


## Conclusion


The session demonstrated a pragmatic approach to AI for sustainable development that moves beyond theoretical discussions toward evidence-based implementation strategies. The emphasis on community engagement, local ownership, and environmental sustainability indicates growing sophistication in addressing AI development challenges.


The moderate optimism reflected in audience polling, combined with focus on regulation, inclusion, and capacity building, suggests recognition that realizing AI’s potential for sustainable development requires addressing fundamental structural challenges around access, governance, and resource distribution.


The combination of immediate commitments (GPU access, healthcare applications) with longer-term research initiatives (funding models, measurement frameworks) provides a balanced approach addressing both urgent needs and systemic challenges. The session’s interactive format and inclusion of diverse global perspectives, particularly from the Global South, offers a model for future AI governance discussions that prioritize community needs alongside technological advancement.


Session transcript

Yu Ping Chan: Good afternoon, everyone, and welcome to IGF Open Forum session on AI for Sustainable Development, Country Insights and Strategies. Thank you also to everyone who’s joining us online from around the world. Good morning, good afternoon, good night, wherever you are. My name is Yuping Chan, I’m Head of Digital Partnerships and Engagements at the United Nations Development Program’s Digital AI and Innovation Hub. I’ll be moderating today’s session. And we’re very pleased to organize today’s session with the Carnegie Endowment for International Peace and the Co-Creation Hub Africa. Some of you might have been just at the session over in the other room about the international cooperation and the importance of international cooperation for AI. And here we’re proud to complement that with really an in-depth look at what it means to advance AI to leverage, to achieve the sustainable development goals on a country level, really turning global discussions into in-country impact while examining the challenges from bias and exclusion to capacity and infrastructure. So to kick everything off, and perhaps this session will be a little bit different from what you’ve experienced before, is to have a little bit more of an interactive flow with members of the audience. And this is where we wanted to use Slido to really take the pulse of the conversations in the room and what you yourself are thinking. So I’d like to first start by inviting everyone, including our online audience, to engage through Slido. And this is where we have our UNDP colleagues moderating online as well. So I also encourage our online audience to participate in the discussions. So the first question that I want to ask everyone to answer via your phones and devices is on screen right now. And you can scan the QR code and enter your answer to this question of what topic or theme would you like to hear about in this particular discussion? 
And this will be a chance for our speakers also to reflect on the results and prepare your answers so that we can really try and interact with the audience here. And so as we’re doing that and asking all of you to write back, to put in your responses via the Slido QR code. So scan the QR code, drop a word or phrase, there can be multiple words or phrases, and we’ll see the responses come on screen. And as you’re doing that, I also wanted to emphasize why from the United Nations Development Program, this particular issue is so important. The question of how we leverage AI at the country level to achieve sustainable development. We are present in over 170 countries and territories around the world. We work with more than 130 countries now to leverage digital and AI to achieve the sustainable development goals. And while we’re tremendously optimistic about the role that AI can play in supporting sustainable development, we’re also conscious of the significant AI equity gap that exists between the global south and the global north. And the question of how many people will potentially be left behind in this global AI revolution. And so we’re working to close this equity gap in a number of ways that I mentioned before. And a lot of our speakers today are working in this very practical area as well. How do we leverage AI in novel and exciting ways to achieve sustainable development? So very quickly, a few more minutes to fill in your responses, especially also those online. While I introduce the panel as well. On site, we have Mr. Oluwaseun Adepoju, Managing Director of the Co-Creation Hub Africa. Online we have Aubra Anthony, Non-Resident Scholar at the Carnegie Endowment for International Peace. Armando Guio-Español, Network of Centers. And Anshul Sonak, who is Principal Engineer and Global Director at the Intel Digital Readiness Programs. So it’s really a multi-stakeholder panel reflecting the best of the IGF.
The idea that we both have government representatives, people from the international organizations, the technical sector, civil society, and really looking to see how we collectively as a community can come together. We also would have a representative from the Indian government, Additional Secretary Abhishek. But I believe he is actually in another session and hopefully will come over shortly soon. So very quickly now, we have reflected on this first question. And here I think we see a number of results where we have, for instance, the three keywords of AI regulation, inclusion, capacity building. And I believe the last one is media, AI in the media. And I also see a very interesting number, 2947217, which is possibly not a response. But you can see a little bit of the scale of the challenges, I think, that really are confronting us today with AI. But I’d also like the speakers perhaps to reflect on the areas that are highlighted in green because these seem to be top of mind among our audience, both online and in screen. And so thank you for those reflections, inclusion, AI regulation, and AI media. Let’s also take another second question. On a scale of one to 10, I ask the audience, how optimistic are you that AI can accelerate inclusive sustainable development within the next five years? One means very pessimistic, and 10 means very optimistic. Okay, we’re going towards the rather negative field. I see a lot of threes. It’s a bit of six. And overall, the score seems to be 5.0, very evenly split in the room, with a rather strong emphasis on number three, which is quite negative. So I think this is actually particularly interesting. We’ve actually done a survey at UNDP through our human development report, which shows that overall, most people tend to be optimistic. So I’m wondering whether maybe it’s the IGF community that slants us a little bit more towards the conservative side. 
But that’s an interesting reflection, that overall, we are in the middle in terms of optimism about AI and its potential to accelerate inclusive sustainable development. And I think perhaps that points to some of the challenges that we collectively are trying to address. All right, thank you for your responses on that. I’m going to start then to ask, again, the panel to reflect on perhaps what they’ve seen from the audience, and maybe think about those in your responses to some of the questions. And again, we’re going to have an opportunity for the audience to also come in as well to keep this a little bit more interactive. So let’s start a little bit now by going to our distinguished panelists. And we’ll start with, I believe, Armando. And so the question is really in terms of setting the scene. And I will ask this of all our panelists at the same time. How do you see the current landscape of leveraging AI for sustainable development?


Armando Guio Espanol: Yeah, well, thank you very much for this invitation to the UNDP, of course. I think that it’s, I really like the exercise of starting with these questions. I had been reflecting on this question, as you shared with us, with some time for preparation. And definitely, I think from the experience, so I’m here also representing the global network of internet and society centers. We are a network, an academic network of 130 centers around the world. And we have been basically working on this topic and on these issues. And what we are trying to do, it’s, of course, we are, the first thing is that we are working on bringing more evidence about the impact of AI and what AI is really achieving, where are we, where are we standing, and basically try to navigate all this immense, big amount of information that we are processing. Decision makers are getting into a lot of information, evidence about the work that is going on right now, the kind of technologies available. So we want to really help decision makers, policy makers, and of course, colleagues around the world to look into the kind of technologies that there are, the real features that we have, and the real impact of this technology. That’s the other thing that we really want to have access to good evidence of where is the impact and what’s specifically the main issues related with AI. One of the things we have been going through is, for example, we are measuring the impact of AI in the future works, and we have been working with colleagues around the world, especially now with colleagues at MIT, and we have been developing this idea of an epoch or an analysis in order to analyze the real impact of AI, for example, in specific areas. What we are seeing right now is that instead of replacement of jobs, for example, what we are seeing right now is augmentation, actually improvement in the work some workers around the world are developing, and actually AI being helpful in that sense. 
So this is just an example of how we need to gather this kind of evidence, and these kinds of methodologies and analyses, in order to make good decisions. That’s why I think we can achieve a sustainable use of this technology and, at the same time, sustainable development: because basically what we are doing is trying to really understand what the technology is doing, and in that sense we have to reduce the big information asymmetries that we have right now. I think it’s good that we center ourselves on measuring the risks of AI; that’s also going to be extremely helpful for some of the conversations we’re having on AI governance and AI regulation, as has been mentioned, but we definitely need good evidence for that process to take place in a way that is really going to be helpful for many countries. And perhaps the last point in this first remark: we are seeing a lot of efficiencies being gained and a lot of benefits from the use of the technology. That’s something we really need to highlight. Of course, there are cases in which the technology is not being used for the best purposes, but we also see real benefits, and we want countries from the majority world, global South countries, to understand this and have enough elements to determine how to better use and deploy these technologies in their societies. So that’s the kind of work we’re trying to do: building capacity by building evidence, taking this evidence to decision makers, promoting local research in many regions around the world, and providing collaborations in that sense. Hopefully that’s a good first glance of the kind of work ahead and some of the challenges we are working on right now. Thank you.


Yu Ping Chan: Thank you, Armando. I think that landscape of knowing what the challenges are and really having the kind of information that we have to make sure that there are informed decisions about the use of AI is particularly important. Let me go to Aubra now and ask her how she sees the current landscape of leveraging AI for sustainable development.


Aubra Anthony: Thanks so much, Yuping, and thanks to UNDP broadly and to my fellow panelists. It’s an important discussion, and I’m looking forward to diving in with you all and with those in the audience as well. So, Yuping, you asked about the current landscape. The way I see it, it is both promising and fraught, for a few different reasons. The first is that with AI, the risk we face in the context of the SDGs and inclusion has to do with digital divides that have been longstanding for many years, and with AI we see the risk that those digital divides become more calcified. In the context of tech broadly, but also specifically with AI, power is becoming incredibly concentrated, with just a few multinational players dominating the discourse, the priority setting, and the types of business models that end up getting pushed out. Often, these business models aren’t serving the populations that are most in question when we consider how we achieve the SDGs. The notion that bigger is better, and a lot of these themes and narratives, end up not really serving our priorities for the SDGs. The concern is that the broad trend lines show a continued entrenchment of that concentration, so that field-shaping, really consequential decisions continue to be made in ways that benefit those who are already benefiting the most from AI, both financially and, as Armando said, in terms of information asymmetries, while the resources needed to disrupt that are globally very scarce. Just as an example, Africa currently accounts for only 0.1% of the world’s computing capacity, and as a result just 5% of the AI talent in Africa has access to the compute power it needs.
Beyond that, on the data front, even though something like 2,000 of the world’s 7,000 languages are spoken on the continent, those languages are considered under-resourced in the context of NLP, because there just isn’t, or historically hasn’t been, enough digital data on them to train LLMs. These issues of inclusion crop up when you think about the way that concentration is affecting access globally. But there’s also opportunity there if you flip it on its head: because of those constraints, we’ve seen some really amazing innovations emerge around building AI that’s more robust and less compute-intensive or less energy-intensive, with the development of so-called small language models and things like this. These are innovations that are better suited to the challenges and constraints at hand. Many firms have managed to do really groundbreaking work in light of those limitations, and in doing so they offer a fantastic alternative to the brute-force, bigger-is-better ethos that’s been dominating the AI playing field. Firms like Lelapa AI, for example, have developed a small language model, InkubaLM, that serves hundreds of millions of low-resource-language speakers. So there are promising signals as well as the more pessimism-inducing ones; in line with the Slido results, I think we see both sides of the spectrum coming up. Just very quickly, a couple of other points are worth highlighting in terms of the landscape of AI. There’s a sense of perceived urgency and a mentality of catch-up among many countries: if you don’t catch up, you’ll be left behind. This is very much tied to the digital divide. It’s a growing concern, especially in the context of Africa, which is where we’ve been focusing a lot of our research over the last several months. Some projections show that GDP growth attributed to AI may be 10 times lower or more in Africa than the AI-fueled growth elsewhere.
That really creates this sense of urgency. It’s not just keeping up with your neighbors, keeping up with the Joneses; it’s often coming from the perception that AI can serve as an accelerant of much-needed economic development. Of course, that’s good, but the flip side, and this is an important thing for us to discuss as a community, is that it’s tough right now to create the space needed to see AI as just one tool in the toolbox, in the arsenal of tools we have available to apply to what are often very systemic, politically and socially rooted issues: reduction of poverty, gender inequality, climate change. AI is one tool in the toolbox, and while that sense of urgency can help drive the conversation about how we leverage those tools to suit our needs, it also risks forcing us to adopt a solution that may not always match the problem. As Armando pointed out, we need the evidence that helps us decide when AI is the right tool for an issue. So I think that’s one of the current challenges we face. I have a third point that I’ll talk about more later when we have a bit more time, and it’s really around funding: a lot of the issues we see right now have to do with disparities in funding. With the diminishment of U.S. foreign assistance, and with others’ foreign-assistance profiles becoming smaller, there’s an additional urgency around how we address some of these problems. But in the interest of time, I’ll leave it there, and we can talk more about that later.


Yu Ping Chan: Actually, I do think that question about funding will be an interesting one, and it will also be good to hear from voices in the room as to what they feel about this particular moment, because I do think there is that moment of urgency. We couldn’t agree more with the point about the importance of bringing in the global South and focusing on Africa. This is why, for instance, UNDP just launched last week in Rome the AI Hub for Sustainable Development, together with the Italians as part of the G7 Presidency, which is really focused on accelerating AI adoption in Africa and on empowering and strengthening local AI ecosystems there. So, on the point of Africa, let me turn to Oluwaseun, who is here with us in the room. In your work at the Co-Creation Hub, what do you see as the current landscape of leveraging AI for sustainable development, and the challenges there?


Oluwaseun Adepoju: Thank you so much, and thank you to the panellists who have spoken before me. They’ve raised a lot of important points, but practically, from the work that we do and what we observe every day, I want to point to four major things. Number one: when we talk about AI for sustainable development and the excitement that comes with the potential and the opportunities AI brings to society, we usually don’t also talk about the balance we need to create for the unintended consequences of artificial intelligence. In the work that we do as a pan-African innovation enabler, we’ve seen how, as the models get smarter and bigger and big data becomes easier to process, there’s also heavy consumption of energy. In some of our recent work, we’ve been benchmarking what led to the transition from proof-of-work to proof-of-stake in blockchain, and that could point to a future iteration for addressing some of the unintended consequences of AI when it comes to energy as well. The number two trend we are seeing is the transition of artificial intelligence from the stage of hype to the stage of hope. I believe every new technology goes through three stages. First is the stage of hype, where there’s a fear of missing out, everybody’s dropping investment, and everybody’s talking about it; I think Aubra mentioned the pressure on countries and organisations not to miss out in the AI race. When we use the word race for a new technology, it’s very obvious that everybody wants to take part and nobody wants to come last to the table. But we’ve seen that we’re transitioning gradually from hype to hope.
We can see use cases that we can point to, and that is driving confidence in a lot of ways. I also believe we’re going to get to the third stage, the stage of truth, but before we transition from hope to truth, we’re going to make a lot of mistakes, we’re going to have a lot of losses, and we’re also going to see a lot of success at the end of the day. The stage we are at now requires a lot of intentionality in the way we innovate with the technology, and also a multi-stakeholder approach to building AI solutions. We’ve seen that a lot of people are technologically excited about AI, but we have a lot of work to do in education. Some of the work we do in education requires that we bring at least 32 types of professionals into the room, especially when we’re building AI for EdTech, and especially solutions that include children. To build a really useful EdTech solution for children in Africa, for example, we need a lot of people in the room: AI ethicists or technology ethicists, safeguarding professionals, and people who can look at the technology stacks in terms of digital security as well. That multistakeholder approach, we’ve started seeing it especially in countries like Nigeria and Rwanda, where it’s no longer just about the technical people, and we’ve started seeing it in the world. And then, finally, linguistic equity. For AI, we see people across two classifications, the technology optimists and the technology skeptics, and some skeptics believe that linguistic equity in artificial intelligence is just talk and not concrete. So we have a lot of work to do in that area.
Linguistic equity is very important, and we’ve started seeing some work in that area. Aubra mentioned people building small language models, and this is because we need linguistic equity to build stacks for some of the languages that are missing. For us, we do a lot of benchmarking and testing of some of the large language models. So there’s a lot of work on evidence, and on ensuring that AI is, first of all, local, and that it builds features that help people benefit from the technological dividends at the lowest level possible.


Yu Ping Chan: Again, this is where we as UNDP also see that priority in the areas you mentioned. For instance, when it comes to linguistic diversity, we’ve been working in countries such as Ghana around low-resource languages, looking at how we can digitalise those languages to create precisely what you said: the kind of inclusive models that can then serve the engine of AI. On the multi-stakeholder element, which again is something that unites us all at the IGF and that you mentioned, I’m glad we can now turn to Anshul, who is part of Intel and the work Intel is doing to build digital readiness and capacity. Anshul, a few reflections from you, perhaps, on what the needs are and on the current situation with regards to AI for sustainable development.


Anshul Sonak: Thanks, Yuping. Good morning. Calling from Silicon Valley, this is a very interesting conversation; it’s 6 a.m. in the morning for me, but I appreciate being here. I’m really hearing all the comments by my fellow panellists and 100 per cent agree with everything. As a rural development professional coming from a rural background, I think AI is a great opportunity, so I really appreciate all the comments made by fellow panellists. From my standpoint, I see two big strands emerging in this space. One is sustainable AI itself: how do we make AI more local, more clean, more green, more safe, more private, more fast, more cheap? That’s all about the AI technology itself, and that’s one conversation where we need to be paying attention at all levels. The other one, probably more relevant to this audience, is: what can AI do for larger sustainability? That’s not a technology conversation; it’s truly a developmental conversation. What does AI for sustainability truly mean? Research shows enough here: there was a Nature study two years back finding that AI could act as an enabler for more than 79 per cent of SDG targets if used responsibly and appropriately. So this is a big opportunity and a big challenge; that’s the true reflection. On the opportunity side, it can be a potential big equaliser. It’s truly a new electricity: everything can change once electricity comes into your home. Just from a personal productivity standpoint, if you really use AI appropriately, we did a piece of research recently which shows that you can save roughly 15 hours of your time every week, and then you can figure out how to use that time more responsibly for more value creation for yourself and your life. That’s the opportunity side. On the challenges side, of course, we heard about the AI divide argument. Not just a technology divide, but much bigger asymmetries are becoming quite clear: there’s a gender divide, a racial divide, a colour divide, a country divide. So there are many issues emerging. If we want to be a truly inclusive and responsible society, we really need to have this conversation on how we bring it together and create some kind of equalisation for long-term sustainability. So there are opportunities and challenges in AI for sustainability itself, and there has to be a separate conversation on how to make AI itself more sustainable. Those are my two big reflections, and I’m happy to be in this conversation.


Yu Ping Chan: Thank you so much, Anshul. Let me now turn to Abhishek, from the Ministry of Electronics and Information Technology of the Government of India. I think it might be a good time to come back to that slide we had at the start, where we polled both the online participants and those in the room to ask what was top of mind for them in this conversation. The areas in green were what came up as the areas they would like to hear about when it comes to AI for sustainable development. So, Abhishek, I turn to you for a quick response on these and on what you see as the state of the field when it comes to leveraging AI for sustainable development.


Participant: So, from what we see in the responses, of course regulation, inclusion, and capacity building are very, very important. But when we look at sustainability issues for AI, I would say that energy use, especially with regard to renewable energy, is very, very important. When we build compute systems for AI applications and models, the amount of energy needed to power these systems is very, very high, to the extent that now we are going to Blackwells and B200s, which are even more energy-intensive; I was told that even an H200 consumes power equivalent to one U.S. home. So when we are building applications, when we are trying to save time, as Anshul was mentioning, when we are trying to push the technology, push on productivity and on benefits in various sectors, we also have to see where we balance the SDG objectives for renewable energy and climate with more efficient computing systems. At some level, we will have to see that the costs that come in because of high energy usage do not outweigh the benefits of AI applications and models. This will require extensive research in building low-energy systems. This will involve more investment in renewable energies. This will involve limiting the use of AI for non-essential functions, things that humans can do better. Why do we need to rely on AI for doing the same things? We have found people using it for very simple tasks, like writing poems or summarising text.
We need to prioritise which tasks AI should do and which tasks AI need not do. How do we limit the energy consumption for powering AI systems? How do we prioritise usage of AI? How do we not ignore the challenges that climate change poses? How do we reduce our carbon footprint? These issues are, I think, as important as the issues related to AI regulation, inclusion, and building capacities.
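[Editor’s note: the “one GPU equals one U.S. home” comparison above can be sanity-checked with a rough back-of-envelope calculation. The figures below are assumptions for illustration, not from the session: an NVIDIA H200 board power of roughly 700 W, and an average U.S. household consumption of roughly 10,500 kWh of electricity per year.]

```python
# Back-of-envelope check of the "one GPU ~ one U.S. home" comparison.
# Assumed figures (illustrative, not from the transcript):
#   - H200 board power: ~700 W (0.7 kW)
#   - Average U.S. household: ~10,500 kWh of electricity per year
GPU_TDP_KW = 0.7            # assumed GPU board power, kilowatts
HOURS_PER_YEAR = 24 * 365   # continuous, year-round operation

gpu_kwh_per_year = GPU_TDP_KW * HOURS_PER_YEAR     # energy if run 24/7
home_kwh_per_year = 10_500                          # assumed household use

print(f"GPU running 24/7: {gpu_kwh_per_year:.0f} kWh/yr")
print(f"Fraction of one home's annual use: {gpu_kwh_per_year / home_kwh_per_year:.2f}")
```

Under these assumptions, a single GPU run continuously lands in the same order of magnitude as one household’s annual electricity use, which is the spirit of the speaker’s comparison; clusters of thousands of such GPUs scale the footprint accordingly.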


Yu Ping Chan: Thank you so much, Abhishek. We’re now going to turn back to the panel and ask all of you to reflect a little on what you’ve heard from fellow panellists, as well as on the responses of the audience on the screen, and link that to a small tailored question based on your areas of expertise. I’ll start with Aubra, I think, and pick up on a point you raised just now, which a couple of other colleagues have since picked up as well: that question of funding and governance. I find it particularly interesting because India, for instance, will be chairing the next AI Action Summit. So, on governance and funding: how do you think funding models are shaping national AI ambitions in the global majority? How do you think that’s going to play out, and how can we as an international community address some of the challenges there?


Aubra Anthony: Yeah, thanks, Yuping. And, yeah, a very auspicious time, really. I mentioned earlier some of the issues that I think we’re all tracking: US foreign assistance has been effectively shuttered, and many of the largest bilateral donor governments, NGOs, and philanthropies have also moved to shift away from historical levels of foreign assistance. So right now it’s unfortunately a pretty precarious time for funding, not just AI applications, but the necessary components of AI ecosystems globally: the fundamental ecosystem strengthening that needs to be in place for AI to be leveraged in a responsible, sustainable way by locally impacted actors, as Oluwaseun mentioned. The enablers that are really key to having an ecosystem thrive are things like compute, interoperable privacy-preserving data systems, and the in-country talent and capacity needed to design these systems, and a lot of that may fall more into the realm of DPI than AI uniquely.
But I think that’s absolutely part of this conversation. Even with those trends, there’s also a very strong growing recognition that, given the scale and the scope of the need, supporting ecosystem development through these historical donor-led, siloed, uncoordinated investments really leads to a sum that’s far less than the addition of its parts. Because of that, there’s been an increase in recent years in more collaborative pooled funding efforts. We’ve seen this with the AI4D Funders Collaborative that was launched in 2023 at Lesley Park, the Current AI public-interest AI initiative that was launched earlier this year at the Paris AI Action Summit, and, Yuping, you mentioned the UNDP and Italian government’s launch of the AI Hub earlier this month or last month, which is very exciting. And then we’ll see what comes from the Indian summit next year. There are a lot of different efforts trying to meet the moment and hopefully moving us in a better direction. So part of the landscape, and part of this broader conversation, needs to recognize that these larger, more multilateral, more multi-stakeholder funding initiatives, which honestly can better address the financial scale of the challenge, are emerging. But they’re also introducing new complexities and challenges for those who have to navigate them, whether that’s governments or practitioners; the people doing AI ecosystem strengthening are having to navigate a lot of different trend lines. And I’m going to say something that I think Oluwaseun mentioned earlier and that we all hopefully at this point agree on; if we don’t, I would love to get into more discussion here. The assertion I want to make is that for AI to really deliver for the SDGs and for sustainable development broadly, it cannot be something that is helicoptered in from afar.
Its development and deployment have to involve the communities impacted by AI. Its governance has to involve the communities who are impacted by AI. And the problem is that, critically, the funding paradigm historically has really not aligned with that; it’s really been more about AI that’s produced elsewhere reaching foreign shores. So the way we see this shaking out is going to have a fundamental effect on whether we can actually achieve this goal that I think we all share of better leveraging AI for sustainable development. There’s been a really solid movement in the ICT community over the last several years, with the Principles for Digital Development and a recognition that the funding paradigm needs to shift, that it needs to be more localized, and that we need to better appreciate all of that. At Carnegie, we’ve been doing a lot of research on this, trying to understand where the funding needs are best matched by what’s on supply and where there are divergences, where market inefficiencies are coming up. So just very quickly, I’ll share at a high level some of the things we found through the interviews and consultations we’ve been doing. Oh, sorry. Yes?


Yu Ping Chan: Yeah, I just want to give enough time to all the panelists and hopefully have some questions from the floor. So if you don’t mind, I think the Carnegie research actually sounds very interesting. And perhaps you could share some links in the chat for everybody to consult, if you don’t mind. Absolutely.


Aubra Anthony: So I was going to say we’re going to be publishing this soon, so hopefully everyone can see it, but I can give you the three key takeaways from the research and discussions we’ve had around funding, and we can get into more detail in the chat. First, funding must be structured to be non-extractive, and I think this is a key thing that’s come up in other comments as well. Second, it must be capable of becoming self-sustaining at some point, even if it’s not at the outset: if donors are coming in to fund, there needs to be some path towards sustainability in the long term, whether that’s through engagement with the commercial sector or otherwise. And lastly, and again, happy to share more links on this, we really need to ensure that the way funding is structured and supports ecosystem development plays to different stakeholders’ strengths, both in terms of what’s being brought to the table and in terms of how risk aversion factors in. Those are really big issues that often go unappreciated, especially when you have a lot of different stakeholders coming together, which is what’s critical here. Again, in the interest of time, I’ll stop there. But thanks so much for that.


Yu Ping Chan: Thank you, Aubra. And actually, that is a nice segue to Oluwaseun, who mentioned, for instance, the role of entrepreneurial ecosystems. That’s where your work at the AI hub, working directly with tech innovators, is relevant. How are you supporting that work, and how would you respond to some of the areas of challenge that have already been highlighted so far?


Oluwaseun Adepoju: Thank you so much. Quickly: I mentioned earlier that there was hype around AI in the early days, and if you look at the amount of money that has gone into supporting AI innovators in Africa, we’ve not gotten even 2% of that investment. In the early days, all the innovators were just using ChatGPT to do summarization and putting a label on it that they were building AI companies. In recent times, we’ve seen that you can no longer do that, and one of the strategies we used is to say: you are not an AI company just by name; what are the core societal issues that you’re using AI to solve? Recently, we’ve been supporting innovators integrating artificial intelligence and DPI, because I think we need to start connecting artificial intelligence to core societal issues. There’s also the situation of trying to fix what is not broken with artificial intelligence. If, for example, a state in Nigeria has a social register that it uses in distributing farm produce or seed inputs to farmers, but the state has been struggling to identify the trends of who they are giving the seeds to, and what the trend looks like in terms of output, that’s an instance where artificial intelligence can deliver real output, rather than building something that we are all excited about when you present it but that, when you look at the practicalities of the application, is not really solving any issue. So for us, we are very intentional about AI solutions that are connected to core societal issues. When you need to traverse the Maslow hierarchy of needs, you will not waste the little investment we have on white-label AI projects. We want to see use cases.
With some of the monetary commitments that come to us as an organization to support innovators building AI solutions, we are not in a hurry, even though there is pressure to quickly demonstrate that something works. We are not in a hurry. If we are unable to get 10 use cases, we are fine; if we find two or three that work and solve practical societal issues, we are good with that. There’s also the pressure that you need to commercialize quickly, but it takes a while to demonstrate good use cases of artificial intelligence. That’s why in some of the work we do now, we base it on public value theory first of all, and do not support the startups on the commercialization side first, because once commercialization pressure comes, you begin to compromise on the safety of what you’re building. So we usually give ourselves at least one year for you to prove that what you’re working on is good, solves a problem, is safe, and is equitable in many ways; then you can start talking about your commercialization trail. So we usually work with patient capital now, because innovation that needs patience must not be brought under the pressure of commercialization first. I think that’s what some of the big techs are compromising on in some of the conversations around safety, around equity, and around the good use of people’s data. There are so many tools we’ve been experimenting with where we know that, fundamentally, the way the data is scraped to fit those models is ethically wrong, and we must not repeat that. That’s why, for us, AI is local and must be built by local people. And lastly, I’m very happy that we saw capacity building among the responses from people.
For us, we’re running a year-long AI for Business Masterclass, where every month we gather business people from all across Africa, representing 33 countries, and have conversations for them to really understand artificial intelligence. Sometimes, when society doesn’t understand a technology, we launch a very complex socio-technical solution on people who don’t know how to use it in the first place. When people understand, they’re able to contribute meaningfully and to use it meaningfully as well.


Yu Ping Chan: Thank you very much. I think that was a very comprehensive answer that touches on many aspects. And I’d like to maybe turn to Abhishek here to really reflect on that, because India has been a leader in the use of digital technologies, AI, and so forth. Reflecting from a macro level on some of the challenges that Seun has actually mentioned, how has India actually addressed some of these? And how do you balance a lot of the challenges that he’s spoken about to become really a global leader in all these aspects?


Participant: See, when you look at AI or when you look at digital public infrastructure solutions, one thing that one should keep in mind is that all these solutions are sustained, and can be scaled up and used by most people, if they are actually addressing a problem statement. If they’re helping solve a problem, then there are ways and means to make it happen, make it commercially viable, make public funding available if it results in larger social and economic benefits. For example, we had this sort of problem of financial inclusion. We had this challenge of people not having access to banking services, financial services, credit services. It has led to microfinance schemes, it has led to credit schemes, it has led to farmers getting insurance schemes. So when you build a digital public platform like the Aadhaar ID, it results in a lot of spin-off benefits, because all the leakages that were happening in public service programs were cut. People could benefit because they could take up livelihoods, and once livelihoods improve, a lot of people benefit economically. In payments, we realized that many people were out of the organized financial systems because they did not have any tools; they were not eligible for a credit card or a debit card, and they were not able to do digital transactions. With that came the Unified Payments Interface, UPI, as we call it, and today we do almost 20 billion UPI transactions a month and account for almost 50% of digital transactions globally. We have seen a lot of benefits in the rural economy and in the urban economy. Similarly, when we are looking at AI-based applications, again we have to look at what problem statements we are solving.
For example, if you look at healthcare, one key challenge that we have is diagnosing tuberculosis or diagnosing diabetic retinopathy. We do have hospitals which have got X-ray machines and which can do diagnosis in the hospital. If there is an AI tool which can do diagnosis which is as correct as, or at times better than, a human radiologist, then that tool can replace the human radiologist in that hospital. If it is offering that service, the health department, the health ministry, will be able to meet the cost for that. Similarly, if you look at education, there is a lot of need for personalised learning plans, for augmenting the availability of science and maths teachers in rural areas where those teachers are not there, for helping children with special needs get access to lesson plans and content that might be more useful for them, and for creating content in all Indian languages. We are a diverse country. So therein, again, AI-based applications can create a lot of value, and there will be public funding available for taking up such solutions. People will be willing to pay for such solutions. So, again, if it solves the problem statement, it becomes very useful. We have seen similar things coming in agriculture, where AI-based farmer advisories are helping farmers increase their incomes, reduce their input costs, reduce the water that they use for irrigation, do timely interventions for fertilisers and pesticides, and access the right markets and the right prices. When the farmers benefit economically, they will be willing to pay a cost for that. So if we design AI-based applications across sectors so that they solve some social needs or address some economic benefits, there will be a provision for funding them, there will be a provision for building a commercial model out of that, and those are the only solutions that will ultimately be sustained.


Yu Ping Chan: Thank you very much. And I really want to still try and give some opportunity for some questions from the floor. So I’m going to ask Armando and Anshul, if you don’t mind, to try and keep your responses to a minute. Very quickly, from your respective perspectives as academia and research, as well as the private sector, reflections on what you’ve heard so far and any thoughts that come to mind. Maybe we start with Armando.


Armando Guio Espanol: Sure. Well, yeah, in a minute. I was just going to say that we really need to focus more on implementation and what is working or not. And especially we need to see the efforts that are being made on implementing several of the policies and some of the ideas that have been shared here. I think that we need to understand where those accelerators are on the implementation side, and what is working and what is not. Because also we have to be very aware of how this process is taking place. And that will allow us probably to be a little bit more efficient with the resources that we have, and a little bit more accurate in the kind of support that we’re receiving and also the support that we’re giving in that sense. So I think, understanding more of that process, we need to analyze a little bit more of the implementation side, what is working, and we need to start delivering more results from all fronts. I think it’s very important right now. So that’s my minute. Thank you. That was a great minute. Anshul?


Anshul Sonak: Yeah, my minute: this requires a balanced, responsible public-private partnership and great leadership. You have Abhishek sir sitting on the stage, and his ministry, for example, really prepared these capacity-building tools for population-scale impact. Look at their example of AFRL for engaging the public. Look at their example on education, what they have been doing during COVID-19, and what they have been doing with companies like Intel and others on employability and entrepreneurship. So these four E’s, right? Engagement of the public, entrepreneurship, education, and economic development using employability. Creating the right public-private partnership model is very important, and hence the civil society dialogue is very critical.


Yu Ping Chan: Great. Thank you. Okay, before I open the floor, very quickly, another Slido so we can all check in here. The Slido that Megan is now going to put on screen, which you can answer via the QR code, asks: having heard all these conversations, and based on your own experiences, in one word or phrase, what do you think should be the number one priority for supporting or enabling an inclusive AI ecosystem? And you can’t repeat the answer that you gave for the first question. So let’s try not to have the same answers that we saw just now. Capacity building has come up again, so despite my entreaty to you to try and have another answer, this clearly is a priority. One word. What is the priority for ensuring an inclusive AI ecosystem? And then I’m going to ask colleagues as well to start thinking of the question that you’d like to ask our distinguished panelists, and I will ask the distinguished panelists again to try and keep the answers short so we can hear from as many people as possible. I will start, I think, with one question that we have in the chat, while those in the room are still doing the QR code: how can IGF, WSIS, and other international stakeholders continue to support the adoption of AI and digital healthcare systems in Africa to achieve sustainable development, especially in this era of digital transformation in health globally today? I think Abhishek talked a little bit about the Indian experience, and specifically how health is actually particularly important here. I’ll ask you if you have any thoughts on that. So this question of digital health, especially in Africa.


Participant: Yeah, in fact, what I would like to say is that what we are looking to do, especially as part of the Impact Summit that we are hosting in February, is to adopt the DPI playbook that we have, in which we built DPI, built a repository of DPI applications, and made it available for the whole world, especially the countries of the global south. Similarly, the AI-based applications in health care, whether it is for cataract screening or for diagnosing breast cancer or tuberculosis or diabetic retinopathy or similar use cases, those solutions will be made available as part of an AI use case repository. And if any countries of the global south, especially countries of Africa and the African Union, are wanting to use those solutions, we will be more than happy to offer those solutions for adoption in these countries, with the necessary fine-tuning with local data sets as may be needed.


Yu Ping Chan: And that really speaks to responsible AI, which is also right up there on the QR responses. Any questions from the floor here? Participants that are sitting here have heard the conversation, the distinguished panelists, and would like to ask a question or comment. I think I saw a gentleman over there. You can come up to the mic right here if you would like to say something. Yes, please. Thank you.


Audience: Is this mic on? A question for Mr. Abhishek. The Indian DPI model was very much rolled out in a partnership between the Indian government and the private sector, letting the private sector decide what they want to do with the technology, bringing that kind of agile, quick mindset into government, and then using open source and interoperability to roll that out. And in doing that mix of governance, do you see a change in that approach from the various deterministic kinds of applications that came out building on top of Aadhaar, building on top of UPI, the idea of the payments? Is that the same way that you’re going to be thinking about building AI applications on top of your DPI?


Participant: It’s going to be similar. It’s going to follow the similar playbook that we had for the DPI. In DPIs, as you rightly mentioned, we built the basic building blocks, like Aadhaar or UPI or the data layer, and then various applications for various sectors are built on top of that. In AI, what we are doing is that we are providing, again, the basic ingredients for building AI applications, which include providing access to affordable compute to all those who need it, especially AI developers, start-ups, researchers, academicians, students. So we have made available almost 35,000 GPUs at a very low cost of a dollar per GPU per hour for those who are needing it. Then we are also enabling a data sets platform called AI Coach, wherein data sets from across domains, across sectors, both from the public and the private sector, are made available to those building AI applications. Similarly, we are also providing tools for bias mitigation, for privacy preservation, for identifying deepfakes. So all those tools which are required to test your applications for conformance to the responsible AI principles are also being provided on a common platform. So all the necessary ingredients, which will be common for those who are fine-tuning, or those who are doing inferencing, or those who are building sector-wide applications, will be provided as a common utility, similar to the DPI model. So that’s how we are going ahead with our AI development.


Yu Ping Chan: Thank you, Abhishek. Any other questions from participants here? Our in-person audience here at the IGF? Yes. Please introduce yourself as well.


Audience: Hi. This is Jasmine Khoo from Hong Kong. I heard about a concept of localizing AI applications, and also that local people need to build their own AI systems. My question is: when you say local, do you mean the national level, the regional level, or even a grassroots community? I just want to clarify what is meant when someone says local people need to build their own AI system. And is there a way to measure, or to help those local people efficiently build, a system, with the knowledge and a way to track its performance? Because when you’re helping them, you have to put yourself in their shoes. So how do you do that in an efficient way? Thank you very much.


Yu Ping Chan: Maybe I’ll ask Seun to take the question, and then maybe I’ll ask Armando also to reflect on this question on measurement, and also then turn it over to other panelists.


Oluwaseun Adepoju: Yes, thank you so much. When we say local, it could mean the three examples you shared. For example, when you’re building agricultural stacks, I’ll use the example of plantain. There are different classifications of it; it’s a form of banana, right, plantain? And we’ve tested a number of foreign models that do not cater for those classifications of plantain. So when we say local, that means that people that come from the region where plantain originated, and those who have the history of the classification, need to contribute to it, right? That’s one example of what we mean by local. But also, in some of the work we do in Northern Nigeria, for example, when we work with farmers or vulnerable groups, when you want to use their data, for example, for social welfare distribution from the government in regions where there’s flooding or extreme poverty, we usually go back to them to ask for permission to use their data and explain what we’re using their data to do, right? And that is why we use an SMS system where you get a prompt, and you say yes or no for us to use your data for the process we’re building. That doesn’t mean that we’re not using their data for good; it’s for their good, but we also need to let them know what the data is being used for, right? Also, in terms of contribution, what we mean by local does not mean that the local people are the technical people building it, but they are aware and they are contributing to the stacks, or a knowledge session, or even a validation session of what you’re building for them, so that at the end of the day, people believe that they co-created the AI solution that they are using. And trust me, when you connect AI and DPI, for example, you need the buy-in of the people.
For us to build anything, in, like, a six-month project, we spend the first two or three months just engaging the people, co-creating the problem statement with them, and then they contribute to the feedback when we build the first iteration of the solution. So at the end of the day, the buy-in is almost automatic, because they’ve been part of the journey. We’ve seen situations where people have invested millions of dollars on solutions, tech solutions, that people rejected. Not even technology this time around: in Sudan, for example, the government built a well for people to use because of water issues. But after building that, the government discovered that the people still went to the local wells to fetch water. And they asked them the question: why are you walking that distance to fetch water again? And the women said, that is the only time we have to catch up in the evenings, when we take a walk to where we usually fetch water. And yet potable water had been provided in those communities for them. So for us to avoid unintended consequences of the technology we’re building, it’s good for it to be local. And that is the definition of local in the context of what I said.
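The SMS yes-or-no consent prompt described above could be sketched minimally as follows. This is purely illustrative: the message wording, reply keywords, and phone numbers are assumptions, not details of the actual system used in Northern Nigeria.

```python
# Illustrative sketch of an SMS data-use consent flow: before using community
# members' data for welfare distribution, send a prompt and treat only an
# explicit "yes" as consent. All names and wording here are assumptions.
from typing import Optional

CONSENT_PROMPT = (
    "We would like to use your farm data to plan flood-relief support. "
    "Reply YES to allow or NO to decline."
)

def parse_reply(reply: str) -> Optional[bool]:
    """Map a free-text SMS reply to a consent decision.

    True = consent, False = refusal, None = unclear (treated as no consent).
    """
    text = reply.strip().upper()
    if text in {"YES", "Y"}:
        return True
    if text in {"NO", "N"}:
        return False
    return None

# Only explicitly consenting numbers are eligible for data use.
replies = {"+234-800-0001": "yes", "+234-800-0002": "No", "+234-800-0003": "maybe"}
consented = [number for number, reply in replies.items() if parse_reply(reply) is True]
```

The key design point matching the discussion is that silence or an ambiguous answer never counts as consent; only an explicit affirmative reply does.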


Yu Ping Chan: Okay. Thank you so much. Armando and then Aubra, very quickly. And then we’re going to go back to a last round of one-liners, and very quickly to the audience. Armando and then Aubra. And then I see two very quick questions here. Armando, in 30 seconds.


Armando Guio Espanol: Perfect, no, not 30 seconds. No, well, I was just going to say that definitely this is very contextual. Again, I think we can talk about regional or local or national infrastructure and technologies being provided. That’s something that depends. So for example, in Latin America, we’re having some cases of LLMs being developed for the whole region as a regional project. We will see some elements that will succeed and others that will, of course, not deliver as expected. So this is still very challenging. And I think the big element for me is the governance of these technologies and this kind of public infrastructure that we are building. Who is being involved and what’s the kind of participation, stakeholder participation, and who is taking part in the decisions being made about the functioning, the training, and the whole development of this kind of infrastructure, which is critical for many sectors. So there’s also some perhaps food for thought about the governance of this technology, of this infrastructure, and of course, of many of the related projects that we are going to be seeing with the use of this technology and that will have a sector and national, regional impact, of course. So just for us to consider at the same time, not only the technology, but also the governance side there.


Yu Ping Chan: Aubra, over to you. Thank you so much, Armando.


Aubra Anthony: Yeah, just very briefly, I would just plus-one everything that Seun mentioned around the need for engagement. When we talk about local contributions, it’s not just about tech expertise that needs to be brought in. It’s also about engagement with the community and user-centered design, human-centered design specifically. And we’ve seen a range of examples where that’s been done really effectively, and, importantly, ways that those opportunities for engagement can be turned into opportunities for capacity development. I don’t think we have time to go into it, but we can share a number of links to where that’s been done effectively. I’m thinking specifically of Togo in the context of COVID. They worked initially to develop a model for delivery of cash benefits with a university partner far, far away, and then transitioned that into a real strategic goal for the government of capacity development in country, based on those learnings, moving toward more sovereign approaches to developing AI from that experience. So there’s a spectrum of ways that that kind of local focus can look different across different contexts. But I think there’s a rich body of examples to draw from.


Yu Ping Chan: I think we’re out of time. So I’m very sorry to those who wanted to ask questions here, but the speakers will be available for a little bit. I’m going to ask Anshul and then Abhishek for their last sort of 10-second wrap-up. But at the same time, I just wanted to give the audience a chance to reflect on the question that we asked at the start, on a scale of one to 10, and to see if your opinions have changed since we asked it. You recall that the results ended up at about an even 5.5, where we were equally optimistic and pessimistic. So after having heard this conversation, on a scale of one to 10, how optimistic are you that AI can accelerate inclusive sustainable development over the next five years? Then I’ll ask Anshul, and then Abhishek, for their last comments. Anshul.


Anshul Sonak: Yeah, 10-second comment, bringing AI skills for everyone has to be a national priority.


Participant: And then Abhishek, your closing words. See, what I would say is: I agree that we need to bring AI skills, but at the same time, I would say, bring more global partnership for enabling the sharing of applications, sharing of data sets, sharing of algorithms, and sharing of expertise. If we bring it through summits, through conferences, more global sharing, it will really, really help us in moving forward.


Yu Ping Chan: Thank you so much. And again, to everybody here, to our distinguished panelists, to everybody in the room that’s really contributed to what I thought was actually a very rich and engaging discussion, and to also your views here. Thank you so much for all being here today, and let’s all thank our distinguished panelists with a quick round of applause, and my online moderators in support from the UNDP. My thanks again to Co-Creation Hub and the Carnegie Endowment for co-organizing this event with us. Have a good day, everybody, and we hope you enjoyed the session as much as we did organizing it. Thank you.


A

Armando Guio Espanol

Speech speed

195 words per minute

Speech length

1106 words

Speech time

338 seconds

Need for evidence-based decision making to reduce information asymmetries and understand real AI impact

Explanation

Armando argues that decision makers need access to good evidence about AI’s real features and impact to make informed decisions. He emphasizes the importance of gathering methodologies and analysis to understand what AI technology is actually doing, rather than relying on assumptions or hype.


Evidence

Example of measuring AI impact on future work with MIT colleagues, finding augmentation rather than job replacement. Development of epoch analysis methodology to analyze real AI impact in specific areas.


Major discussion point

Current Landscape and Challenges of AI for Sustainable Development


Topics

Development | Economic


Agreed with

– Oluwaseun Adepoju

Agreed on

Need for evidence-based decision making in AI development


Governance of AI infrastructure requires stakeholder participation in decision-making processes

Explanation

Armando emphasizes that the governance of AI technologies and public infrastructure being built is critical and must involve proper stakeholder participation. He argues that decisions about the functioning, training, and development of AI infrastructure should include various stakeholders given its impact on many sectors.


Evidence

Examples from Latin America where LLMs are being developed as regional projects, with mixed success expected


Major discussion point

Localization and Community Engagement in AI Development


Topics

Legal and regulatory | Development


Agreed with

– Aubra Anthony
– Oluwaseun Adepoju

Agreed on

Community engagement and stakeholder participation essential for AI governance


Focus needed on implementation and understanding what works in practice rather than just theory

Explanation

Armando argues that there needs to be more focus on implementation and understanding what is actually working or not working in practice. He emphasizes the need to see efforts being made in implementing policies and ideas, and to understand the accelerators of implementation.


Major discussion point

Implementation and Practical Applications


Topics

Development | Legal and regulatory


A

Aubra Anthony

Speech speed

177 words per minute

Speech length

2282 words

Speech time

770 seconds

AI development is both promising and fraught due to concentrated power with few multinational players and digital divides

Explanation

Aubra argues that while AI shows promise for SDGs, there’s a risk that longstanding digital divides will become more calcified. She points out that power is becoming concentrated with few multinational players who dominate discourse, priority setting, and business models that often don’t serve populations most relevant to achieving SDGs.


Evidence

Africa accounts for only 0.1% of world’s computing capacity, only 5% of AI talent in Africa has access to needed compute power, and 7,000 languages spoken on the continent are considered under-resourced for NLP training


Major discussion point

Current Landscape and Challenges of AI for Sustainable Development


Topics

Development | Economic


Disagreed with

– Participant

Disagreed on

Priority focus for AI development constraints


Historical donor-led funding approaches are insufficient; need for collaborative pooled funding efforts

Explanation

Aubra argues that traditional donor-led, siloed, uncoordinated investment leads to results that are far less than the sum of their parts. She advocates for more collaborative pooled funding efforts that can better address the scale and scope of needs for AI ecosystem development.


Evidence

Examples include AI4D Funders Collaborative launched in 2023, Public Interest AI initiative launched at Paris AI Action Summit, and UNDP-Italian government AI Hub launch


Major discussion point

Funding and Governance Models for AI Development


Topics

Development | Economic


AI development must be non-extractive, self-sustaining, and involve communities impacted by AI

Explanation

Aubra argues that for AI to deliver for SDGs, it cannot be helicoptered in from afar but must involve communities impacted by AI in its development, deployment, and governance. She emphasizes that funding must be structured to be non-extractive and capable of becoming self-sustaining.


Evidence

Reference to principles for digital development and recognition that funding paradigm needs to shift to be more localized


Major discussion point

Funding and Governance Models for AI Development


Topics

Development | Human rights principles


Agreed with

– Armando Guio Espanol
– Oluwaseun Adepoju

Agreed on

Community engagement and stakeholder participation essential for AI governance


User-centered design and community engagement can be turned into capacity development opportunities

Explanation

Aubra argues that local contributions to AI development should include not just technical expertise but also community engagement and human-centered design. She emphasizes that opportunities for engagement can be transformed into capacity development opportunities.


Evidence

Example of Togo during COVID, which worked with a university partner to develop cash benefit delivery model and then transitioned to building in-country capacity for more sovereign AI development approaches


Major discussion point

Localization and Community Engagement in AI Development


Topics

Development | Capacity development


O

Oluwaseun Adepoju

Speech speed

179 words per minute

Speech length

2049 words

Speech time

685 seconds

Transition from AI hype to hope stage, requiring intentional innovation and multi-stakeholder approaches

Explanation

Oluwaseun argues that AI is transitioning from a hype stage (where everyone was investing due to fear of missing out) to a hope stage where concrete use cases can be identified. He emphasizes that this transition requires intentional innovation and multi-stakeholder approaches, especially when building AI solutions for sectors like education that involve children.


Evidence

Example of building EdTech solutions for children in Africa requiring 32 types of professionals including AI ethicists, safeguarding professionals, and digital security experts. Examples from Nigeria and Rwanda showing multi-stakeholder approaches.


Major discussion point

Current Landscape and Challenges of AI for Sustainable Development


Topics

Development | Capacity development


Patient capital approach needed to avoid commercialization pressure that compromises safety and equity

Explanation

Oluwaseun argues that AI innovation needs patient capital and should not be rushed into commercialization. He emphasizes that commercialization pressure leads to compromises on safety, equity, and proper use of people’s data, and advocates for giving at least one year to prove solutions work safely before discussing commercialization.


Evidence

Examples of tools they’ve experimented with that are fundamentally ethically wrong in how data is used to fit models. Reference to big tech companies compromising on safety and equity due to commercialization pressure.


Major discussion point

Funding and Governance Models for AI Development


Topics

Economic | Human rights principles


Disagreed with

– Participant

Disagreed on

Approach to AI development speed and commercialization pressure


AI solutions must be local, built by local people, and address core societal issues rather than creating unnecessary complexity

Explanation

Oluwaseun argues that AI companies should focus on core societal issues rather than just using AI for the sake of it. He emphasizes connecting AI to practical problems and avoiding the trap of trying to fix what isn’t broken with AI technology.


Evidence

Example of supporting innovators integrating AI with DPI, example of Nigerian state using AI to analyze trends in seed distribution to farmers rather than building unnecessary new systems, example of plantain classification that foreign models cannot handle properly


Major discussion point

Localization and Community Engagement in AI Development


Topics

Development | Sustainable development


Agreed with

– Participant

Agreed on

AI solutions must address real problems to be sustainable and scalable


Local involvement means community participation in problem definition, validation, and co-creation of solutions

Explanation

Oluwaseun explains that ‘local’ means involving people from the region in contributing to AI solutions, getting permission for data use, and ensuring community participation in co-creating solutions. He emphasizes that people should believe they co-created the AI solution they are using to ensure buy-in.


Evidence

Examples include plantain classification requiring input from people who originated the crop, SMS permission system for data use in Northern Nigeria, example from Sudan where government-built wells were rejected because women preferred walking to traditional wells for social interaction


Major discussion point

Localization and Community Engagement in AI Development


Topics

Development | Human rights principles


Agreed with

– Armando Guio Espanol
– Aubra Anthony

Agreed on

Community engagement and stakeholder participation essential for AI governance


A

Anshul Sonak

Speech speed

225 words per minute

Speech length

593 words

Speech time

157 seconds

AI presents opportunities as a potential equalizer but faces challenges from various divides (gender, racial, country)

Explanation

Anshul argues that AI can be a potential big equalizer, like electricity, that can change everything when properly implemented. However, he acknowledges significant challenges, including various divides (gender, racial, colour, country) that create asymmetries which must be addressed to build a truly inclusive and responsible society.


Evidence

Research showing AI can save 15 hours per week in personal productivity; research published in Nature indicating 79% of SDG targets can be addressed by AI when used responsibly


Major discussion point

Current Landscape and Challenges of AI for Sustainable Development


Topics

Development | Human rights principles


Bringing AI skills to everyone should be a national priority

Explanation

Anshul argues that developing AI skills for the entire population should be treated as a national priority. He emphasizes the importance of balanced, responsible public-private partnerships and strong leadership to achieve population-scale impact in AI capacity building.


Evidence

Reference to India’s ministry example with capacity-building tools, examples of engagement, education, entrepreneurship, and employability programs


Major discussion point

Capacity Building and Skills Development


Topics

Development | Capacity development


Agreed with

– Yu Ping Chan

Agreed on

Capacity building is fundamental priority for inclusive AI


P

Participant

Speech speed

229 words per minute

Speech length

1715 words

Speech time

447 seconds

Energy consumption of AI systems must be balanced against SDG objectives for renewable energy and climate

Explanation

The participant argues that the high energy consumption of AI systems, particularly advanced GPUs, must be balanced against sustainable development goals for renewable energy and climate. They emphasize that the benefits of AI applications must be weighed against the costs of their high energy usage.


Evidence

Example that H200 GPU consumes power equivalent to one U.S. home, mention of even more energy-intensive Blackwells and B200 systems


Major discussion point

Current Landscape and Challenges of AI for Sustainable Development


Topics

Development | Sustainable development


Disagreed with

– Aubra Anthony

Disagreed on

Priority focus for AI development constraints


AI applications must address real problem statements to be commercially viable and publicly fundable

Explanation

The participant argues that AI solutions are sustained and scalable when they actually address real problems and help solve them. They emphasize that when solutions create social and economic benefits, funding becomes available through commercial viability or public investment.


Evidence

Examples from India including financial inclusion through digital ID leading to microfinance and credit schemes, UPI enabling 20 billion monthly transactions representing 50% of global digital transactions, AI applications in healthcare for tuberculosis and diabetic retinopathy diagnosis


Major discussion point

Digital Public Infrastructure and Scalable Solutions


Topics

Development | Economic


Agreed with

– Oluwaseun Adepoju

Agreed on

AI solutions must address real problems to be sustainable and scalable


Disagreed with

– Oluwaseun Adepoju

Disagreed on

Approach to AI development speed and commercialization pressure


India’s DPI model provides basic building blocks (compute access, datasets, testing tools) for AI development

Explanation

The participant explains that India is applying its successful DPI playbook to AI development by providing the basic ingredients: affordable compute access, a datasets platform, and testing tools. This approach makes common utilities available to AI developers, startups, researchers, and students.


Evidence

35,000 GPUs available at $1 per GPU per hour; the AIKosh datasets platform with data from public and private sectors; tools for bias mitigation, privacy preservation, and deepfake identification


Major discussion point

Digital Public Infrastructure and Scalable Solutions


Topics

Infrastructure | Development


Similar playbook approach being applied to AI as was used for successful DPI implementations

Explanation

The participant explains that India is following the same successful approach used for DPI development, where basic building blocks are provided and various applications are built on top. The model involves public-private partnerships with government providing infrastructure and private sector building applications.


Evidence

Success of Aadhaar, UPI, and data layer implementations that enabled various sector applications to be built on top


Major discussion point

Digital Public Infrastructure and Scalable Solutions


Topics

Infrastructure | Economic


Public-private partnerships and global cooperation essential for sharing applications, datasets, and expertise

Explanation

The participant argues that global partnerships are essential for sharing AI applications, datasets, algorithms, and expertise. They emphasize that conferences and summits facilitate this sharing, which helps countries move forward collectively.


Evidence

Reference to upcoming Impact Summit in February where AI healthcare applications will be made available through repository for Global South countries, especially Africa


Major discussion point

Funding and Governance Models for AI Development


Topics

Development | Economic


Need to prioritize AI usage for essential functions and limit energy consumption for non-essential tasks

Explanation

The participant argues that there should be prioritization of which tasks AI should perform versus tasks that humans can do better. They emphasize limiting AI use for non-essential functions like simple text writing or summarization to reduce energy consumption and carbon footprint.


Evidence

Examples of people using AI for very simple tasks like writing poems, writing text, or summarizing text that humans can do effectively


Major discussion point

Implementation and Practical Applications


Topics

Sustainable development | Development


Y

Yu Ping Chan

Speech speed

196 words per minute

Speech length

3045 words

Speech time

930 seconds

Capacity building emerged as top priority from audience responses for inclusive AI ecosystems

Explanation

Yu Ping Chan notes that capacity building consistently emerged as a top priority in audience responses when asked about priorities for supporting inclusive AI ecosystems. This reflects the community’s recognition that building human capabilities is fundamental to inclusive AI development.


Evidence

Slido poll results showing capacity building as recurring top response from both online and in-person participants


Major discussion point

Capacity Building and Skills Development


Topics

Development | Capacity development


Agreed with

– Anshul Sonak

Agreed on

Capacity building is fundamental priority for inclusive AI


A

Audience

Speech speed

164 words per minute

Speech length

307 words

Speech time

111 seconds

Clarification needed on what ‘local’ means in AI development – whether national, regional, or grassroots community level

Explanation

An audience member from Hong Kong sought clarification on the concept of localizing AI applications, asking whether ‘local’ refers to the national level, the regional level, or the grassroots community level. They also asked how to help local people build AI systems efficiently, with proper knowledge transfer and performance tracking.


Major discussion point

Localization and Community Engagement in AI Development


Topics

Development | Capacity development


Need for understanding how India’s public-private partnership model for DPI can be applied to AI development

Explanation

An audience member asked about how India’s successful DPI model, which involved partnership between government and private sector with agile approaches and open source interoperability, would be applied to building AI applications. They wanted to understand if the same governance approach would be used for AI as was used for applications built on top of Aadhaar and UPI.


Evidence

Reference to India’s DPI success with Aadhaar and UPI systems


Major discussion point

Digital Public Infrastructure and Scalable Solutions


Topics

Infrastructure | Economic


International stakeholders should support AI adoption in African digital healthcare systems

Explanation

An audience member asked how IGF, WSIS, and other international stakeholders can continue to support the adoption of AI and digital healthcare systems in Africa to achieve sustainable development. This question emphasized the importance of international cooperation in the era of digital transformation and global health challenges.


Major discussion point

Current Landscape and Challenges of AI for Sustainable Development


Topics

Development | Infrastructure


Agreements

Agreement points

Need for evidence-based decision making in AI development

Speakers

– Armando Guio Espanol
– Oluwaseun Adepoju

Arguments

Need for evidence-based decision making to reduce information asymmetries and understand real AI impact


AI solutions must be local, built by local people, and address core societal issues rather than creating unnecessary complexity


Summary

Both speakers emphasize the importance of understanding what AI actually does and focusing on real, measurable impacts rather than hype or theoretical benefits. They advocate for evidence-based approaches to AI development and implementation.


Topics

Development | Economic


Community engagement and stakeholder participation essential for AI governance

Speakers

– Armando Guio Espanol
– Aubra Anthony
– Oluwaseun Adepoju

Arguments

Governance of AI infrastructure requires stakeholder participation in decision-making processes


AI development must be non-extractive, self-sustaining, and involve communities impacted by AI


Local involvement means community participation in problem definition, validation, and co-creation of solutions


Summary

All three speakers agree that AI development and governance must involve meaningful participation from stakeholders and communities that will be impacted by the technology, rather than top-down approaches.


Topics

Development | Human rights principles


AI solutions must address real problems to be sustainable and scalable

Speakers

– Oluwaseun Adepoju
– Participant

Arguments

AI solutions must be local, built by local people, and address core societal issues rather than creating unnecessary complexity


AI applications must address real problem statements to be commercially viable and publicly fundable


Summary

Both speakers emphasize that AI solutions should focus on solving actual societal problems rather than applying AI for its own sake. Solutions that address real needs become commercially viable and attract sustainable funding.


Topics

Development | Economic


Capacity building is fundamental priority for inclusive AI

Speakers

– Anshul Sonak
– Yu Ping Chan

Arguments

Bringing AI skills to everyone should be a national priority


Capacity building emerged as top priority from audience responses for inclusive AI ecosystems


Summary

Both speakers recognize capacity building as a critical foundation for inclusive AI development, with Anshul advocating it as a national priority and Yu Ping noting it as the top audience priority.


Topics

Development | Capacity development


Similar viewpoints

Both speakers critique traditional funding approaches and advocate for alternative funding models that prioritize long-term sustainability and community needs over quick commercialization and donor-driven agendas.

Speakers

– Aubra Anthony
– Oluwaseun Adepoju

Arguments

Historical donor-led funding approaches are insufficient; need for collaborative pooled funding efforts


Patient capital approach needed to avoid commercialization pressure that compromises safety and equity


Topics

Development | Economic


Both speakers emphasize that community engagement in AI development should be meaningful and transformative, turning participation into capacity building opportunities rather than mere consultation.

Speakers

– Aubra Anthony
– Oluwaseun Adepoju

Arguments

User-centered design and community engagement can be turned into capacity development opportunities


Local involvement means community participation in problem definition, validation, and co-creation of solutions


Topics

Development | Capacity development


Both speakers advocate for systematic, large-scale approaches to AI development involving strong public-private partnerships and emphasizing the importance of building national capabilities and international cooperation.

Speakers

– Anshul Sonak
– Participant

Arguments

Bringing AI skills to everyone should be a national priority


Public-private partnerships and global cooperation essential for sharing applications, datasets, and expertise


Topics

Development | Economic


Unexpected consensus

Energy consumption and sustainability concerns in AI development

Speakers

– Participant
– Oluwaseun Adepoju

Arguments

Energy consumption of AI systems must be balanced against SDG objectives for renewable energy and climate


Transition from AI hype to hope stage, requiring intentional innovation and multi-stakeholder approaches


Explanation

While the discussion focused primarily on social and economic aspects of AI for development, there was unexpected consensus on the need to balance AI benefits with environmental sustainability concerns, showing awareness that AI development must consider its environmental footprint.


Topics

Development | Sustainable development


Need to prioritize AI applications and avoid unnecessary complexity

Speakers

– Participant
– Oluwaseun Adepoju

Arguments

Need to prioritize AI usage for essential functions and limit energy consumption for non-essential tasks


AI solutions must be local, built by local people, and address core societal issues rather than creating unnecessary complexity


Explanation

Both speakers unexpectedly converged on the idea that not all tasks need AI solutions, and there should be intentional prioritization of where AI is applied, challenging the common assumption that more AI adoption is always better.


Topics

Development | Sustainable development


Overall assessment

Summary

The speakers demonstrated strong consensus on several key principles: the need for evidence-based, community-engaged AI development; the importance of addressing real societal problems rather than applying AI for its own sake; the critical role of capacity building; and the need for alternative funding models that prioritize sustainability over quick commercialization. There was also unexpected agreement on environmental sustainability concerns and the need for selective AI application.


Consensus level

High level of consensus on fundamental principles of responsible AI development, with speakers from different sectors (academia, private sector, government, civil society) aligning on core values of community engagement, evidence-based approaches, and sustainable development. This strong consensus suggests a mature understanding of AI development challenges and points toward actionable collaborative approaches for AI for sustainable development initiatives.


Differences

Different viewpoints

Approach to AI development speed and commercialization pressure

Speakers

– Oluwaseun Adepoju
– Participant

Arguments

Patient capital approach needed to avoid commercialization pressure that compromises safety and equity


AI applications must address real problem statements to be commercially viable and publicly fundable


Summary

Oluwaseun advocates for patient capital and avoiding early commercialization pressure, giving at least one year to prove solutions work safely before discussing commercialization. The Indian government representative emphasizes that AI solutions need to be commercially viable or publicly fundable from the start by addressing real problems, suggesting a more immediate focus on practical implementation and sustainability.


Topics

Economic | Development | Human rights principles


Priority focus for AI development constraints

Speakers

– Participant
– Aubra Anthony

Arguments

Energy consumption of AI systems must be balanced against SDG objectives for renewable energy and climate


AI development is both promising and fraught due to concentrated power with few multinational players and digital divides


Summary

The Indian government representative prioritizes energy consumption and environmental sustainability as key constraints that must be addressed in AI development. Aubra focuses more on power concentration, digital divides, and access inequalities as the primary constraints, with less emphasis on environmental concerns.


Topics

Sustainable development | Development | Economic


Unexpected differences

Urgency vs. patience in AI implementation

Speakers

– Oluwaseun Adepoju
– Aubra Anthony

Arguments

Patient capital approach needed to avoid commercialization pressure that compromises safety and equity


AI development is both promising and fraught due to concentrated power with few multinational players and digital divides


Explanation

While both speakers advocate for inclusive AI development, they have different perspectives on timing. Oluwaseun explicitly argues for patience and taking time to ensure safety and equity, while Aubra emphasizes the urgency created by digital divides and the risk of being left behind. This creates tension between careful, patient development and the perceived need to act quickly to avoid further marginalization.


Topics

Development | Human rights principles | Economic


Overall assessment

Summary

The discussion shows relatively low levels of direct disagreement, with most speakers sharing common goals of inclusive, sustainable AI development. The main areas of disagreement center on implementation approaches, timing, and priority constraints rather than fundamental objectives.


Disagreement level

Low to moderate disagreement level. The speakers largely align on core principles but differ on tactical approaches, suggesting that while there is broad consensus on the vision for AI for sustainable development, there are legitimate debates about the best pathways to achieve these goals. This level of disagreement is constructive and reflects different expertise areas and regional perspectives rather than fundamental ideological divisions.




Takeaways

Key takeaways

AI for sustainable development requires evidence-based decision making to reduce information asymmetries and understand real impact rather than relying on hype


Current AI landscape is characterized by concentrated power among few multinational players, creating digital divides that risk excluding Global South countries


AI development must be localized and community-driven, involving affected populations in problem definition, validation, and co-creation of solutions


Funding models need to shift from traditional donor-led approaches to collaborative, pooled funding that is non-extractive and builds toward self-sustainability


Multi-stakeholder approaches are essential, requiring diverse expertise including AI ethicists, safeguarding professionals, and security experts beyond just technical teams


AI applications must address real societal problems to be viable and sustainable, rather than creating solutions for non-existent problems


Energy consumption of AI systems must be balanced against climate and renewable energy objectives


Digital Public Infrastructure (DPI) model can be successfully applied to AI development by providing basic building blocks like compute access, datasets, and testing tools


Capacity building and AI skills development should be national priorities for inclusive AI ecosystems


Resolutions and action items

India will make AI-based healthcare applications available through an AI use case repository for Global South countries, especially Africa, at the upcoming AI Impact Summit in February


India is providing access to 35,000 GPUs at low cost ($1 per GPU per hour) for AI developers, startups, researchers, and students


UNDP launched the AI Hub for Sustainable Development with Italy as part of G7 Presidency to accelerate AI adoption in Africa


Carnegie Endowment will publish research findings on funding needs and market inefficiencies in AI ecosystem development


Co-Creation Hub will continue patient capital approach, giving innovators at least one year to prove solutions work safely and equitably before commercialization pressure


Unresolved issues

How to effectively measure and track performance of locally-built AI systems across different contexts (national, regional, grassroots)


Specific mechanisms for ensuring linguistic equity in AI development for under-resourced languages


How to balance the urgency of AI adoption with the need for careful, community-engaged development processes


Concrete strategies for addressing the AI talent and compute capacity gaps in Africa (the continent holds only 0.1% of the world’s computing capacity, and just 5% of its AI talent has the compute access it needs)


How to prioritize AI usage for essential vs. non-essential functions to manage energy consumption


Governance frameworks for AI infrastructure that ensure meaningful stakeholder participation in decision-making


Suggested compromises

Adopt a ‘patient capital’ approach that delays commercialization pressure for at least one year to ensure safety and equity in AI development


Use public value theory as the foundation for AI projects before pursuing commercialization


Balance global AI ambitions with local capacity building by providing basic infrastructure (compute, data, tools) while allowing local innovation on top


Combine global cooperation for sharing applications and expertise with local ownership and governance of AI systems


Focus on augmentation rather than replacement of human capabilities, as evidence shows AI is more effective at improving jobs than at replacing them


Thought provoking comments

Instead of replacement of jobs, for example, what we are seeing right now is augmentation, actually improvement in the work some workers around the world are developing, and actually AI being helpful in that sense… we need good evidence in order for that process to take place in a way in which really it’s going to be helpful for many countries.

Speaker

Armando Guio Espanol


Reason

This comment challenges the dominant narrative of AI as a job destroyer and reframes it as a tool for worker augmentation. It emphasizes the critical need for evidence-based decision making rather than fear-driven policies, which is particularly insightful given the tendency toward sensationalized AI discourse.


Impact

This comment set a foundational tone for the entire discussion by establishing the importance of evidence over hype. It influenced subsequent speakers to focus on practical applications and real-world outcomes rather than theoretical concerns, and established the theme of moving from ‘hype to hope’ that other panelists later built upon.


Africa currently accounts for only 0.1% of the world’s computing capacity, and just 5% of the AI talent in Africa has access to the compute power it needs… These different issues of inclusion crop up when you think about the way that concentration is affecting access globally, but there’s also opportunity there.

Speaker

Aubra Anthony


Reason

This comment provides stark quantitative evidence of the AI equity gap while simultaneously reframing constraints as innovation opportunities. It’s particularly insightful because it moves beyond general statements about digital divides to specific, actionable data points that illustrate the scale of the challenge.


Impact

This comment fundamentally shifted the discussion from abstract concepts of inclusion to concrete data about resource disparities. It prompted other speakers to focus on practical solutions and local innovation, and established the framework for discussing how constraints can drive innovation rather than simply being barriers to overcome.


We’ve seen that we’re transitioning gradually from hype to hope. We can see use cases that we can point to, that is driving confidence… but before we transition from hope to truth, we’re going to make a lot of mistakes, we’re going to have a lot of losses, and we’re also going to see a lot of success at the end of the day.

Speaker

Oluwaseun Adepoju


Reason

This three-stage framework (hype → hope → truth) provides a sophisticated analytical lens for understanding technology adoption cycles. It’s particularly insightful because it acknowledges both the inevitable failures and the learning process inherent in technological development, offering a realistic yet optimistic perspective.


Impact

This framework became a recurring reference point throughout the discussion, helping other panelists contextualize their observations about AI development. It encouraged a more nuanced conversation about expectations and timelines, moving away from binary success/failure thinking toward a more mature understanding of technology evolution.


AI is one tool in the toolbox, and when you have this sense of urgency, that can both help drive the conversation of how we leverage those tools to suit our needs, but I think it also risks forcing us to adopt a solution that may not always match the problem.

Speaker

Aubra Anthony


Reason

This comment addresses a critical cognitive bias in technology adoption – the tendency to apply new tools to problems they weren’t designed to solve simply because of their novelty or perceived importance. It’s insightful because it warns against ‘solution in search of a problem’ thinking while acknowledging the legitimate urgency around AI adoption.


Impact

This observation prompted several panelists to emphasize problem-first rather than technology-first approaches. It influenced the discussion toward more careful consideration of when AI is and isn’t appropriate, and reinforced the importance of evidence-based decision making that Armando had established earlier.


For us, AI is local and must be built by local people… when you connect AI and DPI, for example, you need the buy-in of the people. Forced to build anything in, like, a six-month project, we spent the first two or three months just engaging the people, co-creating the problem statement with them.

Speaker

Oluwaseun Adepoju


Reason

This comment redefines what ‘local’ means in AI development, moving beyond geographic considerations to include community engagement, co-creation, and cultural understanding. The practical example of spending half the project timeline on community engagement challenges conventional tech development timelines and priorities.


Impact

This comment significantly influenced the discussion’s focus on community engagement and participatory design. It prompted questions from the audience about what ‘local’ means and led to rich examples from other panelists about successful community-centered AI projects. It also reinforced the theme that sustainable AI development requires patience and genuine partnership rather than rapid deployment.


When we build compute systems for AI applications and models, the amount of energy that is needed for powering these systems is very, very high… We need to prioritise which are the tasks which AI should do, which are the tasks that AI need not do.

Speaker

Abhishek (Indian government representative)


Reason

This comment introduces a crucial sustainability constraint that challenges the ‘AI for everything’ mentality. It’s particularly insightful because it connects AI development directly to climate goals and forces a conversation about resource allocation and prioritization that is often overlooked in AI enthusiasm.


Impact

This comment brought environmental sustainability into sharp focus and prompted discussion about the trade-offs between AI benefits and environmental costs. It influenced the conversation toward more thoughtful consideration of when AI is truly necessary versus when it’s simply convenient, adding a critical dimension to the ‘appropriate technology’ discussion.


Overall assessment

These key comments collectively transformed what could have been a typical ‘AI is great/AI is dangerous’ discussion into a nuanced exploration of practical implementation challenges and opportunities. The comments established several crucial frameworks: the hype-hope-truth progression, the tool-in-toolbox perspective, the importance of evidence-based decision making, and the centrality of community engagement. Together, they shifted the conversation from abstract policy discussions toward concrete, actionable insights about how to develop AI responsibly and effectively. The comments also successfully balanced optimism about AI’s potential with realistic acknowledgment of constraints and challenges, creating space for both innovation and caution. Most importantly, they elevated voices and perspectives from the Global South, ensuring the discussion remained grounded in the realities faced by the communities most likely to be left behind in AI development.


Follow-up questions

How to measure the real impact of AI in specific areas like future of work and job augmentation vs replacement

Speaker

Armando Guio Espanol


Explanation

He mentioned they are developing methodologies with MIT colleagues to analyze real impact of AI, specifically noting they see augmentation rather than replacement of jobs, but emphasized need for better evidence and analysis


How to address information asymmetries between different stakeholders in AI development and deployment

Speaker

Armando Guio Espanol


Explanation

He highlighted the need to reduce big information asymmetries that exist and help decision makers understand what technologies are available and their real features


How to balance AI benefits with energy consumption and climate goals

Speaker

Abhishek (Indian government representative)


Explanation

He raised concerns about high energy consumption of AI systems and the need to balance productivity gains with renewable energy goals and carbon footprint reduction


How to prioritize which tasks should use AI versus tasks humans can do better

Speaker

Abhishek (Indian government representative)


Explanation

He questioned why AI is being used for simple tasks like writing poems when humans can do them better, suggesting need for prioritization framework


Research on funding paradigms and market inefficiencies in AI ecosystem development

Speaker

Aubra Anthony


Explanation

She mentioned Carnegie is conducting research on where funding needs are best matched by supply and where market inefficiencies exist, with findings to be published soon


How to structure funding to be non-extractive and capable of becoming self-sustaining

Speaker

Aubra Anthony


Explanation

She identified this as a key finding from their research – funding must have a path towards sustainability whether through commercial sector engagement or otherwise


How to measure and track performance of locally-built AI systems efficiently

Speaker

Jasmine Khoo (audience member from Hong Kong)


Explanation

She asked about ways to measure or help local people efficiently build AI systems and track their performance when providing assistance


What constitutes ‘local’ in AI development – national, regional, or grassroots community level

Speaker

Jasmine Khoo (audience member from Hong Kong)


Explanation

She sought clarification on the definition and scope of ‘local’ when discussing locally-built AI systems


How to focus more on implementation and understanding what is working or not working in AI for development

Speaker

Armando Guio Espanol


Explanation

He emphasized need to analyze implementation side, identify accelerators, and understand what works to be more efficient with resources


How to transition AI applications from proof-of-work to proof-of-stake models to reduce energy consumption

Speaker

Oluwaseun Adepoju


Explanation

He mentioned benchmarking blockchain’s transition and suggested this could be a future iteration for addressing AI’s unintended energy consequences


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Open Forum #1 Digital Inclusiveness in the Kingdom of Saudi Arabia


Session at a glance

Summary

This discussion focused on Saudi Arabia’s digital inclusiveness initiative launched by the Digital Government Authority to enable accessibility for people with disabilities and elderly citizens. Fawaz Alanazi presented how Saudi Arabia has achieved high rankings in global digital government indexes, including third place in the World Bank’s Government Tech Maturity Index and sixth in the UN E-Government Development Index, as part of the Vision 2030 digital transformation goals. The Digital Government Authority, established in 2021, operates an innovation hub that provides digital advisory services, training programs like Kudratik which has trained over 400 government employees, and various specialized labs including design studios and usability labs.


The digital inclusiveness program addresses a significant need, as Saudi Arabia has 1.6 million elderly citizens (4.8% of the population) and 1.4 million people with disabilities (4.2% of the population) who face various impairments including cognitive, mobility, visual, and hearing challenges. The program aims to ensure independence and ease of use in digital services for everyone by aligning with WCAG accessibility standards and humanizing the service experience. During the Q&A session, an attendee from the Norwegian Digitalization Agency asked about regulatory compliance, WCAG requirements implementation, and why private sector entities aren’t included in the legislation. Alanazi explained that Saudi Arabia uses an e-participation platform for citizen feedback and collaborates with private sectors through the SADF initiative for public-private partnerships. The discussion highlighted Saudi Arabia’s comprehensive approach to digital transformation while emphasizing the importance of leaving no one behind in accessing government digital services.


Keypoints

**Major Discussion Points:**


– **Saudi Arabia’s Digital Government Achievements**: The country has achieved high rankings in global digital government indexes, including 1st regionally in the ESCWA index, 3rd globally in the Government Tech Maturity Index, and 6th in the E-Government Development Index, supported by the Digital Government Authority established in 2021.


– **Digital Inclusiveness Program Launch**: A new initiative targeting 1.6 million elderly citizens (4.8% of population) and 1.4 million people with disabilities (4.2% of population) to ensure government digital services are accessible and usable for everyone, emphasizing independence and ease of use.


– **Innovation Hub Infrastructure**: The Digital Government Authority operates comprehensive facilities including design studios, usability labs, learning experience labs, and emerging technology labs, along with programs like Kudratik training (400+ employees trained) and digital advisory services.


– **WCAG Standards and Regulatory Framework**: Discussion of following Web Content Accessibility Guidelines (WCAG) standards for government services, with questions raised about compliance requirements, regulatory oversight, and why private sector inclusion is limited compared to other countries.


– **Collaborative Testing and Implementation Process**: Government entities can reserve and utilize the inclusiveness labs to test their services and products, receiving detailed reports on current accessibility status and improvement recommendations for end users.


**Overall Purpose:**


The discussion aimed to present Saudi Arabia’s digital inclusiveness initiative as part of Vision 2030, showcasing how the Digital Government Authority is working to make government digital services accessible to elderly citizens and people with disabilities through specialized labs, testing facilities, and adherence to international accessibility standards.


**Overall Tone:**


The discussion maintained a formal, informative, and promotional tone throughout. It began as a structured presentation highlighting achievements and initiatives, then shifted to a more interactive Q&A format where the presenter addressed technical questions about regulations, standards, and implementation. The tone remained consistently professional and collaborative, with the presenter expressing appreciation for questions and emphasizing partnership approaches to digital transformation.


Speakers

– **Fawaz Alanazi**: Works with the Digital Government Authority in Saudi Arabia, presenting on digital inclusiveness initiatives and digital transformation programs


– **Audience**: Multiple audience members asking questions during the session, including:


– Adil Hussain from Norwegian Digitalization Agency and the Norwegian Authority for Universal Design of ICT


– An Arabic-speaking questioner (name appears to be Qusay based on the response)


**Additional speakers:**


– **Ms. Bayan Alghuraybi**: Mentioned at the beginning as a co-presenter but no actual speaking content is provided in the transcript


– **Dr. Areej Alfaraj**: Mentioned multiple times in the transcript but no actual speaking content is provided


Full session report

# Discussion Report: Saudi Arabia’s Digital Inclusiveness Initiative


## Executive Summary


This session featured a presentation by Fawaz Alanazi from Saudi Arabia’s Digital Government Authority on the country’s digital inclusiveness initiative. The presentation covered Saudi Arabia’s digital government achievements, infrastructure for supporting accessibility, and programmes targeting elderly citizens and people with disabilities. The session included a Q&A portion with questions from Adil Hussain of the Norwegian Digitalisation Agency and Norwegian Authority for Universal Design of ICT, as well as an Arabic-speaking participant named Qusay.


*Note: The transcript contains technical issues with repetitive text and incomplete sections, particularly regarding contributions from co-presenters Ms. Bayan Alghuraybi and Dr. Areej Alfaraj, which affects the completeness of this summary.*


## Key Participants


– **Fawaz Alanazi** – Digital Government Authority, Saudi Arabia (primary presenter)


– **Adil Hussain** – Norwegian Digitalisation Agency and Norwegian Authority for Universal Design of ICT


– **Qusay** – Arabic-speaking participant


– **Ms. Bayan Alghuraybi and Dr. Areej Alfaraj** – Listed as co-presenters but their contributions are not captured in the available transcript


## Saudi Arabia’s Digital Government Achievements


Alanazi presented Saudi Arabia’s international rankings in digital government:


– First place regionally in the ESCWA index


– Third place globally in the World Bank’s Government Tech Maturity Index


– Sixth place in the UN E-Government Development Index


These achievements support Saudi Arabia’s Vision 2030 initiative. The Digital Government Authority, established in 2021, serves as the central coordinating body for the country’s digital transformation efforts.


## Digital Inclusiveness Programme Context


The programme addresses the needs of specific demographic groups in Saudi Arabia:


– 1.6 million elderly citizens (4.8% of the 35 million population)


– 1.4 million people with disabilities (4.2% of the population)


– Elderly population expected to increase by 11% in 2023


Alanazi explained the programme’s philosophy: “what we mean by inclusiveness is to ensure the independency and ease of use of digital services for everyone, while inclusiveness aims not to digitise the services, but rather to ensure they are designed to include everyone.”


The programme follows three main objectives:


1. Increasing user independence


2. Aligning with international accessibility standards (WCAG)


3. Humanising the service experience


## Innovation Hub Infrastructure


The Digital Government Authority operates an innovation hub providing multiple services:


– Digital advisory services


– Training programmes (including the “Kudratik” programme that has trained over 400 government employees)


– Awards recognition


– Emerging technology initiatives


The hub includes specialised laboratories:


– Design studios


– Usability labs for user experience testing


– Learning experience labs


– Emerging technology labs


## Laboratory Operations


The digital inclusiveness laboratories operate on a reservation-based system where government entities can book sessions to test their services and products against accessibility standards. Following each testing session, the laboratories provide comprehensive reports with assessments and recommendations for improvements.


## Questions and Responses


### Norwegian Participant Questions


Adil Hussain posed four specific technical questions:


1. **Regulatory authority for business compliance**: Asked about mechanisms for ensuring business compliance with accessibility requirements.


2. **WCAG requirements**: Inquired how many of the 78 WCAG success criteria public sector bodies must follow.


3. **Private sector inclusion**: Questioned why private sector entities are not included in the digital accessibility legislation, particularly given Saudi Arabia’s high e-government ranking.


4. **Scope of digital services**: Asked what types of digital services the legislation covers.


### Saudi Response


Alanazi’s responses mentioned:


– Use of an e-participation platform for citizen feedback


– The SADF initiative for public-private partnerships in digital service development


– Collaborative approaches with private entities rather than mandatory compliance


– Following WCAG standards “precisely”


However, the responses did not provide detailed answers to all the specific technical questions raised.


### Arabic Language Question


Qusay asked about the laboratory experience and procedures. Alanazi responded by explaining the reservation system for accessing the labs and mentioned that final reports are provided after testing sessions.


## Programme Implementation


The digital inclusiveness programme addresses various types of impairments including cognitive, mobility, visual, and hearing challenges. The approach emphasises designing services to be inherently accessible rather than retrofitting existing services.


The programme aligns with Web Content Accessibility Guidelines (WCAG) standards to ensure Saudi Arabia’s digital services meet internationally recognised accessibility criteria.

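Parts of the WCAG alignment described above can be checked automatically. As a minimal sketch (not drawn from the session, and covering only one of WCAG's many success criteria), the following Python snippet flags `<img>` elements that lack a text alternative, the failure targeted by success criterion 1.1.1 (Non-text Content); a real conformance assessment covers the remaining criteria and still requires manual review.

```python
# Minimal sketch: detect one common WCAG 1.1.1 failure -- images without
# a non-empty alt attribute. This is an illustration only, not a full
# accessibility audit.
from html.parser import HTMLParser


class MissingAltChecker(HTMLParser):
    """Collects <img> tags whose alt attribute is missing or empty."""

    def __init__(self):
        super().__init__()
        self.violations = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attr_map = dict(attrs)
            if not attr_map.get("alt"):  # missing or empty alt text
                self.violations.append(attr_map.get("src", "<no src>"))


def check_alt_text(html: str) -> list:
    """Return the src of every <img> in `html` lacking alt text."""
    checker = MissingAltChecker()
    checker.feed(html)
    return checker.violations


page = """
<img src="logo.png" alt="Digital Government Authority logo">
<img src="banner.png">
"""
print(check_alt_text(page))  # → ['banner.png']
```

Automated checks like this catch only a fraction of the 78 success criteria raised in the Q&A; criteria involving meaning (e.g. whether alt text is actually descriptive) still need human evaluation, which is what usability labs such as those described here provide.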

## Conclusion


The session provided an overview of Saudi Arabia’s systematic approach to digital inclusiveness, highlighting the country’s infrastructure investment and commitment to international standards. The Q&A session revealed interest from international participants in understanding the technical and regulatory details of implementation, though some specific questions about compliance mechanisms and private sector involvement were not fully addressed in the available transcript.


Session transcript

Fawaz Alanazi: and Ms. Bayan Alghuraybi. Hi, everyone, and welcome to this session where we talk about one of the latest initiatives that have been recently launched by the Digital Government Authority in Saudi Arabia, which is the digital inclusiveness in the Kingdom of Saudi Arabia to enable people with disability and elderly. Today we’ll talk about the overall of the digital transformation and innovation in Kingdom of Saudi Arabia. Then we’ll talk about the digital inclusiveness program with its usability lab, and we’ll end up with some success stories from digital inclusiveness initiatives. If you have any question, you may ask at the end of this session. To start with, actually, the digital government or the Vision 2030 prioritized digital transformation and innovation to ensure the successful achievement of the national goals of vibrant society, thriving economy, as well as ambitious nation. And because of that, as you can see, Saudi Arabia is ranked top in the government global indexes. For example, it’s ranked first regionally in the ESCOA, which is the Government Electronic and Mobile Service Index. It’s ranked number third globally in the Government Tech Maturity Index by the World Bank, and it’s ranked number six globally in the E-Government Development Index, as well as number eighth in the Waseda Index as digital government most progressing country. The Digital Government Authority, who was established in 2021, to play a main role of organizing the digital landscape in Saudi Arabia in collaboration with the government entities. The Digital Government Authority, as a result, have established an innovation hub for the sake of fostering innovation, as well as enable government entities to design and test their creative solution. What we have or provide inside the innovation hub? First of all, there are some programs and initiatives falls under the innovation hub. 
Firstly, we provide digital advisory and study where we consult a government entity in terms of digital transformation agenda, as well as enriching the digital content with the latest digital study. At the same time, one of the success story is the training program, which is called Kudratik. We have trained more than 400 employees among whether they are professional or executive employees in different government entities. At the same time, we are providing awards and digital competitiveness program to incentivize government entity to reshape their digital solutions and products to their end user. At the same time, we provide innovation and experience design, and lastly, emerging technology program to make some proof of concept of the latest technology and examine that in collaboration with the government entities to see to what extent the technology can bring digital solution for different government entities. Secondly, we provide some labs and spaces. These labs include design studio, usability lab, learning experience lab, emerging technology lab, as well as experiment or ex-reality lab. For example, in the learning experience, we are observing the current tools and techniques being provided as learning techniques, and we reshape those techniques inside the observation room, as well as in the emerging technology lab, we are examining to what extent some emerging technologies, for example, digital twin can play a vital role when it comes to the digital transformation and innovation. Now let’s talk about our initiative that have been recently launched by digital government authority, which is digital inclusivity program. As you can see, before we start, there is a need for such initiative. More than 1.6 million elder population, which equals to 4.8 of the total population, is of 35 million, and this number is expected to increase by 11% in 2023. 
At the same time, 1.4 million people with disability in Kingdom of Saudi Arabia, which is equal to 4.2% of the total population of 35 millions. Some examples of those disability include cognitive impairments, mobility impairments, visual impairments, and other physical difficulties and hearing impairments. So because of that, the digital government authority have launched the digital inclusivity program to advance the government excellence in the digital inclusivity and accessibility of government services. It provides essential tools and enablers to support digital integration and foster innovation in the delivery of government services. And there are a couple of things I would like to highlight here. Firstly, what we mean by inclusiveness is to ensure the independency and ease of use of digital services for everyone, while inclusiveness aims not to digitize the services, but rather to ensure they are designed to include everyone. The objective of digital inclusivity program includes increasing the independence rate of users in accessing government services by designing services that are accessible and easy to use for all groups, including elderly and persons with disability. And secondly, aligning with the accessibility standard means making sure that the digital services and platform follow recognized guidelines like WCAG to ensure they are usable for everyone, including people with disability. Thirdly, humanizing the service experience for the sake of providing accessible services to everyone.
Yes, this is Adil Hussain from Norwegian Digitalization Agency and the Norwegian Authority for Universal


Audience: Design of ICT in other part of the world. So, the term used is digital accessibility and digital inclusiveness. But in Norway, we use the term universal design and that’s why we have authority for universal design of ICT Norway. So, my question is about, do you have, I have actually three questions. Then question one is regulatory authority that regulate the businesses if they compliance with the national legislation. And my second question is about, you mentioned you are following WCAG standards and WCAG have like 78 requirements, 78 success criteria. So, how many of these requirements businesses are obliged to follow? My third question is like you mentioned, the legislation only applies to public sector bodies. So, in Saudi Arabia, I see it’s a sixth rank in the UN e-government index. So, my third question is why not private sectors are included? And maybe I have one more question. It’s about, you mentioned digital services. So, what kind of digital services the legislation applies to? Is it only websites or mobile applications or self-service machines, maybe digital documents like PDF, EPUBs or other documents? Thanks.


Fawaz Alanazi: Thank you. Let’s answer the first question about the regulations. Actually, there is what we call it a platform. It’s called e-participation where people can send their feedback about the current services and products being provided by government entities. So, the end user with our citizens, visitors can reshape the future of the government in Saudi Arabia. So, in terms of regulations, the government, digital government authority works hand-in-hand with other government entities to ensure the latest legislation when it comes to digital transformation and innovation. The second question, I believe, or it was about the ranking of Saudi Arabia as the sixth in the UN. It’s a success story that we brought here to ensure collaboration. It’s all about collaboration. Digital government authority have worked with all government entities to ensure there is no one left behind when it comes to providing a digital service or delivery of digital services to the end user. I think the third question was about the naming


Audience: of whether it’s called accessibility or… My third question was actually the number of requirements businesses, public sector bodies have to follow. In WCAG, you mentioned WCAG. So, WCAG actually has 78 success criteria. So, how many requirements public sector bodies in Saudi have to follow? And fourth question was why the legislation not applies to private sector bodies because, as you mentioned, Saudi Arabia ranks sixth in the UN e-government ranking. So, because I know that private sector… private sector, they also develop services that government supply again to their citizens. So, that’s why I’m asking question why it don’t apply to private sector. Two


Fawaz Alanazi: questions, basically. I got it. For the private sectors, actually, we have an initiative called SADF, a digital government authority, which includes the PP or three PP, where there is a collaboration between government entities and private sectors to design a digital service for the government entity or on behalf of government entities by the private sectors. So, there is an initiative. We have it. It’s called SADF for such goal. For the requirements, as far as I know, we are following precisely the WCAG standards for the time being. Thank you. The question is, how do you see the experience of digital exclusiveness lab? And how do the procedures start to reach the final product as an initiative? And then, in this context, is it considered one of the accelerators of the application of technology in the Kingdom of Saudi Arabia? Thank you. May God bless you, my brother Qusay. Thank you for your question. The experience, we may have exhibited the labs provided by the digital government authority to the government entities as a space to test the services and current products. We may even have exhibited the visionary lab. The visionary lab is where the observation, care and services and products are carried out for all government services. So, the request starts from the government side. They make the reservation to visit the center and test the current services available to them and examine the observation, testing environment inside these labs. In light of this, the final report will be shared on the current situation of the offerings of services and products and how they will be improved for the end users later on. I hope I answered your question. I think we don’t have another question. Thank you again for attending such session. Thank you so much. Thank you. Thank you.


F

Fawaz Alanazi

Speech speed

73 words per minute

Speech length

1450 words

Speech time

1180 seconds

Saudi Arabia’s Digital Transformation and Government Rankings – Saudi Arabia ranks highly in global digital government indexes, including first regionally in the ESCWA index and third globally in the Government Tech Maturity Index

Explanation

Saudi Arabia has achieved top rankings in multiple international digital government assessments, demonstrating its success in digital transformation initiatives. These rankings reflect the country’s commitment to prioritizing digital innovation as part of Vision 2030.


Evidence

Ranked first regionally in the ESCWA index (Government Electronic and Mobile Service Index), third globally in Government Tech Maturity Index by World Bank, sixth globally in E-Government Development Index, and eighth in Waseda Index as digital government most progressing country


Major discussion point

Saudi Arabia’s digital government achievements and international recognition


Topics

Development | Infrastructure


Saudi Arabia’s Digital Transformation and Government Rankings – Digital transformation is prioritized under Vision 2030 to achieve national goals of vibrant society, thriving economy, and ambitious nation

Explanation

The Saudi government has made digital transformation a central pillar of its Vision 2030 strategy. This prioritization aims to support the achievement of three key national objectives across social, economic, and governance dimensions.


Evidence

Vision 2030 prioritized digital transformation and innovation to ensure successful achievement of national goals of vibrant society, thriving economy, and ambitious nation


Major discussion point

Strategic importance of digital transformation in Saudi Arabia’s national development


Topics

Development | Economic


Digital Government Authority’s Innovation Hub and Services – The Digital Government Authority was established in 2021 to organize Saudi Arabia’s digital landscape in collaboration with government entities

Explanation

A dedicated government authority was created to coordinate and organize the digital transformation efforts across Saudi Arabia. The authority works collaboratively with various government entities to ensure cohesive digital development.


Evidence

Digital Government Authority established in 2021 to play main role of organizing digital landscape in Saudi Arabia in collaboration with government entities


Major discussion point

Institutional framework for digital governance in Saudi Arabia


Topics

Legal and regulatory | Infrastructure


Digital Government Authority’s Innovation Hub and Services – The innovation hub provides programs including digital advisory, training (Kudratik program training 400+ employees), awards, and emerging technology initiatives

Explanation

The Digital Government Authority operates comprehensive programs to support digital transformation across government entities. These programs include consulting services, capacity building, incentive mechanisms, and technology exploration initiatives.


Evidence

Kudratik training program trained more than 400 employees among professional or executive employees in different government entities; digital advisory and study services; awards and digital competitiveness program; emerging technology program for proof of concept


Major discussion point

Comprehensive support services for digital transformation


Topics

Development | Infrastructure


Digital Government Authority’s Innovation Hub and Services – Multiple specialized labs are available including design studio, usability lab, learning experience lab, and emerging technology lab for testing and development

Explanation

The authority has established various specialized laboratory facilities to support different aspects of digital service development and testing. These labs provide dedicated spaces for experimentation, user testing, and technology evaluation.


Evidence

Labs include design studio, usability lab, learning experience lab, emerging technology lab, and ex-reality lab; learning experience lab observes and reshapes learning techniques; emerging technology lab examines technologies like digital twin


Major discussion point

Infrastructure for digital innovation and testing


Topics

Infrastructure | Development


Digital Inclusivity Program for Elderly and People with Disabilities – Saudi Arabia has 1.6 million elderly (4.8% of population) and 1.4 million people with disabilities (4.2% of population) who need accessible digital services

Explanation

There is a significant demographic need for accessible digital services in Saudi Arabia, with substantial populations of elderly citizens and people with disabilities. This demographic data demonstrates the importance of designing inclusive digital services.


Evidence

1.6 million elder population (4.8% of 35 million total), expected to increase by 11% in 2023; 1.4 million people with disability (4.2% of total population); disabilities include cognitive, mobility, visual, physical, and hearing impairments


Major discussion point

Demographic justification for digital inclusivity initiatives


Topics

Human rights | Development


Digital Inclusivity Program for Elderly and People with Disabilities – The digital inclusivity program aims to ensure independence and ease of use of digital services for everyone, following WCAG accessibility standards

Explanation

The program focuses on making digital services accessible and usable for all citizens, particularly elderly and disabled populations. It emphasizes user independence and follows international accessibility guidelines to ensure comprehensive coverage.


Evidence

Program aims to advance government excellence in digital inclusivity and accessibility; follows WCAG standards; inclusiveness means ensuring independency and ease of use for everyone; services designed to include everyone rather than just digitize


Major discussion point

Comprehensive approach to digital accessibility


Topics

Human rights | Development


Digital Inclusivity Program for Elderly and People with Disabilities – The program focuses on increasing user independence, aligning with accessibility standards, and humanizing service experiences

Explanation

The program has three main objectives that work together to create truly inclusive digital services. These objectives ensure both technical compliance and user-centered design approaches.


Evidence

Objectives include: increasing independence rate of users in accessing government services; aligning with accessibility standards like WCAG; humanizing service experience for accessible services to everyone


Major discussion point

Multi-faceted approach to digital inclusivity


Topics

Human rights | Development


Regulatory Framework and Implementation Questions – Response indicates collaboration through e-participation platform for citizen feedback and SADF initiative for public-private partnerships in digital services

Explanation

The government has established mechanisms for citizen engagement and private sector collaboration in digital service delivery. These initiatives provide channels for feedback and partnership in developing government digital services.


Evidence

E-participation platform where citizens and visitors can send feedback about current services; SADF initiative includes public-private partnerships where private sectors design digital services for government entities


Major discussion point

Collaborative governance and public-private partnerships in digital services


Topics

Legal and regulatory | Development


Digital Inclusiveness Lab Operations and Procedures – Labs serve as testing spaces for government entities, with reservation-based visits and comprehensive reporting on service improvements

Explanation

The digital inclusiveness labs operate through a structured process where government entities can book sessions to test their services and products. The process includes observation, testing, and detailed reporting with recommendations for improvement.


Evidence

Government entities make reservations to visit the center and test current services; observation and testing environment provided; final reports shared on current situation and how services will be improved for end users


Major discussion point

Operational procedures for digital inclusivity testing and improvement


Topics

Infrastructure | Development


A

Audience

Speech speed

114 words per minute

Speech length

293 words

Speech time

153 seconds

Regulatory Framework and Implementation Questions – Questions raised about regulatory authority for business compliance, specific WCAG requirements implementation, and why private sector isn’t included in legislation

Explanation

An audience member from Norway raised several technical questions about the regulatory framework, comparing it to international practices. The questions focused on enforcement mechanisms, technical requirements, and scope of coverage.


Evidence

Questions about regulatory authority for business compliance; WCAG has 78 success criteria – how many must businesses follow; why legislation only applies to public sector when Saudi Arabia ranks sixth in UN e-government index; what types of digital services are covered


Major discussion point

International comparison and technical details of regulatory framework


Topics

Legal and regulatory | Human rights


Digital Inclusiveness Lab Operations and Procedures – Questions about lab experience procedures and role as technology accelerator in Saudi Arabia

Explanation

An audience member inquired about the practical operations of the digital inclusiveness lab and its broader impact on technology adoption. The question sought to understand both the procedural aspects and strategic significance of the lab.


Evidence

Questions about the experience of the digital inclusiveness lab, procedures from start to final product, and whether it is considered a technology accelerator in Saudi Arabia


Major discussion point

Operational details and strategic impact of digital inclusivity labs


Topics

Infrastructure | Development


Agreements

Agreement points

Importance of comprehensive digital inclusivity infrastructure and testing procedures

Speakers

– Fawaz Alanazi
– Audience

Arguments

Multiple specialized labs are available including design studio, usability lab, learning experience lab, and emerging technology lab for testing and development


Labs serve as testing spaces for government entities, with reservation-based visits and comprehensive reporting on service improvements


Questions about lab experience procedures and role as technology accelerator in Saudi Arabia


Summary

Both the presenter and audience members recognize the value and importance of having dedicated laboratory facilities for testing and improving digital inclusivity. The audience’s detailed questions about lab operations demonstrate engagement with and validation of the infrastructure approach.


Topics

Infrastructure | Development


Need for regulatory framework and standards compliance in digital accessibility

Speakers

– Fawaz Alanazi
– Audience

Arguments

The digital inclusivity program aims to ensure independence and ease of use of digital services for everyone, following WCAG accessibility standards


The program focuses on increasing user independence, aligning with accessibility standards, and humanizing service experiences


Questions raised about regulatory authority for business compliance, specific WCAG requirements implementation, and why private sector isn’t included in legislation


Summary

Both speakers acknowledge the importance of following established accessibility standards like WCAG and having proper regulatory frameworks. The audience’s technical questions about implementation details show agreement with the need for standards-based approaches.


Topics

Legal and regulatory | Human rights


Similar viewpoints

Both recognize the importance of involving multiple stakeholders (citizens, private sector, government) in digital service delivery and the need for comprehensive regulatory approaches that extend beyond just public sector entities.

Speakers

– Fawaz Alanazi
– Audience

Arguments

Response indicates collaboration through e-participation platform for citizen feedback and SADF initiative for public-private partnerships in digital services


Questions raised about regulatory authority for business compliance, specific WCAG requirements implementation, and why private sector isn’t included in legislation


Topics

Legal and regulatory | Development


Both acknowledge that digital inclusivity initiatives serve broader strategic purposes beyond just compliance, acting as drivers for overall technological advancement and social development.

Speakers

– Fawaz Alanazi
– Audience

Arguments

Digital Inclusivity Program for Elderly and People with Disabilities – Saudi Arabia has 1.6 million elderly (4.8% of population) and 1.4 million people with disabilities (4.2% of population) who need accessible digital services


Questions about the experience of the digital inclusiveness lab, procedures from start to final product, and whether it’s considered a technology accelerator in Saudi Arabia


Topics

Human rights | Development


Unexpected consensus

International benchmarking and comparison of digital accessibility approaches

Speakers

– Fawaz Alanazi
– Audience

Arguments

Saudi Arabia’s Digital Transformation and Government Rankings – Saudi Arabia ranks highly in global digital government indexes, including first regionally in ESCWA and third globally in the Government Tech Maturity Index


Questions raised about regulatory authority for business compliance, specific WCAG requirements implementation, and why private sector isn’t included in legislation


Explanation

The unexpected consensus emerged around the value of international comparison and learning from global best practices. The Norwegian audience member’s detailed technical questions and the presenter’s positive reception of international ranking achievements show mutual appreciation for cross-border knowledge sharing in digital accessibility approaches.


Topics

Legal and regulatory | Development


Overall assessment

Summary

The discussion showed strong consensus around the fundamental importance of digital inclusivity infrastructure, standards-based approaches to accessibility, and multi-stakeholder collaboration. Both the presenter and audience members demonstrated shared understanding of the technical and regulatory complexities involved in implementing comprehensive digital accessibility programs.


Consensus level

High level of consensus with constructive engagement. The audience’s detailed technical questions indicated validation of the approach rather than disagreement, and the presenter’s responses showed openness to international comparison and learning. This consensus suggests strong potential for international collaboration and knowledge sharing in digital accessibility initiatives, with implications for broader adoption of similar comprehensive approaches globally.


Differences

Different viewpoints

Terminology and regulatory approach for digital accessibility

Speakers

– Fawaz Alanazi
– Audience

Arguments

The program focuses on increasing user independence, aligning with accessibility standards, and humanizing service experiences


Questions raised about regulatory authority for business compliance, specific WCAG requirements implementation, and why private sector isn’t included in legislation


Summary

The audience member from Norway highlighted differences in terminology (digital accessibility vs universal design) and questioned the regulatory framework’s scope and enforcement mechanisms, while the Saudi presenter focused on their current approach without addressing these regulatory gaps


Topics

Legal and regulatory | Human rights


Scope of digital inclusivity legislation coverage

Speakers

– Fawaz Alanazi
– Audience

Arguments

Response indicates collaboration through e-participation platform for citizen feedback and SADF initiative for public-private partnerships in digital services


Questions raised about regulatory authority for business compliance, specific WCAG requirements implementation, and why private sector isn’t included in legislation


Summary

The audience questioned why the private sector is excluded from digital accessibility legislation despite Saudi Arabia’s high e-government ranking, while the Saudi response focused on voluntary partnerships rather than mandatory compliance for private entities


Topics

Legal and regulatory | Development


Unexpected differences

International terminology and approach differences

Speakers

– Fawaz Alanazi
– Audience

Arguments

The digital inclusivity program aims to ensure independence and ease of use of digital services for everyone, following WCAG accessibility standards


Questions raised about regulatory authority for business compliance, specific WCAG requirements implementation, and why private sector isn’t included in legislation


Explanation

The Norwegian audience member’s emphasis on ‘universal design’ terminology versus Saudi Arabia’s ‘digital inclusivity’ approach revealed unexpected international differences in conceptual frameworks for the same goals, suggesting varying national approaches to accessibility policy


Topics

Legal and regulatory | Human rights


Overall assessment

Summary

The main disagreements centered around regulatory framework comprehensiveness, private sector inclusion, and specific implementation details of accessibility standards


Disagreement level

Moderate disagreement level with significant implications – the questions raised by the international audience member highlighted potential gaps in Saudi Arabia’s regulatory approach, particularly regarding private sector compliance and specific technical requirements, which could affect the overall effectiveness of digital inclusivity initiatives


Takeaways

Key takeaways

Saudi Arabia has achieved significant digital government rankings globally, including 1st regionally in ESCWA and 3rd globally in the Government Tech Maturity Index, supporting Vision 2030 goals


The Digital Government Authority established in 2021 operates an innovation hub with multiple specialized labs and programs, including the successful Kudratik training program that has trained 400+ government employees


Digital inclusivity is a critical need in Saudi Arabia with 1.6 million elderly (4.8% of population) and 1.4 million people with disabilities (4.2% of population) requiring accessible digital services


The digital inclusivity program focuses on ensuring independence and ease of use for all users, following WCAG accessibility standards and humanizing service experiences


Government entities can reserve and utilize the digital inclusiveness labs for testing services and products, receiving comprehensive reports on improvements needed for end users


Collaboration between government and private sector exists through initiatives like SADF and public-private partnerships for digital service development


Resolutions and action items

Government entities can make reservations to visit and test their services in the digital inclusiveness labs


Final reports will be shared with government entities on current service status and improvement recommendations


Continued collaboration through e-participation platform for citizen feedback on government services


Unresolved issues

Specific number of WCAG requirements (out of 78 success criteria) that Saudi public sector bodies must follow was not clearly answered


Why legislative compliance is not mandated for the private sector despite Saudi Arabia’s high e-government ranking remains unclear


Scope of digital services covered by legislation (websites, mobile apps, self-service machines, digital documents) was not specified


Details about regulatory authority mechanisms for ensuring business compliance were not fully explained


Suggested compromises

None identified


Thought provoking comments

So, the term used is digital accessibility and digital inclusiveness. But in Norway, we use the term universal design and that’s why we have authority for universal design of ICT Norway.

Speaker

Adil Hussain (Norwegian Digitalization Agency)


Reason

This comment is insightful because it introduces a comparative international perspective and highlights how different countries conceptualize the same fundamental challenge. The distinction between ‘digital accessibility/inclusiveness’ versus ‘universal design’ represents different philosophical approaches – one focusing on accommodation and the other on inherent design principles that work for everyone from the start.


Impact

This comment shifted the discussion from a purely Saudi-focused presentation to a more comparative, international dialogue. It established a framework for cross-cultural learning and prompted deeper questions about implementation approaches, regulatory frameworks, and scope of coverage.


WCAG have like 78 requirements, 78 success criteria. So, how many of these requirements businesses are obliged to follow?… why not private sectors are included?

Speaker

Adil Hussain


Reason

This series of questions is particularly thought-provoking because it challenges the scope and depth of Saudi Arabia’s digital inclusivity initiative. By asking about specific compliance requirements and private sector inclusion, it exposes potential gaps in coverage and implementation that could limit the program’s overall effectiveness.


Impact

These questions forced the presenter to address practical implementation challenges and revealed some limitations in the current approach. The questions about private sector inclusion particularly highlighted a significant gap, as private companies often develop services used by government entities, creating potential accessibility bottlenecks.


what we mean by inclusiveness is to ensure the independency and ease of use of digital services for everyone, while inclusiveness aims not to digitize the services, but rather to ensure they are designed to include everyone.

Speaker

Fawaz Alanazi


Reason

This distinction is philosophically important as it clarifies that digital inclusivity isn’t just about making services digital, but about ensuring digital services are inherently accessible. This represents a mature understanding that technology alone doesn’t solve accessibility challenges – intentional inclusive design does.


Impact

This comment established the conceptual foundation for the entire discussion and helped frame the Saudi approach as being focused on design principles rather than just technological implementation. It set the stage for more nuanced questions about how this philosophy translates into practice.


Overall assessment

The key comments fundamentally transformed what began as a straightforward presentation into a more complex, internationally-informed dialogue about digital inclusivity approaches. Adil Hussain’s interventions were particularly impactful, introducing comparative perspectives that challenged the presenters to think beyond their national context and address practical implementation gaps. His questions about private sector inclusion, specific compliance requirements, and terminology differences elevated the discussion from descriptive to analytical, forcing deeper examination of policy choices and their implications. The presenter’s philosophical distinction about inclusiveness provided important conceptual grounding, but the international perspective revealed areas where the Saudi approach might be strengthened, particularly regarding private sector engagement and comprehensive regulatory frameworks.


Follow-up questions

What is the regulatory framework for enforcing digital accessibility compliance in businesses and public sector bodies in Saudi Arabia?

Speaker

Adil Hussain (Norwegian Digitalization Agency)


Explanation

Understanding the enforcement mechanisms is crucial for ensuring actual implementation of accessibility standards rather than just guidelines


How many of the 78 WCAG success criteria are public sector bodies in Saudi Arabia obligated to follow?

Speaker

Adil Hussain (Norwegian Digitalization Agency)


Explanation

This clarifies the specific scope and depth of accessibility requirements that organizations must meet


Why doesn’t the digital accessibility legislation apply to private sector organizations in Saudi Arabia?

Speaker

Adil Hussain (Norwegian Digitalization Agency)


Explanation

Given that private sectors often develop services for government use and Saudi Arabia’s high e-government ranking, understanding this gap is important for comprehensive digital inclusion


What specific types of digital services does the accessibility legislation cover (websites, mobile apps, self-service machines, digital documents like PDFs, EPUBs)?

Speaker

Adil Hussain (Norwegian Digitalization Agency)


Explanation

Defining the scope of covered digital services is essential for organizations to understand their compliance obligations


How can the digital inclusiveness lab experience be leveraged as an accelerator for technology application in Saudi Arabia?

Speaker

Qusay (implied from Arabic question)


Explanation

Understanding the broader impact and scalability of the lab’s work could help maximize its contribution to national digital transformation goals


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

WS #290 Sovereignty and Interoperable Digital Identity in DLDCs

WS #290 Sovereignty and Interoperable Digital Identity in DLDCs

Session at a glance

Summary

This discussion focused on sovereignty and interoperable digital identity in developing countries, particularly within Africa, as part of a workshop at the Internet Governance Forum in Lisbon. The session brought together representatives from various organizations including the OpenID Foundation, SIDI Hub, AFICTA (Africa ICT Alliance), and government officials from Nigeria and Benin, along with experts from Norway and Japan.


The speakers emphasized that digital identity is critical for closing the digital divide and enabling economic development across Africa. Dr. Jimson Olufuye from AFICTA highlighted how digital identity connects to the World Summit on the Information Society (WSIS) goals and the Global Digital Compact, stressing that “if you cannot identify anybody, it means the person does not really exist” in terms of digital inclusion. Representatives from Benin and Nigeria shared concrete examples of their national digital identity implementations, with Nigeria’s NIMC system integrating national identity numbers across banking, telecommunications, education, and government services, while Benin’s platform enables free movement within ECOWAS countries.


Technical experts from SIDI Hub and the OpenID Foundation discussed the complexity of transitioning from physical to digital identity documents while maintaining security and privacy properties. They emphasized the need for collaboration between public and private sectors, standardization efforts, and respect for national sovereignty in policy frameworks. The Norwegian representative shared experiences from Nordic-Baltic cooperation, highlighting challenges in cross-border identity recognition even among similar countries.


Key themes included the importance of meeting countries at their level of digital readiness, establishing trust frameworks for cross-border interoperability, addressing data sovereignty concerns, and bridging the digital divide through affordable technology and digital literacy programs. The discussion concluded with calls for concrete pilot projects and continued intersessional work to advance regional digital identity integration, particularly supporting the African Continental Free Trade Area objectives.


Keypoints

## Major Discussion Points:


– **Digital Identity Infrastructure and Interoperability**: The workshop focused extensively on building sustainable, interoperable digital identity systems that can work across borders while maintaining security and privacy. Speakers discussed the technical challenges of transitioning from physical documents to digital credentials while preserving their essential properties and trust mechanisms.


– **Regional Cooperation and Cross-Border Implementation**: Significant attention was given to regional initiatives, particularly in Africa through ECOWAS and the African Union framework, as well as Nordic-Baltic cooperation. The discussion emphasized meeting countries “where they are” in terms of digital readiness and creating pilot programs between neighboring countries to test cross-border interoperability.


– **Sovereignty and Data Protection**: A central theme was balancing digital identity interoperability with national sovereignty and data protection. Speakers emphasized the importance of keeping data within national or regional boundaries, respecting local laws and regulations, and ensuring that cross-border systems don’t compromise national control over citizen data.


– **Practical Use Cases and Implementation**: The workshop highlighted concrete applications of digital identity systems, including education credentials, banking services, healthcare access, and travel documents. Examples from Nigeria’s NIN system, Benin’s “It’s Me” card, Japan’s student railway discount system, and Norway’s cross-border services demonstrated real-world implementations.


– **Bridging the Digital Divide and Ensuring Inclusion**: Discussion addressed challenges of digital literacy, internet access, affordable devices, and energy infrastructure, particularly in rural African populations. Speakers emphasized the need for digital identity systems to be inclusive and not leave behind populations lacking digital access or literacy.


## Overall Purpose:


The workshop aimed to explore how to achieve sovereignty-respecting, interoperable digital identity systems, particularly for developing countries. The goal was to share best practices, identify practical next steps for intersessional work, and demonstrate how organizations like AFICTA and SIDI Hub could collaborate to advance digital identity infrastructure that supports economic development, regional integration, and citizen services while maintaining national sovereignty.


## Overall Tone:


The discussion maintained a collaborative and constructive tone throughout, characterized by mutual respect and shared learning among international participants. Speakers were enthusiastic about sharing their experiences and genuinely interested in learning from others’ implementations. The tone was professional yet accessible, with technical experts making complex concepts understandable. There was a strong sense of urgency and optimism about the potential for digital identity to transform citizen services and regional cooperation, balanced with realistic acknowledgment of the significant technical, policy, and infrastructure challenges that remain to be addressed.


Speakers

**Speakers from the provided list:**


– **Naohiro Fujie** – Chair of the OpenID Foundation Japan, working group co-chair of OpenID Foundation Global focusing on identity verification, supports Japanese government and educational institutes in digitalizing certificates and IDs


– **Kossi Amessinou** – From the Ministry of Economy and Finance in Benin, works on private sector monitoring and support, former Director of ICT in Benin’s Development Ministry


– **Tor Alvik** – Subject Director in the Agency for Public Management and e-Government for Norway, works for the Norwegian Digitalization Agency


– **Audience** – Online participant/questioner


– **Jimson Olufuye** – CEO of Contemporary Consulting (ICT firm based in Abuja, Nigeria), Chair of the Advisory Council of the Africa ICT Alliance (AFICTA), Founder and Foundation Chair


– **Debora Comparin** – Technical Director of the Secure Identity Alliance, co-founder of SIDI Hub, works on standardization and technical specifications for digital identity interoperability


– **Abisoye Coker Adusote** – Director General of the National Identity Management Commission (NIMC), Nigeria, Engineer


– **Moderator** – Gail Hodges, Executive Director of the OpenID Foundation, co-organizer of SIDI Hub


**Additional speakers:**


– **Stéphanie de Labriole** – Executive Director and Senior Program Manager at AFICTA, served as moderator for online participants


Full session report

# Workshop Report: Sovereignty and Interoperable Digital Identity in Developing Countries


## Executive Summary


This workshop, held as part of the Internet Governance Forum in Lisbon, was co-organized by SIDI Hub and the Africa ICT Alliance (AFICTA). The session brought together international experts to examine digital identity, sovereignty, and interoperability challenges, with particular focus on developing countries in Africa.


The discussion featured representatives from Nigeria’s National Identity Management Commission (NIMC), Benin’s Ministry of Economy and Finance, Norway’s Digitisation Agency, Japan’s OpenID Foundation, and various organizations working on digital identity standards. The workshop included interactive audience participation through Mentimeter polls and online Q&A, with Stephanie de Labriole serving as online moderator.


Key outcomes included commitments to establish working groups for policy mapping, implement pilot programs between neighboring countries, and continue intersessional work on technical standards and use cases.


## Key Presentations and Discussions


### Opening Framework: Digital Identity and Inclusion


Dr. Jimson Olufuye, CEO of Contemporary Consulting and Chair of AFICTA, established the session’s foundation by emphasizing that “identity is critical to closing the digital divide, because if you cannot identify anybody, it means the person does not really exist.” He connected digital identity directly to WSIS goals and the Global Digital Compact, framing it as fundamental to preventing people from being left behind in digital transformation.


Gail Hodges from the OpenID Foundation, serving as moderator, explained SIDI Hub’s role in working with over 45 countries on digital identity interoperability through three main working groups focusing on use cases (education, refugees, bank accounts), policy frameworks, and technology standards.


### National Implementation Examples


**Nigeria’s Comprehensive System**


Abisoye Coker Adusote, Director General of Nigeria’s NIMC, presented the country’s integrated approach where National Identity Numbers (NIN) are used across banking, telecommunications, education, and government services. The system supports practical applications including school feeding programs, student loan distribution, and census operations.


Nigeria’s approach addresses inclusion through government intervention programs and digital literacy initiatives. During enrollment drives, the system opens digital wallets for unbanked populations, ensuring financial inclusion alongside identity verification. Adusote emphasized the importance of biometric authentication and secure applications that enable individuals to protect their own data.


**Benin’s Regional Integration Model**


Dr. Kossi Amessinou from Benin’s Ministry of Economy and Finance presented the “It’s Me” card system enabling visa-free travel within ECOWAS. He also described the WURI project supported by the World Bank, which aims to facilitate regional integration through digital identity.


Amessinou strongly advocated for data sovereignty, stating that “data centres should be located within Africa to maintain data sovereignty,” reflecting concerns about maintaining control over citizen data and ensuring digital identity infrastructure serves African interests.


### Technical Challenges and International Perspectives


**Nordic Experience**


Tor Alvik from Norway’s Digitisation Agency shared experiences from Nordic-Baltic ICT cooperation that began in 2017. He highlighted that “the linkage of the identity you are coming from one country to the identity you have in the country providing the service is one of the main challenges we actually focus,” demonstrating that even technologically advanced countries with similar legal frameworks face fundamental interoperability challenges.


**Japan’s Digital Wallet Implementation**


Naohiro Fujie, Chair of the OpenID Foundation Japan, announced that Japan had “just yesterday” started providing national ID cards in Apple Wallet. He highlighted unique challenges in transitioning from physical to digital credentials, noting that “unlike physical documents, digital credentials have no difference between copy and original,” requiring new management policies for digital versions.


Fujie advocated for “starting with local standards whilst keeping global standards in mind,” emphasizing bottom-up development that respects local requirements while maintaining international interoperability potential.


### Privacy and Architecture Considerations


Debora Comparin, co-founder of SIDI Hub and contributor to standardization work, raised important privacy considerations by comparing physical and digital documents. She noted that physical documents provide certain privacy properties – they cannot be used to track users’ actions – and argued for maintaining these properties in digital systems.


The discussion addressed surveillance concerns from both government and private sectors. Interactive audience participation through Mentimeter polls explored questions about federated versus centralized systems and surveillance monitoring, with online participants contributing questions about balancing user privacy with fraud prevention.


## Infrastructure and Implementation Challenges


The workshop acknowledged significant practical challenges in achieving digital identity interoperability across Africa. Participants noted that some countries lack basic digital infrastructure, including reliable internet connectivity, sufficient energy infrastructure, and widespread digital literacy.


Adusote provided crucial insight: “There are countries that do not have the simple digital infrastructure. There is no data connectivity, and they have no energy or very little energy. There’s very little or no digital literacy awareness created.” This observation emphasized the need for graduated, flexible solutions that meet countries where they are in terms of digital readiness.


## Proposed Next Steps and Commitments


### Working Group Establishment


Participants committed to establishing working groups to identify gaps across sub-regions and regions, mapping trust frameworks and interoperability criteria. This would provide systematic assessment of readiness levels and identify champions across different African countries.


### Policy Mapping Initiative


Speakers committed to completing comprehensive policy mapping of regulations and legislation across countries before the next Internet Governance Forum, aiming to provide actionable frameworks for cross-border trust establishment.


### Pilot Program Implementation


The discussion identified specific pilot program opportunities between neighboring countries, including Nigeria-Cameroon, Nigeria-Niger, and Uganda-Kenya pairings. These pilots would test both technical interoperability and policy frameworks in real-world conditions.


### Intersessional Work Program


Participants emphasized maintaining momentum between IGF sessions through continued technical working groups, with broader community participation and regular progress reporting on use cases, policy frameworks, and technology standards.


## Key Consensus Areas


The workshop demonstrated strong agreement on several fundamental principles:


– Digital identity is essential for closing the digital divide and ensuring inclusion in digital transformation


– Data sovereignty and maintaining control over citizen data through appropriate legal frameworks and infrastructure placement is crucial


– Multi-stakeholder collaboration between government, private sector, technical community, and civil society is necessary


– Practical implementation through pilot programs and real-world use cases is needed rather than purely theoretical discussions


– Regional integration should leverage existing frameworks like ECOWAS and African Union initiatives


## Conclusion


The workshop established a foundation for continued international cooperation on digital identity interoperability, with concrete commitments for working groups, policy mapping, and pilot programs. The discussion balanced technical possibilities with practical constraints, emphasizing the need for solutions that respect sovereignty while enabling beneficial cross-border functionality.


The consensus on fundamental principles, combined with recognition of implementation challenges, provides a realistic framework for advancing digital identity interoperability in developing countries through graduated, collaborative approaches that prioritize inclusion and sovereignty.


Session transcript

Moderator: Great. So a warm welcome to the Sovereignty and Interoperable Digital Identity in DLDCs workshop here at the UN IGF in Lillestrøm, Norway. I am Gail Hodges, the Executive Director of the OpenID Foundation, a global open standards body that serves billions of users and millions of developers with identity standards. I’m also the co-organizer of SIDI Hub, one of the co-hosts of the workshop today. It’s a personal delight to be here representing the OpenID Foundation and SIDI Hub, and working in partnership with AFICTA. It’s quite serendipitous that the IGF MAG has brought us together for this joint session just a few months after we started convening. We found that we have quite a lot in common with our AFICTA partners, and it’s a real pleasure to co-present with them today. I’m joined here by a very impressive group of speakers, including Debora Comparin, Technical Director of the Secure Identity Alliance and co-founder of SIDI Hub; Naohiro Fujie, Chair of the OpenID Foundation Japan; Tor Alvik, Subject Director in the Agency for Public Management and e-Government for Norway; as well as, on my other side, Dr. Jimson Olufuye, CEO of Contemporary Consulting and Founder, Foundation Chair, and Chair of the Advisory Council of AFICTA; Dr. Kossi Amessinou, Private Sector Monitoring and Support, Ministry of the Economy and Finances, Benin; and the Director General and CEO of the National Identity Management Commission (NIMC), Nigeria, Abisoye Coker-Adusote. Also joining me as moderator for our online participants is Stéphanie de Labriolle, … AFICTA Senior Program Manager. I will invite all of our speakers to introduce themselves before they make their first comments. We have three objectives for our workshop session today.
Our first is to surface different viewpoints on sovereignty and interoperable digital identity in DLDCs: why it’s important, where we are seeing successes today, and how to bring that success to all countries. We will also explore how policymakers can leverage examples and shared experiences of successful interoperable digital identity implementations to inform and support identity interoperability in Africa, ultimately enhancing economic activity and promoting regional integration. We will also get into what practical steps we can take as a community during our intersessional work before the next IGF, and how to make those steps more inclusive for all. And we will look at how the tools of SIDI Hub, the Sustainable and Interoperable Digital Identity project, can help in the context of the African free trade area proposal, while ensuring that those tools safeguard national sovereignty. We welcome audience participation in our workshop, and please do have your mobile phones to hand, because shortly we will bring up a QR code for you to participate in our first question. With that, may I ask our technicians to please bring up the slide.
You’re invited to scan the QR code and answer our first question. All of these answers are wholly anonymous, and you can free-form type your response: why are sovereignty and interoperable digital identity important to our society and its digital transformation? I can see those in the room already picking up their phones and scanning the QR code. As soon as the first answers come in, they should pop up on the screen from both our online participants and those joining us in the room. We will use the same Mentimeter tool later in this session, so this is a good way to make sure it is working in advance. There you go, I see the first answers coming through: national security; identity as a fundamental right. Give it another moment, and then we’ll transition into our first set of speakers. Protection of critical and classified data; recognizing individuals in the digital world; protection; personal digital sovereignty. A lot of common themes, a lot of shared interests here. Okay, people can continue to answer the survey questions as they type them in, and we’ll take note of these responses as part of our rapporteur summary. So I think we can transition back to the holding slide and I’ll introduce our first speaker. Thank you very much. I’d like to introduce Dr. Jimson Olufuye, the CEO of Contemporary Consulting, Founder and Foundation Chair, and Chair of the Advisory Council of AFICTA. Dr. Jimson, can you kindly introduce yourself and elaborate on what AFICTA does and its roles in WSIS, the IGF, Enhanced Cooperation, and the Global Digital Compact?


Jimson Olufuye: Thank you, Gail. Good morning, everyone, both on site and online. It’s a pleasure to have you all here. My name again is Jimson Olufuye. I have the privilege of being the CEO of Contemporary Consulting, an ICT firm based in Abuja, Nigeria, focused on data centers, security, integration, and research. I’m also the Chair of the Advisory Council of the Africa ICT Alliance, AFICTA. AFICTA is a private sector-led ICT alliance of ICT associations, companies, and individual professional stakeholders across Africa. We started off in 2012 with six countries in Africa, and today we’re in 43 African countries. Our vision is to fulfill the promise of the digital age for everyone in Africa. As a matter of fact, AFICTA is one of the outcomes of the WSIS. We’re fully engaged with our governments across Africa, with UNECA, with the African Union, and we’re also engaged at the CSTD, the Commission on Science and Technology for Development, which oversees WSIS. WSIS is about a people-centered information society, wherein everyone will benefit from the great infrastructure we have in ICT. And we know that today ICT, and the internet basically, rules everything. So we are fully committed to ensuring that Africa takes its rightful place in terms of sovereignty and in terms of protection of its assets, including identity. This aligns with WSIS Action Line 8, on cultural identity, and with other WSIS Action Lines on cybersecurity, cooperation, collaboration, and a common purpose for development, especially the Sustainable Development Goals. As we know, this IGF is another outcome of the WSIS, and we also know about the Global Digital Compact. The Global Digital Compact is also an outcome of WSIS, because when enhanced cooperation stalled in January 2018, by July the Secretary-General had launched the High-Level Panel on Digital Cooperation, co-chaired by Melinda Gates and Jack Ma.
And that led eventually to the Summit of the Future last September, where world leaders met and agreed the Pact for the Future, which has the GDC, the Global Digital Compact, as one of its outcomes. It has five objectives. The first objective is about closing the digital divide, and we know identity is critical to closing the digital divide, because if you cannot identify somebody, it means the person does not really exist. We’re talking about inclusivity, about multi-stakeholder participation, about nobody being left behind. So Objective 1 talks about bridging all the divides so that we can achieve the Sustainable Development Goals. Objective 2 is about ensuring everybody benefits from the digital economy. Objective 3 covers human rights. Objective 4 talks about data governance, interoperable data governance, and also AI. For AI, data is king. So our data matters a lot and identity matters a lot. We are really proud to be associated with the SIDI Hub and AFICTA collaboration. We are grateful to the MAG that brought us together, and I trust it’s going to be an enriching relationship. You can see we have on this panel the government of Benin, represented by Dr. Kossi, and the government of Nigeria, on identity, represented by DG Abisoye Coker-Adusote. This is the kind of collaboration we are proud of, and we want to continue to enrich it going forward. Now, sovereignty is about laws, appropriate laws that guide the data of your citizens and the protection of your data. This is very important. As someone in the data ecosystem, and in fact one of the data controllers in Nigeria, we know the importance of data and the need to protect it, and so we believe strongly that sovereignty matters a lot.
We need to ensure that our people trust us, that is, trust the leadership and the government, and also that they have the liberty to conduct business and enjoy government services: in health, in commerce, in travel and immigration, in identity generally. And so we, AFICTA and our members, are fully invested in collaborating with all our government agencies across Africa so that we can all fulfill the vision of WSIS, the expectations of the GDC, and the hope of every African.


Moderator: Thank you very much, Dr. Jimson. Just to quote you back: that identity is critical to closing the digital divide is clearly a key theme of what brings us together today, and I think it has brought many of our audience members to this particular conversation. We certainly don’t wish to see anyone left behind, so thank you very much for those comments. I’ll turn now to Dr. Kossi Amessinou from Benin. Dr. Kossi, please do introduce yourself as well, and please elaborate on what you see, from Benin’s perspective, as best practices in policy, and the direction you hope to see emerge in Benin and across Africa.


Kossi Amessinou: Okay, it’s okay. Thank you. I’m Kossi Amessinou. I come from Benin, from the Ministry of Economy and Finance. I was previously Director of ICT in Benin’s Development Ministry; today I work to support and monitor private sector processes at the Ministry of Economy. We are talking about identity today, and interoperability of digital identity is a challenge for us, a challenge shared by the whole community of ecosystems all over the world. Biometric documents, and particularly digital identity, can confer win-win benefits on countries, such as good governance, balanced spatial planning, and financial inclusion. Women’s employment we can take care of very well, and better social protection can be delivered, when we take care of identity very well. In Benin we have provided one platform, called X-Road, into which we have put our interoperability process. In this platform, we have one key, called FID, for each person in Benin. With that key, we can have two kinds of cards. One card, called It’s Me, every person can have for free. We also have a biometric card, provided with ECOWAS, with which you can carry out all kinds of activities and also travel anywhere in the ECOWAS space and do your business without a visa process. When we talk about digital, we know it is important to talk also about the challenge of security, because identity has an impact on the global digital economy and on the local environment. It is important for us that digital identification reflects the legal identity of people. If we do not have the legal identity of people, it will be very difficult for us to know who is behind the information we have online. In Benin, the World Bank helped us with a project called the WURI project, which came to consolidate the achievements of the process started by government alone: providing secure personal identification for each person in Benin.
With the WURI project today, the government facilitates e-commerce and works on securing the digital ecosystem. Anywhere in Benin today, people have a place where they can receive their card. We have also decided that any African can enter Benin without a visa today. For that, we need the people coming to us to receive an ID when they enter the country: when you enter, we can identify you through the visa process, for free. The visa is free; you can enter, and we can identify you from that. This same card people receive can also help them obtain a SIM card, or go to the hospital, be identified there, and access all the services inside. My colleague used this same card, for example, to go to Ghana: you can check it, control the QR code on it, and see all the identification details. We are looking for this kind of process for all of Africa. If the African Union, for example, were to work to let all African people have their ID across Africa before the end of 2026, we would have the possibility for each African to enter Benin without a visa process. That would be something very good, and we are looking for that; that is also the Africa we want. It is important for us today to work on the security challenge of this identification. For this challenge, we have now put in place in Benin an agency that works on digital space monitoring: every day we monitor the digital space to see what level of security challenge we have. Thank you very much.


Moderator: Wonderful. Thank you so much, Dr. Kossi. Just to play back a little of what I heard there at the end: that same card really brings to life, for individuals within Benin, how to access a SIM card, how to access healthcare services, and how to enable the movement of people across jurisdictions, as in Ghana, and it serves as a model for what could happen more broadly within Africa, and by extension the rest of the world, which is part of our conversation between AFICTA and SIDI Hub. It is really important for financial inclusion, and it also brings to life the importance of the foundational civil registry capability that countries need in order to empower their residents and the movement of people. So thank you for those thoughtful comments. I’d now like to move on to Director General Abisoye Coker-Adusote from the Nigerian government, the lead for NIMC. May I ask you, DG Abisoye, to kindly introduce yourself and your role with NIMC, and to elaborate on what Nigeria is doing on identity, civil registry, and digital identity, your work within Africa, and your thoughts about global interoperability of digital identity.


Abisoye Coker Adusote: Good morning, everyone. I am Engineer Abisoye Coker-Adusote, the Director General of the National Identity Management Commission (NIMC). Thank you, Gail, for your questions. To quickly give you a summary update on what Nigeria has been able to achieve so far: NIMC can shed light on the tangible progress made in establishing a foundational identity that not only supports national development but is increasingly interoperable across sectors and borders. The system we have designed powers key priorities around social protection, tax reform, financial inclusion, and digital public services. We have successfully integrated the National Identity Number (NIN) with the Bank Verification Number, which means that every single person classified as banked in the population has an account number tied to their NIN. We have also achieved that linkage with the National Population Commission, which means that every citizen can now be issued a NIN at birth. Initially it was from the age of 16, but now it is from birth; we achieved that collaboration successfully with UNICEF and the National Population Commission. What we are also doing at the moment is the first of its kind in Nigeria: a biometric-enabled census, where we ensure that we do not have duplicates in collecting information. They can rely on the NIN to verify the identity of voters, the voting population, in Nigeria. We have also ensured that children enrolling in school can get their school feeding through biometric authentication via the NIN. There are a lot of government programs we are working on. One is CrediCorp, which provides credit access to Nigerians and ensures they are biometrically verified via the NIN.
We have also partnered with SMEDAN, the agency for small and medium enterprises in Nigeria, to ensure that MSMEs that have been granted loans or grants can verify their identity through the NIN. For the Joint Admissions and Matriculation Board examination, which you have to sit to enter a higher institution and pursue further education, your NIN is required. So we have many use cases across different ministries, departments, and agencies where your NIN is required for you to access loans, further your education, or pay your taxes; the list is endless. We have also partnered with NELFUND, the Nigerian student loan fund, which gives loans to youths who want to pursue higher education, and we are very proud of Mr. President for bringing this vision to life. You need your NIN so that we can avoid duplication of identity and prevent identity fraud; that is the beauty of the NIN. It is not just a number that sits there, but a number that enables you to access different services across government and the private sector. Regarding the private sector, we have health insurance, banks, telcos, fintechs, manufacturing companies; we have so many use cases around the NIN. Another thing we have done successfully is to ensure the continued integration and harmonization of all the sectoral IDs across the ministries, departments, and agencies. One thing Nigeria is very lucky about is that the government has the political will; we have a president who understands the importance of identity, and that puts us at an advantage in getting all of these things done. Regarding cross-border interoperability, I want to speak on something.
So we had the first West African Economic Summit a week ago, focused on digital trade across the West African region, and quite a few of the ECOWAS presidents turned up for the event. One thing was key: you definitely need digital identity, which is a catalyst for driving digital trade, given the discussions around the free movement of people, goods, and services across borders, and it is extremely important to note how major a role digital identity plays in advancing digital trade. We had a lot of conversations, and something stood out to me: it was very obvious that we need to meet each country at its level of readiness. There are countries that do not have simple digital infrastructure: no data connectivity, no energy or very little energy, and very little or no digital literacy awareness. The list is endless, and we need to identify what the barriers are in each of these countries so that we can address the real issues. If you look at the institutional frameworks we have, and also the legal frameworks, take data protection for example: in Nigeria, our act went live in May 2023. However, it restricts cross-border interoperability, which means that within your own country you can carry out your transactions, but when it comes to cross-border, there is nothing that provides for that. So I believe we need a regional agreement based on data sovereignty and trust that would allow data protection acts across each region to be modified to reflect cross-border interoperability. I think that is very key, because everyone is obviously worried about protecting citizens’ data and ensuring its safety, and there are also cybersecurity threats. About a week and a half or two weeks ago, we all witnessed 16 billion passwords breached globally, with users of companies like Apple and Google affected, so that is a real concern for everyone. It is a huge problem, and we need to reflect on it. At the same time, we have the AfCFTA at work, and we also have the African Union digital interoperability framework that has been put together, and that needs to be adopted. I also believe that for cross-border interoperability you must incorporate digital identity in payment design. It is extremely important, and Nigeria has achieved that recently: the Nigeria Inter-Bank Settlement System, NIBSS, has just successfully launched the national payment stack, which is part of the digital public infrastructure
I think it’s very, very, very key because everyone’s obviously worried about data, you know, protecting, you know, the citizens’ data and also ensuring safety of their data and there are also issues around cyber security threat. So I’m sure that I think this happened like a week and a half or two weeks ago where we all witnessed 16 billion passports data breached globally. I mean, we had people like Apple, Google all affected, so I think that’s a real concern for everyone. It’s a huge problem, so we need to ensure that we reflect on this, but at the same time, we’ve got AFTA working, we’ve also got the African Union Digital Interoperability Framework that they’ve put together, so that needs to be adopted. I believe that also regarding cross-border interoperability, you must incorporate digital identity in payment design. It’s extremely important and Nigeria has been able to achieve that of recent. We have the Nigerian interbank settlement system called NIBS, where they have just successfully launched the national payment stack, which is part of the digital public infrastructure. um approach that we have in Nigeria and so that’s been integrated into our national identity management system and and that’s um that allows for cross-border interoperability. We also have the FinTechs have developed some applications too that also allows cross-border interoperability so we need to definitely focus on the digital identity part of things and meet states at the point of readiness for them. 
I also personally feel that states that have already made a lot of progress, like Nigeria, could set up a peering mechanism where we run a test pilot for cross-border digital identity interoperability: we pick two member states, say Uganda and Kenya, or Nigeria and Cameroon, or Nigeria and Niger, or any of the border countries around us. I think that would help a great deal. But trust frameworks are very key for interoperable IDs across borders, along with ethical standards for cross-border data sharing, and safeguarding sovereignty by ensuring ID protocols respect national laws, because we all have different laws. Political will is also very important; I cannot stress that enough, because there are governments in Africa that face a lot of setbacks, with political instability and conflicts in their areas, so a lot needs to be done. But to speak to sustainable, interoperable digital identity tools and the AfCFTA: I think that if we leverage the AfCFTA Phase Three digital trade protocol, it would help us a great deal to integrate digital identity standards into the e-commerce and digital trade protocol. I am going to stop there for now. Thank you.


Moderator: DG Abisoye, thank you so much for those comments. To recap a few key messages I heard: meeting states where they are, recognizing that many jurisdictions have gaps; the importance of the regional agreements already emerging, both within ECOWAS and more broadly across the African Union; the many use cases Nigeria has already brought to life for your residents, and the potential for those to continue to expand within Nigeria, within regional structures like ECOWAS, and more broadly for Africans to benefit; and the pivotal role digital identity plays in that transition. Thank you for that offer; maybe we’ll hold you to it, reaching out a hand to your border countries and setting up the implementations that bring cross-border experiences to life for your residents. Wonderful examples and great work happening there. I’d now like to transition to our next panel of speakers on the SIDI Hub side, and I think we have some slides from Debora Comparin, so we can get those ready for her comments. Debora, could you kindly introduce yourself and some of the roles that you’re playing in progressing digital identity, and elaborate on SIDI Hub, what it’s seeking to accomplish, and how it’s going about delivering on its goals?


Debora Comparin: Good morning, everyone. First of all, it is really a pleasure to be here with you all today, speaking about what is definitely my favorite topic: digital identity. It is really a passion of mine. I have been working on this for the past five years, and what I can say to introduce myself is that I contribute to standardization work. Essentially, that means collaborating with the ecosystem and developing technical standards together, technical specifications that can help interoperability. But how hard can it be? We have heard a lot this morning about the benefits and the social impact that digital identity infrastructure has. I was thinking of my younger self when, five years ago, I decided to enter this space of digital identity, completely unknown to me before, and I was wondering: how hard can it be to just build such digital identity infrastructure? The answer is: very. So I am still here, and there is still a lot of work that needs to be done. Let me put that into perspective for a minute before I talk about SIDI Hub. Here I like to refer back to physical documents. Shall I get closer to the mic? Can you hear me okay? Yeah, okay. I’m Italian, I like to move a lot, so bear with me. When we talk about building a digital identity infrastructure, it is very useful to go back to the physical documents and reflect on the properties of these physical documents that we want to digitalize, because it would be fantastic to maintain some of those properties. That is the complexity of the exercise. First of all, when you all arrived here, you probably showed some form of document. So the first step is to digitalize this piece of paper or plastic.
The second step, which we ignore because we do it every day, is that when I prove my identity, I just hand my document over to someone. So we need to be able to build the rails to hand over the digital version of a document. When I was here at IGF this morning and I gave my passport, my ID card, they could not track my actions and my whereabouts, and I think this is also a very important property that we should keep in mind and maintain in the digital domain. You can also see that this is my document, that I am its rightful owner; I cannot just pass it over to Gail so that she easily uses it. All these properties should be maintained, and it is very hard. There is a lot of intelligent cryptography behind making this happen, and a lot of technical work. We are not there yet. There is also trust that needs to be established. It is not just about technology, it is really about collaboration, because these documents, in digital form, have to cross borders. A document should not just be valid in my country; it should be valid everywhere I go, it should be understood and trusted, and so on. This is exactly why we created SIDI Hub: because we understood the difficulty of doing this, and we absolutely wanted to maintain the properties I have been through with you, because it is about safeguards as well. It is about preserving privacy and maintaining the security of citizens and individuals in the digital realm. Collectively at SIDI Hub, we thought that it is about collaboration: there is not a single organization, individual, or bright mind in the world that can do it alone. So we pulled together an ecosystem of private sector, public sector (so, government), research institutes, and standards bodies to collaborate and make this vision a reality. This is what SIDI Hub is about. And, as you can see, I am also very proud of it. It is a new initiative; we started one and a half years ago, but a lot of work has been done.
And so I will share with you a couple of slides so that you get a sense of what we've achieved together and what still remains to be done. You're most welcome to join us and contribute. First of all, I heard DG Abisoye when she said, meet countries where they are. That's absolutely right, and it's absolutely the approach of SIDI Hub. We don't have the arrogance of thinking that we're just going to sit somewhere, define how this whole digital identity infrastructure should be built, and just go off and share our vision with the world. It's really about hearing people, hearing about the local difficulties in the different countries. And that's what we've done. This is why it was very important for SIDI Hub that we spent our very first year traveling extensively, all of us, hearing the different perspectives and difficulties of countries. What you see here on the slide are some of the summits we've held on different continents in 18 months. We've collected all the feedback and the different perspectives in reports that you can download from the SIDI Hub website. I unfortunately don't have the time to get into all the details, but there is indeed a confirmed, very different level of maturity in different regions, and also different needs. Collectively, across all these summits and conference participations, we have heard inputs and engaged with over 45 countries. This is absolutely massive work, and it's what informs the roadmap and the development of SIDI Hub moving forward. I'm going to spend a few minutes going through some of what I think are the most important working groups, concrete deliverables that we are carrying forward as we speak. All this work is live; it's ongoing.
First of all, use cases. When we talk about such a complex digital identity infrastructure, we need to go back to the user story: what is this for? So we have studied different use cases, and because it's important, again, to listen to what matters, we have prioritized. Out of all the discussions, we selected what was most relevant for the different countries, research institutes, and multi-stakeholder engagements we had. What you see on the screen are essentially the top three selected by the community: education, refugees, and bank accounts. We basically studied how digital identity is relevant in each of these usages. Then, policy. Again, it's not just about technology, rails, digitalization, cryptography, and all the tools you can come up with. It really comes down to the sovereignty of the country and the local rules and regulations that determine how digital identity should be used. These countries have different perspectives, so we have started a study of the different legal frameworks, regulations, and policies at country level. We're studying over 10 countries, compiling these regulations and deriving from them how we can achieve interoperability while respecting local decisions. This is very important. Again, we're not here to impose anything; we're here to listen, to understand, and to build rails, to make sure this digital identity gets done while respecting local regulation. And finally, there's my favorite, the technology work stream. This is really about all the tools I mentioned earlier and how to get this done. It's very much technical work, so I will not dig into the details now. But this is what I wanted to share with you today, and most of all, if there's one thing you can retain from my speech, it's that you're most welcome to contribute. It's really important that we all work together.
This infrastructure is for the benefit of all of us, first as individuals and citizens of a country. So I think this is a very important message to take home. Your involvement, whether you are a cryptographer, an engineer, a policymaker, or a researcher, is very important.
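The properties described above, proving you hold a document while disclosing only selected attributes, and without the issuer being able to track where it is used, are what selective-disclosure credential formats aim at. A toy Python sketch of the salted-hash approach (the idea behind formats such as SD-JWT) follows; it stands in an HMAC shared key for the asymmetric issuer signature a real system would use, and all names and values are illustrative:

```python
import hashlib
import hmac
import json
import os

ISSUER_KEY = b"demo-issuer-key"  # stand-in for a real asymmetric signing key

def _digest(salt: bytes, name: str, value: str) -> str:
    # Hash of salt + attribute: revealing the salt and value lets a verifier recompute it.
    return hashlib.sha256(salt + name.encode() + value.encode()).hexdigest()

def issue(attributes: dict) -> dict:
    """Issuer: salt and hash each attribute, then sign only the digest list."""
    salts = {k: os.urandom(16) for k in attributes}
    digests = sorted(_digest(salts[k], k, v) for k, v in attributes.items())
    signature = hmac.new(ISSUER_KEY, json.dumps(digests).encode(),
                         hashlib.sha256).hexdigest()
    return {"salts": salts, "attributes": attributes,
            "digests": digests, "signature": signature}

def present(credential: dict, disclose: list) -> dict:
    """Holder: reveal only the chosen attributes, with their salts."""
    return {"disclosed": {k: (credential["salts"][k], credential["attributes"][k])
                          for k in disclose},
            "digests": credential["digests"],
            "signature": credential["signature"]}

def verify(presentation: dict) -> bool:
    """Verifier: check the signed digest list, then each disclosed attribute."""
    expected = hmac.new(ISSUER_KEY, json.dumps(presentation["digests"]).encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, presentation["signature"]):
        return False
    return all(_digest(salt, name, value) in presentation["digests"]
               for name, (salt, value) in presentation["disclosed"].items())

# Prove nationality only; name and birth date stay hidden.
cred = issue({"name": "Alice", "nationality": "FR", "birth_date": "1990-01-01"})
presentation = present(cred, ["nationality"])
```

The verifier learns the disclosed attribute and that the issuer vouched for it, but nothing about the hidden attributes; unlinkability across presentations needs further machinery that this sketch omits.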


Moderator: Thank you. Thank you so much, Debora. I think your message is to explain, in clear language, the transition from physical documents to digital versions of those documents, and the importance of maintaining the properties that residents and citizens expect from their digital identity credentials: without sacrificing trust, without giving away more information than they need to, and doing so in a way that can be secure and trusted, using some fancy cryptography behind the scenes. But at the end of the day, it's about simple user experiences, like the champion use cases you gave around education, opening a bank account, and managing the experiences of displaced people. Those are really important, but the safeguards we put around them are going to take a lot of engagement, and your key message was to make sure the community feels welcome to participate in the SIDI Hub work, because it's going to take a very broad community of subject matter experts and people willing to put the time in to make this cross-border interoperability a reality. So thank you so much. I'm now going to turn to Naohiro Fujie-san, the Chair of the OpenID Foundation Japan, who has also been very actively involved in hosting SIDI Hub in Tokyo and works very closely with the Japanese government on this transition from physical identity credentials to digital identity credentials. So please, Fujie-san, could you introduce yourself, elaborate on your role, and share some of the experience in Japan?


Naohiro Fujie: I’m Nahiro Fujie from the Open ID Foundation Japan as a chair and also I’m a working group co-chair of Open ID Foundation Global and especially focusing on identity verification which is working groups called Ikebashi and Identity Assurance and also always I support the Japanese government and the educational institute in Japan to digitize their certificates or IDs. So that’s my role in Japan. So, today I express my opinion from the perspective of how to digitalize this identity and have interoperability within a country as well as internationally and globally. So, as you know, Japan is an island nation and there is no border with any other countries, but we have over 120 million population in Japan. We have our own economic sphere in our country, so everyone does not understand what’s the importance of interoperability with other countries so far. But I think that the idea of starting from small to achieve big and huge is very important in this context of interoperability. So, I mean that the most important thing for each country is to define its own standards in accordance with their own law or regulations while keeping global standards in mind. And then to using standard technology like bridging to other countries using standard technology and it brings us global interoperability. So, today I’d like to talk about the situation in Japan in three aspects. First one is education. And the second one is national ideas as Gail mentioned earlier. And the third one is cross-border initiatives including city hub or some kinds of scenarios. So, as I mentioned earlier, To start from small is quite important for us, especially in Japan, I guess. So, we started to define the architecture framework for digital credential with Keio University, one of the biggest universities in Japan. So, to digitalize their identity or certificate Why I say digitalized? Currently, especially educational certificates in Japan are almost all paper-based. 
So we have to digitalize, but we have to consider how to manage the credential itself in a digital manner. So we started to define management rules for each type of credential in the education world with Keio University. This paper is already published on the internet, but it's written in Japanese only; we are translating it into English right now, but so far it's available only in Japanese. For example, we currently have several types of credentials, like passports, mobile driver's licenses, or enrollment certificates, in paper form. And in several scenarios, we can use a copy of a credential to prove or verify our identity to relying parties. But in the digital world, there's no difference between a copy and the original. So we have to consider what a copy means in a digital context, and we also have to consider derived credentials in the digital world. They don't exist in the physical world, but in the digital world we have to consider them. So we classify three types of credentials: the first is the original; the second is the duplicate, which means a digital copy; and the third is the derived credential I mentioned. So we need to define a management policy aligned to each type of credential. In this education scenario, we also started to work with the National Institute of Informatics, which we call NII, under the Ministry of Education in Japan. It's an academic institute that collaborates with many universities. Currently, NII operates an academic federation in Japan called GakuNin, which has interoperability with universities in other countries, in the EU, African countries, and the United States as well. But right now they are using the SAML protocol, so we have to move on to the next level of technology, like using wallets.
So we started to define a framework for academic credentials using a wallet model. This is the second activity in Japan: defining local standards first. The third thing I'd like to explain is a government-led initiative and POC project in Japan, which is quite interesting. Before getting into the scenario: I think the most important thing for Japan right now is to reach a situation where digital credentials are actually being used domestically, because, as I mentioned, we only have paper-based credentials right now. As Debora said, it's important for students to use digital credentials. So the challenge in this project is to utilize national ID cards, as well as the enrollment certificates universities issue to students, at the train station, so that students get a discount ticket to ride a train. This is a demonstration use case. From your left side: when a student plans a trip somewhere, the student gets a student ticket. To buy the ticket from the railway company, the student has to prove that they are actually a student, using a digital credential issued by their university, and the student also has to be verified by the national government of Japan, so they present their national ID card in a digital manner. After that, they get a discount ticket and can go anywhere. It's quite an interesting scenario in Japan, because students don't have digital identity today, and it's a good way to give them some experience using digital credentials. Also, this is quite big news, especially for Japanese people: just yesterday, the Japanese government started to provide the national ID card in Apple Wallet. Over 60% of Japanese nationals use iPhone, so this is very big progress for digitalized national IDs.
We also have initiatives with other countries and regions, such as the EU-Japan Digital Partnership Council with the European Union, which focuses on how to make digital identity interoperable between the two. And we have other initiatives with the Asia-Pacific region. So I've covered our initiatives in Japan and how we intend to proceed from small steps to achieve the big steps.
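The three credential types Fujie-san describes, original, duplicate, and derived, can be made concrete as a small data model. The following Python sketch is illustrative only; the policy flags and their values are hypothetical, not taken from the Keio University work:

```python
from dataclasses import dataclass
from enum import Enum

class CredentialType(Enum):
    ORIGINAL = "original"    # issued directly by the authoritative source
    DUPLICATE = "duplicate"  # a digital copy of the original
    DERIVED = "derived"      # issued on the basis of presenting another credential

@dataclass(frozen=True)
class ManagementPolicy:
    revocable_by_issuer: bool
    must_track_provenance: bool
    usable_for_high_assurance: bool

# Hypothetical policy table. Since a digital copy is bit-for-bit identical to
# the original, it is the policy, not the bytes, that has to distinguish them.
POLICIES = {
    CredentialType.ORIGINAL:  ManagementPolicy(True, False, True),
    CredentialType.DUPLICATE: ManagementPolicy(True, True,  False),
    CredentialType.DERIVED:   ManagementPolicy(True, True,  True),
}

def policy_for(kind: CredentialType) -> ManagementPolicy:
    return POLICIES[kind]
```

The point of the model is the one made in the talk: the management rules have to be attached to the credential's type, because in digital form the bytes of a copy and an original are indistinguishable.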


Moderator: Thank you very much. Thank you so much, Fujie-san. I think starting with the joke of an island nation not having direct borders, but in practice for residents of Japan to conduct their business in their daily lives, they’re very much living across borders and have cross-border transactions. And it’s wonderful to see the leadership role that Japan is taking on the standards as well as on some of those complex interoperability conversations and work with the EU and with other Asian countries, which echoes what we heard earlier from our colleagues here in their African jurisdictions and regions, which are seeking to do the same thing. And that example of something that seems simple, like a discount for a railway ticket for a student, but which actually requires complex digital use cases transforming national ID credential to a digital format, a university credential to a digital format, and then bringing that to a national railway in order to realize something like that discount. Many of us have been fortunate to be poor students at one point or another, and so we know that that will motivate the students, but bringing it to life is very complex. So thank you for your work in driving innovation and the transformation with the Japanese government and other partners. So I’d now like to ask Tor Alvik, working with the Norwegian government as a subdirector on this digital transformation in a Norwegian context, can you kindly introduce yourself and share some of the work that Norway has been doing with the Baltic states for regional development as well as your work with Europe and more broadly?


Tor Alvik: Thank you, and good morning. Yes, as you said, my name is Tor Alvik, and I work for the Norwegian Digitalization Agency. This is an entity that builds common components, both for cross-border interoperability and for providing services to the public and private sector in Norway. So I will try to move to the next slide. We are one of a group of Nordic-Baltic countries. We are quite similar in legislation, in population, and in the way people live; very similar countries. So when you came to the airport in Norway maybe yesterday or the day before, if you paid close attention you would often hear tourists say, oh, it's so nice in Sweden. But it's the wrong country; this is actually Norway. So how can interoperability of digital ID across borders in a region like this be hard? Well, what we found out is: it is very hard. But if you look at some other figures, you can see that we have a high level of mobility in the region. A lot of people are moving, settling in different countries, working in a country other than the one they live in. We have great mobility of students and of our workforce. This of course makes it very important to also have digital services that function cross-border. In our region we have a very well-developed system for digital identity. For most of the countries it covers more than 90 percent of the population at a high security level, and almost all services are already digitalized, so you can carry out almost every aspect of your life in a digital way. So when we started looking at how to make this work cross-border, the work started in relation to the cooperation between the different governments in the region. We have a long-standing cooperation between the administrations in the Nordic countries dating back to the 1950s, but it was not until 2017 that we actually broadened that cooperation to also deal with ICT, digital services, and cross-border issues.
Very early in that work, it was identified that getting digital ID to work interoperably cross-border was one of the main issues to solve in order to provide cross-border services to people. What has been important in our work is the link between us as technicians working on the solution and the politicians and decision makers; that close link has enabled us to address central problems and also foster the uptake of the solution cross-border. We established a working project together with the different agencies in the different countries, and we have been working together there since 2018. One of the main focuses of our cooperation is making sure that each country actually has an eID system that is recognized cross-border. Since we are all members of the EU or closely linked to it, as in Norway's and Iceland's case, we build on the eIDAS regulation. But just motivating our politicians and building the understanding that you need to notify these eID systems so they can actually be recognized, even though the legal foundation is in place, took time and has been quite challenging. We are also working, and I will come back to this, on some fundamental challenges in how to recognize our citizens cross-border. We work on service provision with service providers, exploring different user stories and how they function cross-border. We find that many of these services are quite a hindrance to people and need to be addressed at the service level. It is not enough to build a digital ID; you have to design the whole user story so that it actually fits a cross-border context. And together with all the other member states in the EU, we are also preparing for the upcoming changes in eIDAS, which will of course bring digital wallets, the use of credentials, and new models for data sharing.
When we have looked at identity in our countries, one of our main observations is that it's not enough to have a functioning digital identity. You need a system where this functions just as well in the physical world as in the digital world. There is a very close link to our core identity, which in our countries sits in the population registers, but could of course be in other systems. This covers the documents that are the basis for the digital identity, how those documents are used when issuing a digital identity, making sure that we don't have duplicates, and biometric verification of people immigrating to our country. And then you have the issuing process and the use of the digital identity itself. So you need the same level of security in both the physical and the digital world. This is not easy to achieve, and it's especially challenging when it comes to cross-border identity. So here's an example from one of my colleagues; Nils Inge is in the audience, so if you want to talk to him afterwards he can give you much more insight into our cross-border cooperation. This is what happens if you try to use a digital identity to access a service across our different countries. We have built a digital identity layer, so you can come and log in and say, hi, how are you? And then it gets a lot quieter, because then we have to link you to our national identity systems. Like all of us, you have a long life; some of your rights and services may date back 20 or 30 years. You can have pension rights in another country which you need to access much later. So this linkage between the identity you bring from one country and the identity you have in the country providing the service is one of the main challenges we focus on. We can get the digital identity itself to function and be understood; the linkage is the harder part.
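The identity-matching step described here can be sketched as a record-linkage function against a population register. This is a deliberate simplification with illustrative attribute names (loosely in the style of the eIDAS minimum data set); real deployments add transliteration, name history, and a manual identity-proofing fallback:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ForeignAssertion:
    """Attributes arriving with a cross-border login."""
    family_name: str
    given_name: str
    date_of_birth: str  # ISO date

@dataclass
class RegisterRecord:
    """An entry in the national population register."""
    national_id: str
    family_name: str
    given_name: str
    date_of_birth: str

def match_identity(assertion: ForeignAssertion,
                   register: List[RegisterRecord]) -> Optional[str]:
    """Return the national ID only on a single unambiguous match.

    Zero hits or multiple hits return None, i.e. the case is routed to
    manual identity proofing instead of guessing."""
    hits = [r for r in register
            if r.family_name.casefold() == assertion.family_name.casefold()
            and r.given_name.casefold() == assertion.given_name.casefold()
            and r.date_of_birth == assertion.date_of_birth]
    return hits[0].national_id if len(hits) == 1 else None
```

Refusing ambiguous matches is the key design choice: a wrong link gives one person access to another person's rights and records, which is worse than sending the user to a manual process.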
But trying to address this is something we are now working on, and I think this is maybe something that can be taken away from our work: focus on those aspects very early, not just on the digital identity. This is a simplification of the user journey for doing these identity-matching steps. As you can see, it's rather complex, and when you delve into the details it gets even worse, and then you have all the registrations where you lack data and so on. So this can be quite a challenge. I'm looking forward to the discussion afterwards and to hearing your insights on what we have done and what can be taken from it. Thank you very much.


Moderator: Thank you so much, Tor Alvik. It is wonderful to have your personal leadership of the work in Norway, which many in the rest of the world have observed, along with the work in the Baltics, to achieve your eID goals and to elaborate the work with eIDAS and the EU Digital Identity Wallet program, which you're adjacent to, but which your residents are obviously moving across borders with, not only to the Baltic states but to European partners as well. So thank you for your leadership role in that work and for bringing those use cases to life for your residents and other Europeans; there are many learnings for the rest of the world. I too look forward to the next stage of our discussion here, to get some of the feedback from our online participants and those in the audience. So we're going to bring back up the Mentimeter poll, where we'll progress to the next slide, if I can get the clicker back. Very good. If you have not already grabbed the QR code to answer the first question, please do take a picture of it so you can participate in the live survey we're going to conduct. We're going to go through a series of questions, and the first couple are policy-related. So I'll put Dr. Jimson and Debora on guard that they'll speak to the first one while our audience participates in the next conversation. We wanted to discuss what practical steps can be taken during the intersessional period between this IGF and the next one: how to use things like the SIDI Hub tools that we heard Debora elaborate on, the technical tools for interoperability, the trust framework mapping, and the work on champion use cases, in order to achieve the goals of African free trade, lifting up the broader population while making sure that sovereignty is respected. So this is an ambitious goal, and one that I know, DG Abisoye, you took part in when we were in Addis Ababa a few weeks ago, discussing how these tools could be used. But it's an important question, right?
How could we actually progress this work together? Then our audience will be chiming in. So some of them are already saying, you know, champion use cases. But let me turn first to Dr. Jimson to see if you have any thoughts on next steps that we could take, concrete work we could do together.


Jimson Olufuye: Thank you very much, Gail. Well, we know that if you cannot measure a process, you cannot really manage it. So there is a need to constitute, maybe, a working group, so that we can actually identify the gaps we have across our sub-regions and regions; identify the champions with regard to parameters like a trust framework; determine whether the system is open source; determine the interoperability criteria; and ensure that, for example, the NETmundial+10 statement is really incorporated in what's being done. Talking about NETmundial+10, it's about bringing all the stakeholders, the private sector along with the government, the technical community, civil society, and even the youth, around the table to discuss


Jimson Olufuye: the benefits and the use cases.


Debora Comparin: Yes, two comments on my end. The first one, talking about policy: it's very relevant to map the different regulations and legislation that underpin digital identity in the different countries. When we talk about cross-border, this is work we have started and are conducting with the collaboration of universities in SIDI Hub. Just to give you an example of why this is important: a country is sovereign in deciding the rules that apply to its national identity ecosystem, for its citizens and residents. So when an identity crosses a border digitally, how do you know you can trust that digital form of identity? That trust, in the jargon, is the "level of assurance", which is basically a very complicated way of asking: how do you know that the identity was properly given to the right person in the other country, and that it actually belongs to its rightful owner, with no mismatch or passing on to somebody else? So it's really about how the rules are set up to make sure the identity was digitalized properly. And these rules, believe it or not, are different in different countries; if you don't trust whatever you receive in digital form, then there's no value. You can have it digitalized, but there's absolutely no usage for it. So I think it's very important that we do this work, and it's not just research mapping the differences in rules; it's really about turning this into something actionable. How can we take these values, encoded in laws, and transform them into something that can be used together with the digital identity being passed? How can I describe the process used to obtain a digital identity so that the other country can decide whether to trust it? These are all tools that underpin the work of cross-border interoperability.
So I think that that’s a very important piece of work that needs to continue, and I would say more should be completed before the next IGF, so to show actually some concrete results. And together with this, POC, proof of concept. I absolutely agree with some of the comments. I think we need to start getting things done. So it shouldn’t just be research and thinking and technology, but it should actually be implemented and tested in the field. So now Eros said something that I totally agree with. We can start small. We don’t need to have a big bank. So we can actually… take baby steps and have two or three countries get together and CDIV would be most, I think, most helpful in this concept of setting up a proof of concept and testing the technologies and the tools that we are developing to make sure that this could work or maybe to know what to improve and from there scale globally this infrastructure. Thank you, Debora. I think that


Moderator: bias to action characterizes everyone on the panel, you know, to do concrete work that is actionable and to do so, I think, you know, in this intersessional period where we’re seeing the early work in Nigeria, we’re seeing the work with Alfikta and Benin, we’re seeing the work in Norway and in Japan, right? There is this hands-on work that’s already happening within jurisdictions and within regions but continuing to progress that through these forums of like Alfikta and CityHub is critical. So we’ve got some good notes on the page which I think people have had a chance to look at. So I’d like to jump to the second policy question and then turn to a couple of the questions that Stephanie’s captured from the online audience. So for our next interactive topic, and I’m going to put on guard DG Abisoye as well as Dr. Kosi and Dr. Alvik, on this next policy question. So how can policymakers leverage shared experiences of successful interoperable digital identity for African regional benefit? So there are these emerging use cases, there are these emerging policies and best practices, how can we put that to work for the African region? So I’ll come first to Dr. Kosi to see if you have any additional


Kossi Amessinou: comments to make on this point. Thank you, moderator. ID is very important, but the security of the data in an ID system is very, very important. We need to work very well on the security issue. The second point is data centers: where will we put our data? Where is it hosted? Is it in Africa or outside Africa? It's important for us to have our data inside Africa. That is very important.


Moderator: Thank you, Dr. Kossi. DG Abisoye, anything to add on the importance of data security and preserving data within Africa?


Abisoye Coker Adusote: I think it's important to note that we need to ensure there is regional cooperation. Regarding cross-border interoperability, we need to ensure that the frameworks are developed and take into consideration, as Rebecca mentioned earlier, the underlying identity laws in each country, and that we develop one that's acceptable to all, so everyone is able to start off. She also mentioned actionable steps; I think that's very important. So, looking at countries that have matured on the identity level, Ghana, Rwanda, Nigeria, we need to take those countries and run a pilot for cross-border interoperability, while meeting the other countries at their level of readiness to see how we can help them scale up. They can also learn from Nigeria, to see how we've been able to achieve this, and from India and others. What we've also done in Nigeria, beyond the eID card, which you're all aware of, is recently launch the NIN authentication application, which safeguards your digital identity and also has a wallet component where you're able to… the adoption of the AfCFTA framework, the African Union framework, and a lot more continues to go in. So it's a continuous work in progress. Thank you.


Moderator: Thank you so much, DG Abisoye. There's a lot of ground covered between you and Dr. Kossi. So let me see if we have additional comments from Tor Alvik.


Tor Alvik: Thank you very much. It's very exciting to hear all the experience coming from Africa. For the last few years, I've also been involved in the large-scale pilots leading up to the new eIDAS regulation. I see these as quite a helpful tool for exploring both the governance side and the technical side of different user stories. For instance, we have been working on digital driver's licenses, payments, education, and so on. This combination of practical work with policymaking and governance analysis around those user stories has been very helpful when trying to understand the legislative side of the new regulations. So maybe that is something that could also be used as a tool in other places. User stories are the only way you can make these things understandable; talking about standards and cryptography, decision makers often have a very blank face after two minutes.


Moderator: Right, absolutely. User stories bring the technology to life and make sure it's applicable. So, Naohiro Fujie, if you have any comments to add, please; in one moment I'm just going to recap what I see on the screen. A few of the messages from the online audience are around a data strategy for development, leveraging these shared experiences, fostering collaboration, and digital literacy, also important themes. So, any additional comments on your side, Fujie-san?


Naohiro Fujie: It's quite exciting for me to hear about the African and Norwegian experiences. As I mentioned earlier, it's very important to focus not only on the technology but also on the governance, or trust framework, side. In many cases, every country has its own regulations and rules, and in some cases those rules or regulations have to be changed to achieve interoperability with other countries. So it's important for us to collaborate with each country's government to achieve that. In fact, SIDI Hub has partnerships with many countries' governments, so each of us has to have a good relationship with each government. Those are my additional comments.


Moderator: Yeah, so there's a lot to do, right? What's already in place is not yet fit for purpose to achieve interoperability, so changes to existing rules and policies will be necessary to truly achieve interoperability, whether in a regional African context or in a global one, I would assume. Very good. A couple of questions are coming in from online. Stephanie, would you like to direct them to the panel?


Audience: Yes, I will. From those questions, because I know we don’t have much time, there are actually two themes. One is linked to what we’ve just discussed: a question of whether it is federated or centralized, and a worry about whether the way you digitalize leads to some kind of surveillance. So I think there’s a question for the technical people here to explain that this is not the case, as SIDI Hub is not prescribing any model, as we said. And the second one is very important because, as was commented before, there’s a question on how we ensure that the digital divide is bridged when Africa currently has a deep rural population and escalating costs of internet and energy, while we are all looking


Moderator: into the energy transition as well. So I would say there are two themes in the three questions. Okay. So in the context of federated versus centralized ecosystems, I might start on that one myself. Because as you said, as SIDI Hub, we are not opinionated on the appropriate architecture for any individual country. Each jurisdiction is going to apply its own values, rules, and policies to what architecture it prefers. Some might lean towards centralized, some might lean towards federated, and some might have a centralized system today and then evolve towards something more decentralized. So I think it’s an evolving approach. But I see Dr. Jimson has a comment to make on that point. Yes, thank you, Gail. These are very good questions. Technically, I think federated


Jimson Olufuye: database is ideal. And that’s what we use in Nigeria. And through APIs, you can connect other databases. So even across borders, countries can keep their data locally and then, through APIs, share the specific data categories that have been agreed upon based on a policy framework. Somebody also asked a question about affordability. Yes, we need to encourage the operators to be concerned about this. Governments also need to give incentives. And we need to increase the purchasing power of the people so that everybody can be incorporated in the digital age. Thank you very much.


Moderator: Thank you so much, Dr. Jimson. Any additional comments on federated versus centralized models? Okay, not hearing any. I do think that there may be comments on surveillance and the desire in, I think, many jurisdictions to avert surveillance of citizens using digital identity infrastructure. Would any of you like to comment on the risks of surveillance and how you might reflect on that challenge? Tor? It is a very interesting topic, one we are now facing when we are trying to work with digital wallets, which are coming next.


Tor Alvik: Those are, by design, very user-centric, so you share all your credentials directly between the user and the service provider where you want to use it. For us, that raises questions. That protects the citizens very well, but on the other side, you also need to tackle misuse and fraud, and how you can then build a model that takes both of these things into consideration is a topic I think needs quite a lot of discussion.


Debora Comparin: Please, Debora. One point on surveillance, because I saw some previous comments. Surveillance can come from both government and the private sector, so both need to be addressed and monitored. Especially in this digital identity field, where we have both private and public sector involved, that makes room for reflection. I think it’s, again, as I mentioned earlier, one property of the physical document that we want to maintain.


Moderator: Thank you so much. I’d like to hit that last question on the digital divide, and perhaps some of our African representatives might speak to that, because obviously one would need a mobile device, Wi-Fi access, and broadband infrastructure to take advantage of digital identity interoperability. So there are a lot of foundational capabilities that would be important. Any comments on bridging the digital divide, which can and will be an ongoing challenge? Did you want to weigh in? Thank you for that question. So regarding the digital divide, I think that that can


Abisoye Coker Adusote: be easily bridged if you try to create a lot of awareness on digital literacy, because even as it stands, those that have mobile phones still are not 100% digitally trained, and they don’t understand the implications of digital identity. So I would say that digital identity is a brilliant concept. It does make life easier, but at the same time you obviously need to safeguard the individual’s rights, the citizens’ rights. So we need to make sure that the laws enacted definitely safeguard all their rights. What we’ve done in Nigeria is a few things. The federal government is considering flooding the market with very affordable, basic mobile phones so that the average Nigerian that’s unbanked is able to use this phone. Another thing we’re doing with the enrollment drive this year is opening accounts and wallets for all the people that are unbanked. So as we go out to enroll people, we are ensuring that we open wallets for them so that they’re able to participate in government intervention programs and not be excluded at all. That would help the government a great deal to ensure financial inclusion and also help to bridge the digital divide. We’re also doing a lot of media awareness on digital literacy. And with the NIN authentication application, we want to ensure that people understand the value of having the digital wallet so they’re able to use this application. This is how you can protect your data yourself: the application is biometric-enabled, so you have to use your biometrics to log in. Great. Thank you. So thank you so much to the panel


Moderator: for the rich discussion today. I know we’re very much at time. So thank you for the thoughtful contributions. We will be sharing the report from this event with the IGF staff, and there will be, I believe, a recording of this session as well to share with your colleagues. Thank you again. Thank you, Dr. Jimson Olufuye, Debora Comparin, Tor Alvik, Dr. Melissa Sassi.


J

Jimson Olufuye

Speech speed

127 words per minute

Speech length

942 words

Speech time

443 seconds

Identity is critical to closing the digital divide because without identification, people cannot access services

Explanation

Olufuye argues that identity verification is fundamental to digital inclusion, as people who cannot be identified effectively do not exist in digital systems. This prevents them from accessing government services, financial services, and participating in the digital economy.


Evidence

References the Global Digital Compact’s first objective about closing digital divides and achieving Sustainable Development Goals through inclusive identification systems


Major discussion point

Digital Identity Infrastructure and Sovereignty


Topics

Development | Digital identities | Digital access


Agreed with

– Moderator

Agreed on

Digital identity is fundamental to closing the digital divide and ensuring inclusion


Countries need appropriate laws to protect citizen data and ensure sovereignty over their digital assets

Explanation

Olufuye emphasizes that national sovereignty requires proper legal frameworks to govern and protect citizens’ data. He argues that countries must have control over their digital assets and data to maintain independence and citizen trust.


Evidence

References his experience as a data controller in Nigeria and the importance of data protection in the data ecosystem


Major discussion point

Digital Identity Infrastructure and Sovereignty


Topics

Legal and regulatory | Data governance | Privacy and data protection


Agreed with

– Kossi Amessinou

Agreed on

Data sovereignty and keeping data within national/regional boundaries is crucial


Federated database architecture with API connections allows countries to keep data locally while enabling cross-border sharing

Explanation

Olufuye advocates for a technical approach where countries maintain their data sovereignty by keeping databases local while using APIs to share specific agreed-upon data categories. This approach balances national control with international interoperability.


Evidence

Cites Nigeria’s use of federated databases and API connections for cross-border data sharing based on policy frameworks


Major discussion point

Technical Standards and Implementation Approaches


Topics

Infrastructure | Digital standards | Data governance


Disagreed with

– Moderator

Disagreed on

Centralized vs federated architecture preferences


Operators need incentives to make services more affordable and increase purchasing power of citizens

Explanation

Olufuye addresses the digital divide by arguing that governments should provide incentives to telecommunications operators to reduce costs. He also emphasizes the need to increase citizens’ purchasing power to ensure universal access to digital services.


Major discussion point

Bridging the Digital Divide and Inclusion


Topics

Development | Digital access | Economic


K

Kossi Amessinou

Speech speed

96 words per minute

Speech length

678 words

Speech time

420 seconds

Digital identification must reflect the legal identity of people to secure the digital ecosystem

Explanation

Amessinou argues that for digital identity systems to be secure and trustworthy, they must accurately represent people’s legal identities. Without this connection, it becomes difficult to verify who is legitimate in online interactions and transactions.


Evidence

References Benin’s WURI project supported by the World Bank to provide secure personal identification and facilitate e-commerce


Major discussion point

Digital Identity Infrastructure and Sovereignty


Topics

Digital identities | Legal and regulatory | Cybersecurity


Data centers should be located within Africa to maintain data sovereignty

Explanation

Amessinou emphasizes the importance of keeping African citizens’ data within the continent to maintain sovereignty and control. This approach ensures that African countries retain authority over their citizens’ information and reduces dependency on external infrastructure.


Major discussion point

Digital Identity Infrastructure and Sovereignty


Topics

Infrastructure | Data governance | Legal and regulatory


Agreed with

– Jimson Olufuye

Agreed on

Data sovereignty and keeping data within national/regional boundaries is crucial


Disagreed with

– Jimson Olufuye

Disagreed on

Data storage location and sovereignty approach


Benin’s “It’s Me” card enables visa-free travel within ECOWAS and could serve as a model for broader African integration

Explanation

Amessinou describes Benin’s implementation of a unified identity system that provides free basic cards and biometric cards for ECOWAS travel. This system demonstrates how digital identity can facilitate regional integration and free movement of people across borders.


Evidence

Describes the East Road platform with FIDO keys, free ‘It’s Me’ cards, ECOWAS biometric cards, and a visa-free entry system that works across services such as SIM cards and healthcare


Major discussion point

Cross-Border Interoperability and Regional Integration


Topics

Digital identities | Development | Economic


A

Abisoye Coker Adusote

Speech speed

152 words per minute

Speech length

2055 words

Speech time

809 seconds

Nigeria has successfully integrated national identity numbers with banking and telecommunications systems

Explanation

Adusote describes Nigeria’s comprehensive approach to digital identity integration, where the National Identity Number (NIN) is linked across multiple sectors. This integration enables various government and private sector services while preventing duplication and fraud.


Evidence

Details integration with bank verification numbers, telecommunications (SIM linkage), National Population Commission (birth registration), biometric census, school feeding programs, credit systems, student loans, and examination boards


Major discussion point

Digital Identity Infrastructure and Sovereignty


Topics

Digital identities | Infrastructure | Digital standards


Cross-border interoperability requires meeting each country at their level of readiness due to varying infrastructure capabilities

Explanation

Adusote acknowledges that African countries have different levels of digital infrastructure development, from basic connectivity issues to varying degrees of digital literacy. She argues that successful regional integration must account for these disparities and provide appropriate support.


Evidence

References discussions at the West African Economic Summit about countries lacking data connectivity, energy infrastructure, and digital literacy awareness


Major discussion point

Cross-Border Interoperability and Regional Integration


Topics

Development | Digital access | Infrastructure


Agreed with

– Debora Comparin

Agreed on

Meeting countries at their level of readiness is essential for successful implementation


Regional agreements based on data sovereignty and trust frameworks are needed to enable cross-border data sharing

Explanation

Adusote argues that current data protection laws, like Nigeria’s 2023 act, restrict cross-border interoperability. She advocates for regional agreements that respect national sovereignty while enabling trusted data sharing across borders.


Evidence

Cites Nigeria’s Data Protection Act of 2023 and references the African Union Digital Interoperability Framework and AfCFTA protocols


Major discussion point

Cross-Border Interoperability and Regional Integration


Topics

Legal and regulatory | Data governance | Privacy and data protection


Government intervention through affordable mobile phones and digital literacy programs can help bridge the digital divide

Explanation

Adusote describes Nigeria’s strategy to address digital exclusion through government-subsidized basic mobile phones and comprehensive digital literacy campaigns. This approach aims to ensure that unbanked and digitally excluded populations can participate in digital identity systems.


Evidence

Details Nigeria’s plans to flood the market with affordable basic mobile phones and media awareness campaigns on digital literacy


Major discussion point

Bridging the Digital Divide and Inclusion


Topics

Development | Digital access | Inclusive finance


Opening digital wallets for unbanked populations during enrollment drives ensures financial inclusion

Explanation

Adusote explains Nigeria’s proactive approach to financial inclusion by simultaneously enrolling people in identity systems and opening digital wallets for them. This strategy ensures that previously excluded populations can immediately participate in government programs and digital financial services.


Evidence

Describes Nigeria’s enrollment drive that opens accounts and wallets for unbanked individuals, enabling participation in government intervention programs


Major discussion point

Bridging the Digital Divide and Inclusion


Topics

Inclusive finance | Development | Digital identities


Pilot programs between neighboring countries could demonstrate cross-border interoperability before broader implementation

Explanation

Adusote proposes a practical approach to regional integration by starting with bilateral pilot programs between countries that have made significant progress in digital identity. This would test systems and build confidence before scaling to broader regional implementation.


Evidence

Suggests specific pilot combinations like Uganda-Kenya, Nigeria-Cameroon, or Nigeria-Niger as test cases for cross-border digital identity interoperability


Major discussion point

Practical Use Cases and Implementation


Topics

Digital identities | Development | Digital standards


Agreed with

– Debora Comparin
– Tor Alvik
– Moderator

Agreed on

Practical implementation through pilot programs and proof-of-concepts is necessary


Nigeria demonstrates multiple use cases including school feeding programs, student loans, tax collection, and census operations

Explanation

Adusote showcases the breadth of Nigeria’s digital identity implementation across government services, demonstrating how a foundational identity system can enable diverse applications. This comprehensive approach shows the potential for digital identity to transform public service delivery.


Evidence

Lists specific implementations: biometric school feeding, Credit Corp loans, SMEDAN grants for SMEs, JAMB examinations, NEL Fund student loans, health insurance, and biometric census operations


Major discussion point

Practical Use Cases and Implementation


Topics

Digital identities | Development | Economic


Biometric authentication and secure applications help individuals protect their own data

Explanation

Adusote describes Nigeria’s NIN authentication application that empowers citizens to control their own digital identity through biometric security. This approach gives individuals direct control over their data while maintaining security standards.


Evidence

Details the NIN authentication application with biometric login requirements and wallet functionality for citizen data protection


Major discussion point

Privacy and Security Concerns


Topics

Privacy and data protection | Cybersecurity | Digital identities


D

Debora Comparin

Speech speed

145 words per minute

Speech length

1981 words

Speech time

817 seconds

Digital identity infrastructure requires maintaining the same security properties as physical documents in the digital realm

Explanation

Comparin argues that the transition from physical to digital identity documents must preserve key properties like ownership verification, privacy protection, and prevention of unauthorized use. She emphasizes that this requires sophisticated cryptography and careful system design to maintain trust.


Evidence

Uses examples of physical document properties: showing documents without being tracked, proving rightful ownership, and preventing easy transfer to unauthorized users


Major discussion point

Digital Identity Infrastructure and Sovereignty


Topics

Digital identities | Cybersecurity | Privacy and data protection


Collaboration between private sector, government, research institutes, and standards bodies is essential for building interoperable systems

Explanation

Comparin emphasizes that no single organization can solve the complex challenges of digital identity interoperability. She advocates for a multi-stakeholder approach that brings together diverse expertise and perspectives to develop comprehensive solutions.


Evidence

Describes SIDI Hub’s ecosystem approach involving the private sector, public sector, government, research institutes, and standards bodies working together


Major discussion point

Technical Standards and Implementation Approaches


Topics

Digital standards | Infrastructure | Digital identities


Agreed with

– Abisoye Coker Adusote

Agreed on

Meeting countries at their level of readiness is essential for successful implementation


Policy mapping of different regulations across countries is crucial for establishing trust frameworks

Explanation

Comparin argues that understanding and mapping the different legal and regulatory frameworks across countries is essential for cross-border interoperability. This work helps establish how to encode trust and assurance levels into digital identity systems.


Evidence

Describes SIDI Hub’s ongoing work with universities to map regulations across more than 10 countries and derive interoperability approaches that respect local decisions


Major discussion point

Technical Standards and Implementation Approaches


Topics

Legal and regulatory | Data governance | Digital standards


Education, refugee management, and bank account opening are priority use cases identified by the international community

Explanation

Comparin explains that through extensive consultation with over 45 countries, SIDI Hub identified these three use cases as the most relevant and impactful for digital identity implementation. These represent areas where digital identity can have significant social and economic benefits.


Evidence

References SIDI Hub’s engagement with over 45 countries through summits across different continents and a community prioritization process


Major discussion point

Practical Use Cases and Implementation


Topics

Digital identities | Development | Online education


Agreed with

– Abisoye Coker Adusote
– Tor Alvik
– Moderator

Agreed on

Practical implementation through pilot programs and proof-of-concepts is necessary


Surveillance risks come from both government and private sector and need to be addressed in digital identity systems

Explanation

Comparin acknowledges that surveillance concerns are valid and can originate from multiple sources, not just government entities. She emphasizes the need to design systems that protect against surveillance while enabling legitimate identity verification needs.


Major discussion point

Privacy and Security Concerns


Topics

Privacy and data protection | Human rights principles | Digital identities


T

Tor Alvik

Speech speed

134 words per minute

Speech length

1410 words

Speech time

630 seconds

Nordic-Baltic countries demonstrate that even similar nations face significant challenges in achieving cross-border digital identity interoperability

Explanation

Alvik explains that despite having similar legal systems, high digital adoption rates, and close cooperation, the Nordic-Baltic region still struggles with cross-border digital identity implementation. This highlights the inherent complexity of interoperability even under favorable conditions.


Evidence

Describes the region’s high mobility, 90%+ digital identity adoption, similar legislation and population characteristics, yet ongoing difficulties with cross-border services since 2017


Major discussion point

Cross-Border Interoperability and Regional Integration


Topics

Digital identities | Legal and regulatory | Infrastructure


Identity matching between countries remains one of the main technical challenges for cross-border services

Explanation

Alvik identifies the core technical challenge as linking a person’s identity from their home country to their identity records in the service-providing country. This is particularly complex for long-term rights and services that may span decades.


Evidence

Provides examples of pension rights dating back 20-30 years and the complex user journey required for identity matching across national systems


Major discussion point

Cross-Border Interoperability and Regional Integration


Topics

Digital identities | Infrastructure | Digital standards


User stories and practical proof-of-concepts are necessary to make complex technical concepts understandable to decision makers

Explanation

Alvik argues that abstract discussions of standards and cryptography fail to engage policymakers, while concrete user stories and pilot programs help decision makers understand the practical benefits and challenges of digital identity systems.


Evidence

References work on large-scale pilots for the eIDAS regulation covering digital driver’s licenses, payments, and education, and notes that decision makers have ‘blank faces’ after technical discussions


Major discussion point

Technical Standards and Implementation Approaches


Topics

Digital identities | Digital standards | Infrastructure


Agreed with

– Abisoye Coker Adusote
– Debora Comparin
– Moderator

Agreed on

Practical implementation through pilot programs and proof-of-concepts is necessary


Digital wallet models that are user-centric protect citizens but create challenges for preventing fraud and misuse

Explanation

Alvik highlights the tension between privacy protection and fraud prevention in digital wallet systems. While user-centric models protect citizen privacy by enabling direct sharing between users and service providers, they make it harder to detect and prevent fraudulent activities.


Evidence

Discusses the design challenges of upcoming digital wallets that share credentials directly between users and service providers without intermediary oversight


Major discussion point

Privacy and Security Concerns


Topics

Privacy and data protection | Cybersecurity | Digital identities


N

Naohiro Fujie

Speech speed

104 words per minute

Speech length

1158 words

Speech time

665 seconds

Starting with local standards while keeping global standards in mind is essential for achieving international interoperability

Explanation

Fujie advocates for a bottom-up approach where countries first establish their own digital identity standards in accordance with local laws and regulations, while ensuring compatibility with global standards. This approach enables eventual bridging to other countries through standard technologies.


Evidence

Describes Japan’s work with Keio University to define digital credential architecture and management frameworks, and collaboration with National Institute of Informatics for academic credentials


Major discussion point

Cross-Border Interoperability and Regional Integration


Topics

Digital standards | Digital identities | Legal and regulatory


Agreed with

– Debora Comparin
– Moderator

Agreed on

Multi-stakeholder collaboration is essential for digital identity success


Digital credentials require new management policies for original, duplicate, and deliverable versions that don’t exist in the physical world

Explanation

Fujie explains that digital credentials create new categories that don’t exist with physical documents, such as deliverable credentials and the concept of digital copies. This requires developing new management policies and rules for how these different types of digital credentials should be handled.


Evidence

Details Japan’s classification system for three types of digital credentials (original, duplicate, deliverable) and the need for different management policies for each type


Major discussion point

Technical Standards and Implementation Approaches


Topics

Digital identities | Digital standards | Legal and regulatory


Japan’s student discount railway ticket system shows how complex digital credential interactions can enable simple user experiences

Explanation

Fujie describes a proof-of-concept that demonstrates how multiple digital credentials (national ID and university enrollment certificates) can work together to enable a simple user experience like purchasing discounted train tickets. This shows the potential for complex backend systems to deliver intuitive services.


Evidence

Details the demonstration project where students use digital national ID cards and university-issued enrollment certificates to purchase discounted railway tickets


Major discussion point

Practical Use Cases and Implementation


Topics

Digital identities | Online education | Infrastructure


M

Moderator

Speech speed

154 words per minute

Speech length

3195 words

Speech time

1241 seconds

Digital identity is fundamental to preventing people from being left behind in digital transformation

Explanation

The moderator emphasizes that identity verification is critical to closing the digital divide and ensuring inclusive participation in digital services. Without proper identification systems, people cannot access government services, financial services, or participate in the digital economy.


Evidence

References the workshop’s focus on ensuring ‘nobody being left behind’ and the connection between identity and digital inclusion


Major discussion point

Digital Identity Infrastructure and Sovereignty


Topics

Development | Digital identities | Digital access


Agreed with

– Jimson Olufuye

Agreed on

Digital identity is fundamental to closing the digital divide and ensuring inclusion


Disagreed with

– Jimson Olufuye

Disagreed on

Centralized vs federated architecture preferences


Cross-border interoperability requires balancing national sovereignty with regional integration benefits

Explanation

The moderator highlights the tension between maintaining national control over identity systems and enabling seamless cross-border services. This balance is essential for regional economic integration while respecting each country’s sovereign rights over citizen data.


Evidence

References the African Continental Free Trade Area proposal and the need to safeguard national sovereignty while enabling regional integration


Major discussion point

Cross-Border Interoperability and Regional Integration


Topics

Legal and regulatory | Data governance | Economic


Multi-stakeholder collaboration is essential for developing sustainable digital identity solutions

Explanation

The moderator emphasizes that successful digital identity implementation requires bringing together diverse stakeholders including government, private sector, technical community, and civil society. No single entity can solve the complex challenges of interoperable digital identity systems alone.


Evidence

References the partnership between the OpenID Foundation, SIDI Hub, and AfICTA, and the diverse panel of speakers from different sectors and countries


Major discussion point

Technical Standards and Implementation Approaches


Topics

Digital standards | Infrastructure | Digital identities


Agreed with

– Debora Comparin
– Naohiro Fujie

Agreed on

Multi-stakeholder collaboration is essential for digital identity success


Practical proof-of-concepts and pilot programs are necessary to move from theory to implementation

Explanation

The moderator advocates for actionable steps and concrete implementations rather than just theoretical discussions. Pilot programs between countries can demonstrate feasibility and build confidence before broader regional or global implementation.


Evidence

Encourages speakers to commit to concrete next steps and references the bias toward action that characterizes the panel


Major discussion point

Practical Use Cases and Implementation


Topics

Digital identities | Development | Digital standards


Agreed with

– Abisoye Coker Adusote
– Debora Comparin
– Tor Alvik

Agreed on

Practical implementation through pilot programs and proof-of-concepts is necessary


A

Audience

Speech speed

149 words per minute

Speech length

143 words

Speech time

57 seconds

National security is a primary concern for digital identity sovereignty

Explanation

Audience members identified national security as a key reason why sovereignty and interoperable digital technology are important to society. This reflects concerns about protecting critical national infrastructure and maintaining control over citizen data for security purposes.


Evidence

First response in the Mentimeter poll was ‘national security’


Major discussion point

Digital Identity Infrastructure and Sovereignty


Topics

Cybersecurity | Legal and regulatory | Digital identities


Digital identity is a fundamental right that enables individual recognition in the digital world

Explanation

Audience responses emphasized that digital identity serves as a fundamental right that allows individuals to be recognized and participate in digital society. This perspective frames digital identity as essential for human dignity and participation rather than just a technical convenience.


Evidence

Mentimeter responses included ‘Identity, fundamental right’ and ‘recognizing individuals in the digital world’


Major discussion point

Digital Identity Infrastructure and Sovereignty


Topics

Human rights principles | Digital identities | Development


Protection of critical and classified data requires robust digital identity systems

Explanation

Audience members highlighted the importance of protecting sensitive information through proper digital identity infrastructure. This reflects concerns about data security and the need for strong authentication systems to prevent unauthorized access to critical information.


Evidence

Mentimeter responses included ‘Protection of critical and classified data’ and ‘protection’ as common themes


Major discussion point

Privacy and Security Concerns


Topics

Privacy and data protection | Cybersecurity | Data governance


Personal digital sovereignty empowers individuals to control their own digital presence

Explanation

Audience responses emphasized the concept of personal digital sovereignty, suggesting that individuals should have control over their digital identities and how their information is used. This reflects a user-centric approach to digital identity that prioritizes individual agency.


Evidence

Mentimeter response specifically mentioned ‘personal digital sovereignty’


Major discussion point

Privacy and Security Concerns


Topics

Privacy and data protection | Human rights principles | Digital identities


Agreements

Agreement points

Digital identity is fundamental to closing the digital divide and ensuring inclusion

Speakers

– Jimson Olufuye
– Moderator

Arguments

Identity is critical to closing the digital divide because without identification, people cannot access services


Digital identity is fundamental to preventing people from being left behind in digital transformation


Summary

Both speakers emphasize that proper identification systems are essential for digital inclusion, as people without verified identities cannot access government services, financial services, or participate in the digital economy.


Topics

Development | Digital identities | Digital access


Data sovereignty and keeping data within national/regional boundaries is crucial

Speakers

– Jimson Olufuye
– Kossi Amessinou

Arguments

Countries need appropriate laws to protect citizen data and ensure sovereignty over their digital assets


Data centers should be located within Africa to maintain data sovereignty


Summary

Both speakers stress the importance of maintaining control over citizen data through appropriate legal frameworks and infrastructure placement, emphasizing national and regional sovereignty over digital assets.


Topics

Data governance | Legal and regulatory | Infrastructure


Multi-stakeholder collaboration is essential for digital identity success

Speakers

– Debora Comparin
– Moderator
– Naohiro Fujie

Arguments

Collaboration between private sector, government, research institutes, and standards bodies is essential for building interoperable systems


Multi-stakeholder collaboration is essential for developing sustainable digital identity solutions


Starting with local standards while keeping global standards in mind is essential for achieving international interoperability


Summary

All three speakers agree that no single entity can solve digital identity challenges alone, requiring cooperation between diverse stakeholders including government, private sector, technical community, and civil society.


Topics

Digital standards | Infrastructure | Digital identities


Practical implementation through pilot programs and proof-of-concepts is necessary

Speakers

– Abisoye Coker Adusote
– Debora Comparin
– Tor Alvik
– Moderator

Arguments

Pilot programs between neighboring countries could demonstrate cross-border interoperability before broader implementation


Education, refugee management, and bank account opening are priority use cases identified by the international community


User stories and practical proof-of-concepts are necessary to make complex technical concepts understandable to decision makers


Practical proof-of-concepts and pilot programs are necessary to move from theory to implementation


Summary

All speakers emphasize the need for concrete, actionable implementations rather than theoretical discussions, advocating for pilot programs and real-world use cases to demonstrate feasibility and build confidence.


Topics

Digital identities | Development | Digital standards


Meeting countries at their level of readiness is essential for successful implementation

Speakers

– Abisoye Coker Adusote
– Debora Comparin

Arguments

Cross-border interoperability requires meeting each country at their level of readiness due to varying infrastructure capabilities


Collaboration between private sector, government, research institutes, and standards bodies is essential for building interoperable systems


Summary

Both speakers recognize that countries have different levels of digital infrastructure development and that successful regional integration must account for these disparities while providing appropriate support.


Topics

Development | Digital access | Infrastructure


Similar viewpoints

Both Nigerian and Benin representatives demonstrate successful national digital identity implementations that integrate across multiple sectors and enable regional mobility, serving as models for broader African integration.

Speakers

– Abisoye Coker Adusote
– Kossi Amessinou

Arguments

Nigeria has successfully integrated national identity numbers with banking and telecommunications systems


Benin’s “It’s Me” card enables visa-free travel within ECOWAS and could serve as a model for broader African integration


Topics

Digital identities | Infrastructure | Development


Both speakers acknowledge the complex balance between privacy protection and security concerns in digital identity systems, recognizing that surveillance risks exist from multiple sources and that user-centric approaches create new challenges.

Speakers

– Debora Comparin
– Tor Alvik

Arguments

Surveillance risks come from both government and private sector and need to be addressed in digital identity systems


Digital wallet models that are user-centric protect citizens but create challenges for preventing fraud and misuse


Topics

Privacy and data protection | Cybersecurity | Digital identities


Both speakers from Nigeria emphasize the need for government intervention and market incentives to address affordability and accessibility challenges in digital identity adoption.

Speakers

– Abisoye Coker Adusote
– Jimson Olufuye

Arguments

Government intervention through affordable mobile phones and digital literacy programs can help bridge the digital divide


Operators need incentives to make services more affordable and increase purchasing power of citizens


Topics

Development | Digital access | Economic


Unexpected consensus

Technical complexity of digital identity interoperability even among similar countries

Speakers

– Tor Alvik
– Debora Comparin

Arguments

Nordic-Baltic countries demonstrate that even similar nations face significant challenges in achieving cross-border digital identity interoperability


Digital identity infrastructure requires maintaining the same security properties as physical documents in the digital realm


Explanation

Despite representing very different regions (Nordic-Baltic vs. global perspective), both speakers acknowledge the inherent technical complexity of digital identity interoperability, with Alvik showing that even highly developed, similar countries struggle with implementation, while Comparin explains the cryptographic and technical challenges involved.


Topics

Digital identities | Infrastructure | Digital standards


Need for regulatory harmonization while respecting national sovereignty

Speakers

– Abisoye Coker Adusote
– Debora Comparin
– Naohiro Fujie

Arguments

Regional agreements based on data sovereignty and trust frameworks are needed to enable cross-border data sharing


Policy mapping of different regulations across countries is crucial for establishing trust frameworks


Starting with local standards while keeping global standards in mind is essential for achieving international interoperability


Explanation

Representatives from Nigeria, the global CityHub initiative, and Japan all independently arrived at the same conclusion that successful interoperability requires careful mapping and harmonization of different national regulations while respecting local sovereignty – an unexpected convergence across very different contexts.


Topics

Legal and regulatory | Data governance | Digital standards


Overall assessment

Summary

The speakers demonstrated remarkable consensus across multiple key areas: the fundamental importance of digital identity for inclusion, the need for data sovereignty, the requirement for multi-stakeholder collaboration, and the necessity of practical implementation approaches. There was also strong agreement on meeting countries at their readiness levels and the importance of pilot programs.


Consensus level

High level of consensus with significant implications for global digital identity development. The agreement spans technical, policy, and implementation aspects, suggesting a mature understanding of the challenges and a convergent approach to solutions. This consensus provides a strong foundation for international cooperation and suggests that despite different regional contexts, there are universal principles and approaches that can guide digital identity interoperability efforts globally.


Differences

Different viewpoints

Data storage location and sovereignty approach

Speakers

– Kossi Amessinou
– Jimson Olufuye

Arguments

Data centers should be located within Africa to maintain data sovereignty


Federated database architecture with API connections allows countries to keep data locally while enabling cross-border sharing


Summary

Amessinou advocates for keeping all African data within Africa as a sovereignty requirement, while Olufuye proposes a more flexible federated approach where countries maintain local data but can share specific categories through APIs based on policy frameworks.


Topics

Infrastructure | Data governance | Legal and regulatory


Centralized vs federated architecture preferences

Speakers

– Jimson Olufuye
– Moderator

Arguments

Federated database architecture with API connections allows countries to keep data locally while enabling cross-border sharing


Digital identity is fundamental to preventing people from being left behind in digital transformation


Summary

Olufuye specifically advocates for federated database systems, while the Moderator emphasizes that CityHub is not opinionated on architecture, allowing countries to choose centralized, federated, or evolving approaches based on their values and policies.


Topics

Infrastructure | Digital standards | Data governance


Unexpected differences

Scope of data sovereignty requirements

Speakers

– Kossi Amessinou
– Other speakers

Arguments

Data centers should be located within Africa to maintain data sovereignty


Various arguments about federated systems and cross-border sharing


Explanation

Amessinou’s strict position on keeping all African data within Africa contrasts with other speakers’ more flexible approaches to data sovereignty. This is unexpected given the general consensus on the importance of sovereignty, but reveals different interpretations of what sovereignty means in practice.


Topics

Data governance | Infrastructure | Legal and regulatory


Overall assessment

Summary

The discussion showed remarkable consensus on fundamental goals (digital identity importance, sovereignty respect, cross-border interoperability needs) with disagreements primarily focused on implementation approaches and technical architectures. Main areas of disagreement centered on data storage requirements, system architecture preferences, and specific methods for achieving interoperability while maintaining sovereignty.


Disagreement level

Low to moderate disagreement level. The speakers demonstrated strong alignment on core principles and objectives, with differences mainly in technical implementation strategies and sovereignty interpretation. This suggests a mature discussion where fundamental concepts are accepted, but practical implementation details require further negotiation and compromise. The disagreements are constructive and focused on ‘how’ rather than ‘whether’ to proceed, which is positive for advancing the digital identity interoperability agenda.


Partial agreements



Takeaways

Key takeaways

Identity is fundamental to closing the digital divide and enabling inclusive digital transformation, as people without digital identity cannot access essential services


Cross-border digital identity interoperability requires meeting countries at their current level of technological readiness and infrastructure development


Successful implementation requires collaboration between multiple stakeholders including governments, private sector, research institutions, and standards bodies


Data sovereignty is critical – countries need appropriate laws to protect citizen data and maintain control over their digital assets, with preference for keeping data within regional boundaries


Technical architecture should be federated rather than centralized, allowing countries to maintain local data control while enabling cross-border sharing through APIs


User-centric design focusing on practical use cases (education, banking, healthcare, travel) is essential for successful adoption and public trust


Regional integration frameworks like ECOWAS and African Union digital interoperability initiatives provide pathways for scaling successful national implementations


Digital literacy and affordability remain significant barriers that require government intervention and private sector incentives to address


Resolutions and action items

Constitute a working group to identify gaps across sub-regions and regions, mapping trust frameworks and interoperability criteria


Complete policy mapping of different regulations and legislation across countries before the next IGF to show concrete results


Implement proof-of-concept pilots between neighboring countries (suggested pairings: Nigeria-Cameroon, Nigeria-Niger, Uganda-Kenya) to test cross-border interoperability


Develop actionable tools that encode legal and regulatory differences into digital identity systems to enable trust decisions


Continue CityHub’s technical working groups on use cases, policy frameworks, and technology standards with broader community participation


Leverage existing regional frameworks like AfCFTA and AU Digital Interoperability Framework for implementation


Establish intersessional work between IGF sessions to maintain momentum on practical implementation steps


Unresolved issues

How to effectively address the digital divide in rural areas with limited internet access and high connectivity costs


Balancing user privacy protection with the need to prevent fraud and misuse in digital identity systems


Resolving identity matching challenges when linking citizens across different national identity systems


Determining optimal governance models for cross-border data sharing while maintaining national sovereignty


Addressing surveillance concerns from both government and private sector actors in digital identity ecosystems


Scaling successful pilot programs to full regional or continental implementation


Harmonizing different legal frameworks and data protection regulations across countries to enable seamless interoperability


Suggested compromises

Federated database architecture that allows countries to maintain local data sovereignty while enabling cross-border sharing through agreed-upon APIs and data categories


Phased implementation approach starting with pilot programs between ready countries while supporting capacity building in less developed nations


User-centric digital wallet models that protect citizen privacy while incorporating necessary fraud prevention mechanisms


Regional agreements that respect national sovereignty while establishing minimum trust framework standards for cross-border recognition


Hybrid approach combining government-led infrastructure development with private sector innovation and affordability initiatives


Flexible technical standards that accommodate different levels of technological maturity while maintaining interoperability goals


Thought provoking comments

Identity is critical to closing the digital divide, because if you cannot identify anybody, it means the person does not really exist. And we’re talking about inclusivity. We’re talking about multi-stakeholder. We’re talking about nobody being left behind.

Speaker

Dr. Jimson Olufuye


Reason

This comment reframes digital identity from a technical infrastructure issue to a fundamental human rights and inclusion issue. It establishes the philosophical foundation that digital existence requires digital identity, making it essential for participation in modern society.


Impact

This comment set the tone for the entire discussion by establishing digital identity as not just a convenience but as a prerequisite for digital citizenship. It influenced subsequent speakers to consistently return to themes of inclusion and ensuring no one is left behind in their technical implementations.


Meet each country state at their level of readiness. So there are countries that do not have the simple digital infrastructure. There is no data connectivity, and they have no energy or very little energy. There’s very little or no digital literacy awareness created.

Speaker

Abisoye Coker Adusote (DG Nigeria)


Reason

This comment introduced crucial pragmatic realism to the discussion, acknowledging that technical solutions must account for vastly different infrastructure capabilities across African nations. It shifted focus from idealized interoperability to practical implementation challenges.


Impact

This observation fundamentally changed the discussion’s approach from assuming uniform readiness to acknowledging the need for graduated, flexible solutions. It led other speakers to emphasize starting small, pilot programs, and building foundational capabilities before attempting complex interoperability.


It would be fantastic to maintain some of those properties [of physical documents]. First of all, when you all arrive here, you probably show some form of document… They cannot track my actions and my whereabouts, and I think this is also a very important property that we should keep in mind and maintain in the digital domain.

Speaker

Debora Comparin


Reason

This comment brilliantly used the familiar experience of physical documents to explain complex digital identity challenges. It introduced the critical concept that digitization shouldn’t sacrifice privacy properties that people expect from physical credentials.


Impact

This analogy made technical concepts accessible to policymakers and addressed surveillance concerns raised by online participants. It shifted the conversation from purely technical implementation to user experience and privacy preservation, influencing later discussions about federated vs. centralized systems.


We need to ensure that our data is inside Africa. That is very important… Where will we put our data? Where do you have it? Is it in Africa? Is it outside Africa?

Speaker

Dr. Kossi Amessinou (Benin)


Reason

This comment introduced the critical dimension of data sovereignty and geographic data residency, connecting technical architecture decisions to national sovereignty concerns. It highlighted that interoperability cannot come at the cost of losing control over citizen data.


Impact

This comment elevated the discussion from technical interoperability to geopolitical considerations, influencing the conversation about federated architectures and API-based data sharing that keeps data locally while enabling cross-border functionality.


The linkage of the identity you are coming from one country to the identity you have in the country providing the service is one of the main challenges we actually focus on. We can get a digital identity to function and understand. But trying to address this is something we are now trying to work on.

Speaker

Tor Alvik (Norway)


Reason

This comment revealed a sophisticated technical challenge that goes beyond basic authentication – the problem of identity continuity across jurisdictions. It showed that even technically advanced regions struggle with fundamental interoperability issues.


Impact

This insight from an advanced digital identity region provided sobering realism about the complexity of true interoperability. It influenced the discussion toward recognizing that technical standards alone are insufficient – policy frameworks and identity matching protocols are equally critical.


Surveillance can come from both government and private sector, so both need to be addressed and monitored. Especially in this digital identity field, where we have both private and public sector involved, then it makes room for reflection.

Speaker

Debora Comparin


Reason

This comment expanded the surveillance discussion beyond the typical government surveillance concerns to include private sector surveillance, recognizing the multi-stakeholder nature of digital identity ecosystems and the need for comprehensive privacy protections.


Impact

This observation broadened the policy discussion to consider comprehensive privacy frameworks that address all potential surveillance vectors, not just government overreach. It influenced the conversation about the need for balanced approaches that prevent fraud while protecting privacy.


Overall assessment

These key comments fundamentally shaped the discussion by establishing it as a multi-dimensional challenge requiring technical, policy, and social solutions. Dr. Jimson’s opening comment about identity being critical to digital inclusion set an inclusive, human-rights-focused tone that permeated the entire conversation. DG Abisoye’s pragmatic observation about meeting countries where they are shifted the discussion from theoretical interoperability to practical implementation strategies. Debora’s physical document analogy made complex technical concepts accessible while emphasizing privacy preservation, while Dr. Kossi’s data sovereignty concerns elevated the conversation to geopolitical considerations. Tor’s insights about identity linkage challenges provided sobering realism about implementation complexity even in advanced regions. Together, these comments transformed what could have been a purely technical discussion into a nuanced exploration of digital identity as a socio-technical system requiring careful balance of inclusion, sovereignty, privacy, and practical implementation considerations.


Follow-up questions

How to achieve cross-border interoperability while respecting different national data protection laws that currently restrict such interoperability?

Speaker

Abisoye Coker Adusote


Explanation

Nigeria’s data protection act restricts cross-border interoperability, and there’s a need for regional agreements based on data sovereignty and trust to modify acts across regions to allow cross-border functionality


How to address the challenge of identity matching and linking citizens across different national identity systems for cross-border services?

Speaker

Tor Alvik


Explanation

This was identified as one of the main technical challenges in the Nordic-Baltic cooperation, where linking identity from one country to services in another country remains complex


What constitutes ‘copy’ and ‘deliverable’ credentials in the digital realm, and how should they be managed differently from original credentials?

Speaker

Naohiro Fujie


Explanation

Unlike physical documents, digital credentials have no difference between copy and original, requiring new frameworks to define and manage different types of digital credentials


How to map and harmonize different legal frameworks and regulations across countries to enable trusted cross-border digital identity?

Speaker

Debora Comparin


Explanation

Different countries have varying rules for identity verification and trust levels, requiring comprehensive mapping and actionable frameworks to enable cross-border trust


How to bridge the digital divide in rural Africa with escalating costs of internet and energy while pursuing digital identity initiatives?

Speaker

Online participant (via Stephanie)


Explanation

This addresses the fundamental infrastructure challenges that could prevent widespread adoption of digital identity systems in underserved areas


How to balance fraud prevention and surveillance concerns in user-centric digital wallet systems?

Speaker

Tor Alvik


Explanation

Digital wallets protect citizens by design but raise questions about how to tackle misuse and fraud while maintaining privacy protections


Where should African countries store their digital identity data – within Africa or outside – and what are the sovereignty implications?

Speaker

Kossi Amessinou


Explanation

Data sovereignty is crucial for African nations, requiring decisions about data center locations and control over citizen data


How to establish pilot programs for cross-border digital identity interoperability between specific country pairs?

Speaker

Abisoye Coker Adusote


Explanation

Suggested creating test pilots between border countries like Nigeria-Cameroon or Uganda-Kenya to demonstrate cross-border functionality before scaling


How to ensure digital identity systems prevent surveillance by both government and private sector entities?

Speaker

Debora Comparin


Explanation

Surveillance risks come from multiple sources and need to be addressed in system design to maintain citizen privacy rights


What working group structure and measurement criteria should be established to identify gaps and champions across African sub-regions?

Speaker

Jimson Olufuye


Explanation

Need for systematic approach to assess readiness levels, trust frameworks, and interoperability criteria across different African countries and regions


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Day 0 Event #92 Eyes on the Watchers Challenging the Rise of Police Facial


Session at a glance

Summary

This discussion focused on facial recognition technology (FRT) used by police forces and its impact on civil liberties, presented by the International Network of Civil Liberties Organizations (INCLO). The speakers outlined how FRT works by comparing facial templates from images against reference databases, emphasizing that it is a probabilistic technology prone to errors and biases. INCLO developed 18 principles to govern police use of FRT after observing widespread problems across their 17 member organizations in different countries.


The presentation highlighted several concerning real-world cases demonstrating FRT’s dangers. Robert Williams from Detroit was wrongfully arrested after being misidentified as the ninth most likely match by an algorithm, despite two other algorithms failing to identify him. The speakers noted that documented cases of misidentification disproportionately affect Black individuals, and retail chain Rite Aid was banned from using FRT after thousands of wrongful accusations between 2012 and 2020.


Three detailed case studies illustrated the principles’ importance. In Argentina, CELS successfully challenged Buenos Aires’ FRT system in court, revealing that police had illegally accessed biometric data of over seven million people while claiming to search only for 30,000 fugitives. The court found the system unconstitutional due to lack of oversight, impact assessments, and public consultation. Hungary’s recent case demonstrated FRT’s weaponization against civil liberties, where the government banned Pride parades and threatened to use FRT to identify participants, creating a chilling effect on freedom of assembly.


The discussion concluded that these cases validate INCLO’s principles, which call for legal frameworks, impact assessments, public consultation, judicial authorization, and independent oversight to protect fundamental rights while acknowledging that some organizations advocate for complete bans on police FRT use.


Keypoints

**Major Discussion Points:**


– **INCLO’s 18 Principles for Police Use of Facial Recognition Technology**: The International Network of Civil Liberties Organizations developed comprehensive principles to mitigate harms from police FRT use, including requirements for legal basis, impact assessments, public consultation, independent oversight, and prohibition of live FRT systems.


– **Technical Limitations and Discriminatory Impacts of FRT**: Discussion of how facial recognition is a probabilistic technology prone to false positives/negatives, with documented cases of wrongful arrests disproportionately affecting Black individuals, and the arbitrary nature of algorithmic matching systems.


– **Argentina Case Study – Systematic Abuse of FRT Systems**: Detailed examination of Buenos Aires’ facial recognition system that was supposed to target only fugitives but illegally accessed biometric data of over 7 million people, leading to a court ruling the system unconstitutional due to lack of oversight and legal compliance.


– **Hungary’s Weaponization of FRT Against LGBTQ+ Rights**: Analysis of how the Hungarian government banned Pride events and expanded FRT use to identify participants in “banned” assemblies, demonstrating how facial recognition can be deliberately used to suppress freedom of assembly and peaceful protest.


– **Community Engagement and Advocacy Strategies**: Discussion of the need for creative grassroots education and awareness campaigns to inform the public about FRT risks, since many people are unaware these systems exist or understand their implications.


**Overall Purpose:**


The discussion aimed to present INCLO’s newly developed principles for regulating police use of facial recognition technology, using real-world case studies from Argentina and Hungary to demonstrate both the urgent need for such safeguards and the severe consequences when proper oversight and legal frameworks are absent.


**Overall Tone:**


The tone was serious and urgent throughout, with speakers presenting factual, evidence-based concerns about facial recognition technology’s impact on human rights. The tone became particularly grave when discussing the Hungary case, highlighting the immediate threat to LGBTQ+ rights and freedom of assembly. While maintaining an academic and professional demeanor, there was an underlying sense of alarm about the rapid deployment of these technologies without adequate safeguards.


Speakers

– **Olga Cronin**: Senior policy officer at the Irish Council for Civil Liberties, member of INCLO (International Network of Civil Liberties Organizations)


– **Tomas Ignacio Griffa**: Lawyer at Centro de Estudios Legales Sociales (CELS) in Argentina, also an INCLO member


– **Adam Remport**: Lawyer at the Hungarian Civil Liberties Union, also a member of INCLO


– **Audience**: Multiple audience members including Pietra from Brazil who is part of a project doing community activations about facial recognition in police use


– **June Beck**: Representative from Youth for Privacy


– **MODERATOR**: Workshop moderator (role/title not specified)


**Additional speakers:**


– **Victor Saavedra**: INCLO’s technologist (mentioned as joining online but no direct quotes in transcript)


– **Timalay N’Ojo**: Program manager of INCLO Surveillance and Digital Rights Pillar of Work, based at the Canadian Civil Liberties Association in Toronto (mentioned as joining online but no direct quotes in transcript)


Full session report

# INCLO Workshop on Facial Recognition Technology and Civil Liberties


## Executive Summary


This workshop at the Internet Governance Forum, presented by the International Network of Civil Liberties Organizations (INCLO), examined the threats that facial recognition technology (FRT) poses to fundamental human rights. The discussion featured presentations from civil liberties lawyers across three jurisdictions—Ireland, Argentina, and Hungary—who demonstrated how FRT systems are being abused by law enforcement agencies worldwide. The speakers presented INCLO’s newly developed 18 principles for governing police use of FRT, supported by case studies from Argentina and Hungary that illustrated both the urgent need for such safeguards and the consequences when proper oversight is absent.


## INCLO Network and Participants


Olga Cronin, Senior Policy Officer at the Irish Council for Civil Liberties, opened by introducing INCLO’s global network of 17 member organizations, including the ACLU in the United States, Egyptian Initiative for Personal Rights, Contras in Indonesia, CELS in Argentina, and the Hungarian Civil Liberties Union. The workshop included both in-person and online participants, with Victor Saavedra and Timalay N’Ojo joining virtually.


## Technical Foundation and Problems of Facial Recognition Technology


### How Facial Recognition Operates


Cronin explained that facial recognition is a biometric technology using artificial intelligence to identify individuals through facial features. The system creates mathematical representations from images, which are compared against reference databases. Crucially, FRT is fundamentally probabilistic rather than definitive, relying on threshold values that create trade-offs between false positive and false negative rates.
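The threshold trade-off described above can be illustrated with a toy sketch. Everything here is a hypothetical assumption for illustration only (the identities, vectors, scoring function, and thresholds are invented, not any vendor's actual algorithm): a probe template is scored against each entry in a reference database, and only scores at or above a fixed threshold come back as candidates.

```python
# Illustrative sketch of threshold-based matching; all data and names
# are hypothetical. Real systems compare high-dimensional biometric
# templates, not three-number lists.
import math

def cosine_similarity(a, b):
    """Similarity score between two face templates (feature vectors)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def candidate_list(probe, reference_db, threshold):
    """Return (identity, score) pairs at or above the threshold, best match first."""
    scored = ((name, cosine_similarity(probe, tpl)) for name, tpl in reference_db.items())
    hits = [(name, s) for name, s in scored if s >= threshold]
    return sorted(hits, key=lambda pair: pair[1], reverse=True)

# Hypothetical reference database of known identities.
reference_db = {
    "person_a": [0.90, 0.10, 0.30],
    "person_b": [0.20, 0.80, 0.50],
    "person_c": [0.85, 0.20, 0.35],
}
probe = [0.90, 0.10, 0.31]  # probe image template, closest to person_a

# A stricter threshold shortens the candidate list (risking false
# negatives); a looser one lengthens it (risking false positives).
# No single value eliminates both kinds of error.
strict = candidate_list(probe, reference_db, threshold=0.995)
loose = candidate_list(probe, reference_db, threshold=0.95)
```

The point of the sketch is that the officer receives a ranked candidate list whose length and contents depend entirely on where the threshold is set, which is why a match is a probabilistic lead rather than an identification.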


### The Robert Williams Case


The arbitrary nature of FRT was illustrated through the Detroit case of Robert Williams, who was wrongfully arrested after an algorithm identified him as the ninth most likely match for a shoplifting incident. However, there were two other algorithms run that produced different results—one returning 243 candidates that didn’t include Williams, and another returning no results. Despite these contradictory outputs, Williams was still arrested, demonstrating what Cronin called “the arbitrariness of this and it’s not this silver bullet solution that it’s often presented to be.”


### Documented Bias and the Rite Aid Case


The workshop highlighted that documented misidentification cases disproportionately affect Black individuals. Cronin referenced Rite Aid, a retail chain banned from using facial recognition after making thousands of wrongful accusations between 2012 and 2020, demonstrating the systemic nature of these problems.


## INCLO’s 18 Principles Framework


### Development and Core Requirements


INCLO developed 18 principles based on experiences across member organizations in different countries. The principles establish fundamental requirements including: sufficient legal basis through proper legislative processes, prohibition on using FRT to identify protesters or collect information on peaceful assemblies, and mandatory fundamental rights impact assessments prior to implementation.


A critical component is the clear prohibition on live FRT systems, which Cronin described as “a dangerous red line.” Live FRT involves real-time identification in public spaces, creating unprecedented mass surveillance capability.


### Oversight and Accountability


The principles mandate independent oversight bodies with robust monitoring powers and mandatory annual reporting. They also require comprehensive documentation of FRT use, including detailed records of deployments, database searches, and results obtained.


Cronin acknowledged jurisdictional differences, noting that while many INCLO members would prefer complete bans on police FRT use, “we know that that fight has been lost in certain jurisdictions,” necessitating strong safeguards where prohibition isn’t achievable.


## Argentina Case Study: Systematic Abuse


### Background and Scope Creep


Tomas Ignacio Griffa from Centro de Estudios Legales Sociales (CELS) presented Buenos Aires’ facial recognition system implemented in 2019. Initially claimed to target only 30,000 fugitives from justice, court proceedings revealed the system had actually conducted consultations about more than seven million people, with over nine million total consultations recorded.


This massive discrepancy demonstrated that “the Buenos Aires police and perhaps other offices were accessing this biometric data for other purposes, entirely different from searching for fugitives.” The system accessed databases including CONARC and RENAPER without proper authorization.


### Legal Violations and Constitutional Ruling


The system operated without proper legal authorization, lacked oversight mechanisms, and had no procedures for documenting or controlling access. Information was manually deleted, preventing audit trails. The Argentine court ultimately ruled the system unconstitutional, finding violations of fundamental rights and legal requirements.


An ongoing issue involves the government’s refusal to disclose technical details, claiming trade secrets, which prevents proper assessment of bias and discrimination in the system.


## Hungary Case Study: Targeting LGBTQ+ Communities


### Political Context and Deliberate Weaponization


Adam Remport from the Hungarian Civil Liberties Union described how Hungary’s FRT system, existing since 2016, was weaponized against LGBTQ+ communities. After passing legislation banning “LGBTQ+ propaganda,” the government banned Budapest’s Pride parade and expanded FRT use to cover all petty offences.


The government actively communicated that participants in banned assemblies would be identified through facial recognition and fined, creating a deliberate chilling effect. As Remport explained, FRT was “actively used to discourage people from attending demonstrations.”


### Lack of Transparency as a Weapon


Remport identified how “the lack of transparency, the lack of knowledge that people have of what is going to actually happen to them is also as discouraging as the outright threats of using FRT.” This strategic opacity creates self-censorship and suppresses democratic participation.


He noted that public awareness was minimal because “people never really cared about FRT, because they didn’t actually know that it existed, precisely because of the lack of communication on the government side.” By the time awareness emerged, “the system already exists, with the rules that we have now, and which can be abused by the police and the government.”


## Human Rights Implications


### Multiple Rights Affected


Speakers emphasized that FRT affects multiple human rights simultaneously: human dignity, privacy, freedom of expression, peaceful assembly, equality, and due process. Cronin described how the technology turns people into “walking licence plates,” creating unprecedented tracking capabilities.


### Targeting Marginalised Communities


A recurring theme was FRT’s systematic use against marginalised communities. Cronin noted the technology is being used against Palestinians, Uyghur Muslims, and protesters worldwide, while case studies demonstrated targeting of Black individuals and LGBTQ+ communities.


## Community Engagement and Advocacy


### Public Awareness Challenges


Speakers identified lack of public awareness as a significant challenge. The strategic use of opacity prevents communities from understanding surveillance systems affecting them.


### Creative Approaches


In response to Pietra from Brazil’s question about community activation regarding facial recognition in police use, Cronin emphasized the importance of creative grassroots approaches through local artists and community organizations for building public awareness and resistance.


### Jurisdictional Variations


Speakers acknowledged different jurisdictions require different advocacy strategies. Some organizations advocate for complete bans, others focus on strong regulatory frameworks where prohibition isn’t politically feasible.


## Audience Questions and Emerging Issues


June Beck from Youth for Privacy raised concerns about laws banning face masks in public spaces as responses to citizens protecting themselves from FRT surveillance, highlighting the “arms race” between surveillance technology and privacy protection measures.


Questions about effective community education strategies revealed ongoing uncertainty about building public awareness and resistance to FRT deployment.


## International Precedents


Cronin mentioned the Bridges case in the UK, where Liberty successfully challenged South Wales Police’s use of automatic facial recognition, demonstrating that legal challenges can succeed when proper procedures aren’t followed.


## Conclusions


The workshop demonstrated that FRT poses serious threats to fundamental human rights across diverse jurisdictions. The case studies from Argentina and Hungary validated INCLO’s 18 principles by showing real-world consequences when safeguards are absent. Success in Argentina and ongoing resistance in Hungary provide models for advocacy strategies, while INCLO’s principles offer frameworks for ensuring any FRT deployment respects basic human rights and democratic values.


The speakers conveyed urgency about addressing FRT deployment before systems become further entrenched, emphasizing that coordinated civil society action can achieve meaningful victories in protecting democratic freedoms.


Session transcript

Olga Cronin: Hi everyone and thanks a million for joining us here today and INCLO is very happy to be here and very grateful to the organizers of IGF. INCLO stands for the International Network of Civil Liberties Organizations and it’s a network of 17 national civil liberties and human rights organizations worldwide, with member organizations across the global north and south that work together to promote fundamental rights and freedoms. I won’t mention all 17 members, to save time, but they include ACLU in the US, the Egyptian Initiative for Personal Rights in Egypt, Contras in Indonesia, the Association for Civil Rights in Israel and Liberty in the UK, and we welcome two new members just this year, ALHAC based in the West Bank and Connect Us in Brazil. We also have member organizations in Ireland, Hungary and Argentina, which is why we are here today. My name is Olga Cronin, I’m a senior policy officer at the Irish Council for Civil Liberties, which is a member of INCLO, and Adam Remport there, on my far right, is a lawyer at the Hungarian Civil Liberties Union, also a member of INCLO, and on my right is Manuel Tufro, a lawyer at the, excuse my Spanish, Centro de Estudios Legales Sociales in Argentina, otherwise known as CELS, also an INCLO member, and we are also joined online by INCLO’s technologist Victor Saavedra and Timalay N’Ojo, the program manager of INCLO Surveillance and Digital Rights Pillar of Work who is based at the Canadian Civil Liberties Association in Toronto. Most people in this room probably already know what FRT is but just very, very briefly. Facial recognition is a biometric technology and it uses artificial intelligence to try and identify individuals through their facial features.
Generally speaking, FRT works by taking a face print or biometric template of a person’s face from an image, which could be sourced from CCTV or social media or body-worn cameras, and comparing that template of an unknown person against a database of stored face prints or biometric templates of people whose identity is known. The image of the unknown person would generally be called a probe image and the database of stored biometric templates of known people would generally be called a reference database, and if you’re wondering what kind of reference databases of stored biometric facial templates are used by police, you can think of passport databases or driver’s license databases or police mugshot databases. So these systems are built on the processing of people’s unique biometric facial data, the unique measurements of your face; you can compare it to DNA or iris scans or your fingerprint biometric data. Very quickly, there are three points I’d like to make about FRT, in terms of the live and retrospective use of FRT but also the threshold values that are fixed for probable matches and the fact that it’s a probabilistic technology. So real-time or live facial recognition involves comparing a live camera feed of faces against a predetermined watch list to find a possible match that would generate an alert for police to act upon. Retrospective basically means comparing still images of faces of unknown people against a reference database to try and identify that person. Now the European Court of Human Rights and the Court of Justice of the European Union have viewed live and real-time use of FRT as more invasive than retrospective use, but it should be said that tracking a person’s movements over a significant length of time can be as invasive, if not more invasive, than one instance of real-time identification. For an FRT system to work there’s a threshold value fixed to determine when the software will indicate that a match or a possible match has occurred.
Should this be fixed too low or too high, it can respectively create a high false positive rate or a high false negative rate. There is no single threshold that eliminates all errors. So when you think about what a police officer will get in their hand afterwards, if they use FRT, they will essentially get a list of potential candidates. Person A with a percentage score next to them, a similarity score. Person B with another similarity score. How long this list could be is anyone’s guess because it largely depends on the reference database and a number of other factors. Just very quickly, I’ve included this picture of a man called Robert Williams from Detroit. This is what’s called an investigative lead report from the Michigan State Police in respect of Robert Williams, a father of two who was wrongfully arrested and detained after he was misidentified as a shoplifter by FRT in January 2020. We could do a whole session on Robert’s case but I just thought it was interesting to show the probe image that was used in his case. You can see it there, it’s a CCTV still, and the picture on the right is of Robert’s driver’s license picture. You’ll also see, forgive the slide, it’s just popped over different fonts, apologies, but basically it’s important to note that Robert was arrested after an algorithm identified him as the ninth most likely match for the probe image, but there were two other algorithms run. One returned 243 candidates, Robert wasn’t on that list, and another returned no results at all, and yet he was still arrested and detained. So really the point of this is just to show the arbitrariness of this and it’s not this silver bullet solution that it’s often presented to be. And there are an increasing number of people who have been wrongly accused due to FRT and you’ll notice that all the people in these images are people who are black.
Most are from the States; Sean Thompson and Sarah, not her real name, are from the UK, and there are increasing numbers of these misidentifications happening all the time. In 2023 the US Federal Trade Commission banned the retail pharmacy chain Rite Aid from using FRT in shops because, between 2012 and 2020, thousands of people were wrongfully accused of being shoplifters and told to leave stores, predominantly people who were black, and these were all misidentifications. So with FRT there’s an immediate danger of misidentifications, it’s unreliable, it has this bias and discriminatory aspect, but there are also larger, longer-term consequences, and that is this mass surveillance concern. FRT gives police a seismic shift in this kind of surveillance power, it turns us into walking license plates and it tilts that power dynamic further into the hands of police. So, you know, we know and we’ve heard of the use of FRT against Palestinians, we know and have heard of the use of FRT against Uyghur Muslims and protesters in Russia, but the most recent situation regarding the use of FRT, that’s been in the news at least, is the use of FRT this weekend at Pride in Hungary, which Adam will talk to in a bit. This is just a brief slide to outline the different human rights that are affected by FRT at a minimum: the right to dignity, privacy, freedom of expression, peaceful assembly and association, equality and non-discrimination, rights of people with disabilities, the presumption of innocence, the right to effective remedy and the right to fair trial and due process. In INCLO, as I said, we have members in 17 jurisdictions and, over the last number of years, since we brought out a report about the emerging issues with FRT in 2021, we could see that this is becoming a significant issue.
We knew about the biometric database of Palestinians, we could see our member organization in Russia bring a case to the European Court of Human Rights over Russia’s use of FRT against protesters. There have been wrongful arrests in Argentina, which my friend Tomas will talk about. There was the famous Bridges case in the UK, the Clearview AI scandal in Canada, all of these various aspects, and essentially we stood back and we thought: in many of these jurisdictions there is no legislation to underpin this use of FRT. Different jurisdictions have different data protection rights or privacy rights, or perhaps none at all, and essentially we could see how patchwork it was. Different organisations within our membership were calling for different things, some were calling for bans, some were calling for moratoriums, some were calling for legislation, and so what we decided to do was to create a set of principles for police use of FRT in the hope that it could mitigate some of these harms.
I won’t stay too long on it but basically our methodology was we just created a steering group within the network, we met obviously throughout, we agreed what information we needed, we surveyed our members to find out what actually information there is available in their jurisdictions, we agreed on the harms and risks and we looked at the cases that were coming through, we looked at, you know, obviously media stories as well, not everything you know ends up in court and then we agreed upon a set of questions that we felt, we feel should always be asked when it comes to police use of FRT and essentially the principles are an answer, our attempt at answering those questions and we did have some expert, great expert feedback with a number of experts, academics and otherwise and we did that virtually and in person and essentially these are the principles, I don’t want to keep, take up all the time but essentially there are 18 principles and the first principle is about a legal basis and essentially what we’re saying here is that any interference with the right and FRT interferes with many rights as I mentioned earlier must have a legal basis and that legal basis must be of sufficient quality to protect against arbitrary interferences. We say that they cannot, we say that police cannot use FRT unless there is a legal basis. We also say that it should never be used in certain circumstances and that includes it should not be used to identify protesters or collect information on people attending peaceful assemblies which is very pertinent to what Adam is going to talk about. The second principle concerns mandatory fundamental rights impact assessment, so here we’re saying that the police need to carry out a series of impact assessments with respect to all fundamental rights prior to any new use of FRT and we’re saying that these assessments must include an assessment of the strict necessity and proportionality of the FRT use. 
We have copies of the principles here if anyone would wish to go through them in more detail, they are quite detailed, so I won’t go into detail of each of them, but obviously those assessments, we’re saying they must explicitly outline the specific parameters of use, who will use it, who it will be used against, where it will be used, why it will be used and how it will be used, the rights impacted, the nature and extent of the risks, how those risks will be mitigated and a demonstrated justification for how and why the benefits of the deployment will outweigh the rights impacts and the remedy available to someone who is either misidentified or whose biometric data was processed when it should not have been, which will speak to Tomas’s point in a minute. Principle three is about the fundamental rights impact assessments that I just mentioned having to be independent of the vendor assessment. It’s not enough for a vendor to say that this is X and this is Y and everything is OK, and I’d like to mention here that Bridges case, the Court of Appeal case in the UK, which our colleagues Liberty took, because in that case the Court of Appeal held that the public sector equality duty under the Equality Act there requires public authorities to give regard to whether a policy could have a discriminatory impact, and essentially in that case it was held that the South Wales Police had not taken reasonable steps to make inquiries as to whether or not the FRT algorithm the police was using risked bias on racial or sex grounds. And the court actually heard from a witness who was employed by a company specialising in FRT and he said that these kinds of details are commercially sensitive and cannot be released, and we hear this a lot.
But in the end the court held that while that was understandable, it wasn’t good enough, and it determined the police never sought to satisfy themselves either directly or by way of independent verification that the software didn’t have an unacceptable bias. Principle four is no acquisition or deployment of any new FRT without a guarantee of future independence from the vendor. So this is about vendor lock-in, this risk that a customer would be at risk of not being able to transition to another vendor. Principle five is saying that all versions of all assessments must be made public before the deployment. Principle six is about the obligation of public consultation and here we’re saying that before any policing authority deploys FRT it must hold meaningful public consultation. Principle seven, authorities must inform the public how probe images are used in FRT operation. Principle eight is about the technical specifications of any FRT system and how they must be made public before any deployment. Principle nine is that live FRT should be prohibited. We do believe that live FRT is just too dangerous and should be banned, it is a red line. But as I said before, retrospective FRT can be just as dangerous. Principle ten is about mandatory prior judicial authorisation. Eleven is about record of use and here we’re saying that the police must document each and every FRT search performed and provide this documentation to the oversight body. I haven’t mentioned it yet but principle sixteen provides for an independent oversight body. Principle twelve ensures that an FRT result alone would not be sufficient basis for questioning. And then obligation to disclose, there should be mandatory disclosure of the details of the FRT operation applied against individuals. Principle fourteen, any FRT misidentification of a person must be reported and there should be mandatory annual reporting by authorities of those misidentifications in principle fifteen.
Principle sixteen is the independent oversight body that I mentioned before. Under principle seventeen, that oversight body must publish annual reports. Principle eighteen is that the impact assessment must be made available to the oversight body before the system is employed. I need to move on very quickly to hand this over to Tomas, but basically the aim of the principles is both to help reduce FRT harms and to empower civil society and the general population to step forward, ask the right questions, push back and advocate for safeguards with a clear understanding of these technologies. We hope that the information can be used to voice our opposition but also as an advocacy tool when debating and discussing FRT with law and policy makers. So for now I will pass it over to Tomas, who can speak to the situation in Argentina and how it relates to the principles.


Tomas Ignacio Griffa: Thank you very much Olga, hello everyone. So I’m going to be talking a little bit about our experience in Argentina at CELS regarding FRT. We’ve been working since 2019 in a litigation against the implementation of facial recognition technology in the city of Buenos Aires. I think this case provides a very interesting example regarding the importance of the ideas behind the principles that Olga was explaining just a moment ago. So very briefly I’m going to talk about how the facial recognition system in the city of Buenos Aires works and what its legal framework looks like. I’m going to talk about what the process in which we questioned the constitutionality of the system was like. I’m going to explain the principles that were set forth in the ruling by the local judges and finally I’m going to talk a little bit about how all this highlights the relevance of the principles that we were talking about. So first regarding the system, the fugitive facial recognition system in the city of Buenos Aires, or Sistema de Reconocimiento Facial de Prófugos in Spanish, was implemented in the city of Buenos Aires by a ministerial resolution in April 2019. According to the resolution the system was to be employed exclusively to look for fugitives, that is to say people with pending arrest warrants, and exceptionally for other tasks specifically mandated by judges in individual cases. The system worked with the National Fugitive Database, the CONARC in Spanish, which provided the identities of the people that had to be searched for, that is to say the fugitives, and with the National Identity Database, the Registro Nacional de las Personas or RENAPER in Spanish, which was supposed to provide the biometric data regarding the people that had to be searched for, the pictures of these people, the fugitives. The system was operated by the local police and in 2020 the local legislative branch sanctioned a statute that provided a legal basis for the system.
So, regarding the case, it was a constitutional protection procedure or AMPARO in Spanish. It was started by the Argentinian Observatory of Informatic Rights, another Argentine NGO, and CELS also took part in the case. The case started with a focus on what research on facial recognition technology around the world has repeatedly shown, that is to say, the risk of mistakes and wrongful identifications, racial and gender biases, impacts on the right to privacy and so on. However, as the judge started gathering information about the system, it became quite clear that there was another big problem, which was its practical implementation. So, as I said, the facial recognition system was intended to work by crossing data between a national database of fugitives and wanted people, which consists of some 30,000 names, and the biometric data gathered by the National Identity Database. So, the National Identity Database was supposed to provide the biometric data on those 30,000 people. However, when the judge asked the National Identity Database how many individual consultations the government of the city of Buenos Aires had made, it turned out that the government had made more than nine million consultations in total, regarding more than seven million people. So, clearly the Buenos Aires police and perhaps other offices were accessing this biometric data for other purposes, entirely different from searching for fugitives, and to this day we do not know exactly how and why this data was accessed. During the process, during the trial, a group of experts performed an audit of the system. They found that thousands of people had been searched by the facial recognition system without any legal basis, that is to say people who were not fugitives.
They also found that information regarding the use of the system had been manually deleted in such a way that it was impossible to recover it, and they found also that it was impossible to trace which specific public officers had operated the system. So, with all this, the local judge ruled that the facial recognition system employed by the city of Buenos Aires was unconstitutional. She found that the system had been implemented without complying with the legal provisions for the protection of the constitutional rights of citizens. In the ruling she also details that the legislative commission that was supposed to oversee how the system worked had never been created, that the other local body, the Defensoría del Pueblo in Spanish, which was supposed to audit the system as well, was not provided with the information it needed to perform this task, that there were no previous studies to ascertain the impact of the system on human rights, and that there were no instances of public participation prior to the introduction of the system. The ruling also explained that, as the court-appointed experts showed, it was proven that the system was illegally employed to search for people who did not have pending arrest warrants, and as I said before, local statutes provide that this was the only way the system could be employed. The ruling also held that local authorities had illegally accessed the biometric data of millions of people under the guise of employing this system. The local chamber of appeals affirmed this decision and also added that the implementation of the system had to be preceded by a test performed by experts to ascertain if the software has a differential impact on people based on their race or gender. And finally, very briefly, I’m going to talk about the latest developments in the case.
This order to perform a test to ascertain whether the system has a differential impact on people based on race or gender is still being carried out to this day. The government wanted to do a sort of black-box test by selecting a number of people and testing the system on them. Our position here is that a test of this kind is not enough, and that it is necessary for the government to disclose the technical details of the software and the datasets with which the software was trained. The government's position is that this information is a trade secret belonging to the company that provides the software, so this debate is still ongoing. And finally, going back to the principles: the case predates the principles, as it started in 2019, but I think it is a very good example of the relevance of the ideas behind the principles and of the possible consequences of ignoring them. I mean, the serious irregularities that the judge found in the implementation of the facial recognition system are, we could say, the exact opposite of what the principles stand for. So, very briefly, before giving the floor to Adam: thousands of people were searched for using the system without any legal basis, directly against the ideas set forth in principle number one. The system was implemented without any prior assessment of its impact on fundamental rights; this brings our attention to principle number two. There were supposed to be two oversight bodies according to the legislation establishing the facial recognition system. This looked great in theory; however, in practice, one of them was never even created, and the other was not provided with the information it required to perform its function. This obviously relates to principle 16. No public consultation took place before introducing and employing the facial recognition system, which, of course, goes against principle number six. 
The use of the system was not properly documented: information was manually deleted, it could not be recovered, and it was not possible to tell which public officers had performed each operation. This, of course, relates to principle 11. And the latest developments I was talking about, regarding how the test ordered by the Chamber of Appeals will be carried out, highlight the importance of being able to access technical information regarding the system, such as the source code, the data employed to train the algorithm, and so on. This relates to principles 8 and 13. Thank you very much.


Olga Cronin: Thanks a million, Thomas. I think it's safe to say that what you just described is exactly that: had the principles been complied with, or known about beforehand, it could have been a different scenario. It shows how things can go very wrong. Speaking of how things can go wrong, we're now going to turn to Hungary, and Adam is going to talk about the recent legislative change there that has effectively banned Pride this weekend and also allows the police to use FRT to identify people who have defied that ban. So over to you, Adam.


Adam Remport: Thank you very much for having me. I would like to present to you a case which may help demonstrate the practical problems with facial recognition, the ones that are often formulated quite abstractly but which have very real consequences: the case of the Hungarian government essentially banning the Pride parade. So the background of the case is that Pride is not a new event; it has been held since 1995. But in February of this year the Prime Minister said that it would be banned because that was, in the government's view, necessary for child protection. So new laws were enacted. They essentially banned any assembly "displaying or promoting homosexuality", and another law made it possible for facial recognition technology to be deployed for all petty offenses. I will tell you more about what petty offenses are in this context. The legal background of the case is that Hungary has had a facial recognition technology act since 2016. It established a facial recognition database which consists of pictures from IDs, passports and other documents with facial images on them. Specific authorities can request facial analysis in certain specified procedures from the Hungarian Institute of Forensic Sciences, which is responsible for operating the facial recognition system. A novelty of Hungarian FRT use was that in 2024, FRT was made available to the police in so-called infraction procedures, and in 2025, this was extended to all infraction procedures. The reason why this is important is that participating in a banned event, an event that had been previously banned by the police, is an infraction. So if demonstrators gather at the Pride event after it has been banned by the police, they would collectively commit infractions, probably in the tens of thousands. So let's find out how the FRT system actually works in this scenario. 
The police are known to record demonstrations, and they can use CCTV and other available sources to gather photographs or images of demonstrations. If they find that an infraction is happening, they can initiate an infraction procedure and, in the course of that procedure, send the facial images to the central FRT system, which runs an algorithm and identifies the closest five matches. These are then returned to the police, and it is the police officer operating the system who has to decide whether there is a match or not. I have to point out that this system has never been used en masse, so it has never been used against tens of thousands of people, and it is not known how the system will handle this kind of situation technically, or how the capacities of the judiciary and the police will handle it operationally. So what we can tell about the case is that the first of the FRT principles is that it must be ensured that FRT is not used to identify protesters or collect information on people attending peaceful assemblies. This first principle is immediately violated by this kind of FRT use against peaceful demonstrators. Another principle is that certain uses are banned outright, such as the use of an FRT system on live or recorded moving images or video data. We can see why it is a problem that the police record entire demonstrations. They do not even necessarily have to follow a demonstration with live facial recognition. For the chilling effect to take place, it is enough to record everyone taking part in the demonstration, later systematically find everyone in the police's recordings, and then send fines to them, which is probably how this will play out in Hungary, or at least how the government plans for it to play out. It is an interesting case study of the lack of transparency around facial recognition. 
One of my conclusions will be that FRT in this case is actively used to discourage people from attending demonstrations, but the lack of transparency, the lack of knowledge people have about what is actually going to happen to them, is just as discouraging as the outright threats of using FRT. In the case of the Hungarian system, we can tell that there was no public consultation whatsoever before the introduction of the entire system in 2016. Facial recognition as such was introduced in Hungary without any public consultation and without it being communicated to the public, which means that there has been no public awareness of even the existence of the FRT system, up until now, when the situation has gotten worse. There was no consultation with the public, no data protection impact assessment, and no consultation with the data protection authority before the present broadening of the scope of FRT, which of course includes this massive crackdown on the freedom of assembly. This also violates one of the principles. It can also be said that there are no records of use, no statistics available that would tell you how the system works, how effective it is, or against which kinds of infractions it is deployed, and persons prosecuted can almost never find out whether FRT has been used against them or not. There are no impact assessments before individual uses of the system, which means that the police can simply initiate these searches without assessing the possible effects on the person concerned. This is also against the principles. There is no vendor lock-in assessment either, which is also important because the Hungarian Institute of Forensic Sciences, which operates the system, explicitly said that they were only clients using this facial recognition algorithm, which raises the question of whether the data are being transferred to third countries or not. 
And of course, since there are no risk assessments, they have not been published either, which also goes against the principles. A lack of sufficient oversight is also worth mentioning. There is no prior judicial authorization. This is important because, as I have told you, it is necessary to start an infraction procedure before FRT can even be deployed, and the law is not clear about how many people can be covered by one infraction procedure at the same time. The system has never really been used against more than three or four people at once, which makes sense, but it has never been used on a scale of tens of thousands of people. A prior judicial authorization could act as a check on this kind of mass surveillance, if a judge could assess whether it was necessary to surveil tens of thousands of people at the same time, but that is not possible. There is no independent oversight body either, and of course there are no annual reports and no notification of impact assessments to the oversight body, since the oversight body does not exist. So all of these go against the provisions in a very concrete manner, so that you can see that these provisions are not just abstract rules: when they are not complied with, there are actual harms in real life. My conclusion would be that what we can see is a weaponization of facial recognition technology; instead of mitigating the risks, there is a deliberate abuse of FRT's most invasive properties. Essentially, the government actively communicated that facial recognition would be used against people, that they cannot hide because they will be found with facial recognition, and that they will be found; it is inevitable. This of course has a massive chilling effect on the freedom of assembly. We could also say that even the lack of transparency is, in a way, weaponized, because if there is no information on the system, it is impossible for people to calculate the risks. 
This will have a chilling effect on them, because they will not know whether it is true that they will all actually be found and fined. So, I would like to conclude here. Some possible next steps are the legal avenues that can be taken, like the Law Enforcement Directive in the EU or the AI Act. And INCLO's principles can, I think, also be used in advocacy at international fora. Thank you.


Olga Cronin: Thanks, Adam. If you don’t mind, I might just ask a follow-up question. Given the situation that’s happening in Hungary, and given it’s so imminent this weekend, and it has got such international attention, how have the people in Hungary, how are they feeling? What’s the public opinion about the use of FRT? Has it changed? Did people care before, or how is it now?


Adam Remport: Well, people, I think, never really cared about FRT, because they didn’t actually know that it existed, precisely because of the lack of communication on the government side. So, the situation had to become this bad and severe for the people to start to even care about the problem. But now, the system already exists, with the rules that we have now, and which can be abused by the police and the government. So, many are concerned now, but proactive communication should have been necessary on the government’s part.


Olga Cronin: I just wonder if we have any questions.


Audience: Hi. Can you hear me?


Olga Cronin: Yes.


Audience: I’m from Brazil. My name is Pietra. The situation in Brazil with facial recognition is growing really fast. I think it’s very similar to what is happening in Argentina. But I was really shocked with the Hungarian case. And I’m part of a project that is trying to do some community activations about facial recognition in use in police. So, I wanted to hear from you if you’ve ever done something with community activation. And also, I wanted to ask if you believe that there is a way to use facial recognition, or if you think that it should be banned. Because in Brazil, we are discussing a lot about banning all systems that use facial recognition. So, I wanted to listen from you guys, what you think about it. Thank you.


Olga Cronin: Thank you. I can have a go at answering some of those questions. I think the idea of getting into communities and doing that education awareness piece, which is I think what you’re talking about, and maybe activating them or stirring them into taking action, is really, really important. Mainly because of the same issue that Adam just mentioned. You know, people don’t really understand it, don’t really know about it. And then when people are talking about it, people in position of authority speak of it as a silver bullet solution. With this, you know, there’s no problems. It’s like control and F, there’s no issues. And just can absolutely downplay the risks. So, I think you have to get creative. I think you have to get creative with maybe local artists. ICCL created a mural with a local artist in Dublin to highlight the dangers of FRT. But it’s also kind of getting, looping in with other civil society organizations who might not work in this space. And getting down to that kind of grassroots level. I think you just have to kind of get imaginative. You’re trying to get the word out there. And I think, you know, use all the tools available to you that you would use in general for communications. I think when it comes to a ban, ICCL, or sorry, INCLO rather, like I said, we have 17 members in 17 different jurisdictions. There are already 17 different sets of kind of safeguards and protections there in place. Some people are calling for a ban. Some people are calling for a moratorium. And other people are calling for, or other groups are calling for legislation. It really is specific to the jurisdiction and what’s happening there. But what we do know is that the risks and the harms are present. They’re pressing. That mass surveillance risk and how this can be quickly deployed against us is clear and obvious. So from many of our perspectives in INCLO, we would call for a ban. We don’t wish the police to use it. 
But we know that that fight has been lost in certain jurisdictions. So this is an attempt to try and make it better, at least. I hope that helps.


June Beck: Hello, my name is June Beck from Youth for Privacy. Since we're talking about facial recognition technologies, there has also been a lot of movement to penalize wearing masks in public as an attempt to protect yourself against facial recognition technology. So I was wondering if INCLO or any organization has thoughts, processes, or any kind of discussions on how the banning of face masks, for example, is in conversation with FRT.


Olga Cronin: We're out of time, but briefly, I would say that that's happening. More and more laws are being passed to ban face masks at protests. It's on the cards in Ireland as well, and the law is changing to be more restrictive in England. It is happening, and it's impossible to see how that's not a response to the use of FRT by police, and then the response of the public to cover their faces. So it's not something that we've worked on specifically yet, but it's absolutely something that we are working on individually, if you like. Thank you.


Olga Cronin: Thanks a million. Sorry, we've gone over time. We're very happy that you joined us. We hope that you enjoyed it and that you found it insightful. And we have hard copies of the principles if you wish. Thank you very much. Goodbye.


MODERATOR: Workshop two.



Olga Cronin

Speech speed

161 words per minute

Speech length

3253 words

Speech time

1206 seconds

FRT is a biometric technology using AI to identify individuals through facial features by comparing face prints against databases

Explanation

Cronin explains that facial recognition technology works by comparing a face print or biometric template from an image (probe image) against a database of stored face prints of known people (reference database). The technology uses artificial intelligence to try and identify individuals through their unique facial features.


Evidence

Examples of reference databases include passport databases, driver’s license databases, or police mugshot databases. The system compares images from CCTV, social media, or body-worn cameras against these stored templates.


Major discussion point

Technical overview of FRT systems


Topics

Human rights | Legal and regulatory


FRT systems are probabilistic and prone to errors, with threshold values creating false positive or negative rates

Explanation

Cronin argues that FRT is not a perfect technology but rather probabilistic, meaning it provides probability scores rather than definitive matches. The threshold values set to determine matches can be problematic – if set too low they create high false positive rates, if set too high they create high false negative rates.


Evidence

Police officers receive a list of potential candidates with percentage similarity scores. There is no single threshold that eliminates all errors completely.


Major discussion point

Technical limitations and reliability issues


Topics

Human rights | Legal and regulatory


Agreed with

– Tomas Ignacio Griffa
– Adam Remport

Agreed on

FRT systems are inherently unreliable and prone to errors with serious consequences


The technology demonstrates arbitrariness, as shown by Robert Williams case where different algorithms produced different results

Explanation

Cronin uses the Robert Williams case to illustrate how arbitrary and unreliable FRT can be. Williams was wrongfully arrested despite being only the ninth most likely match, and other algorithms either didn’t include him in results or returned no results at all.


Evidence

Robert Williams was arrested after being identified as the ninth most likely match by one algorithm, but two other algorithms produced different results – one returned 243 candidates without Williams on the list, another returned no results at all.


Major discussion point

Unreliability and arbitrariness of FRT systems


Topics

Human rights | Legal and regulatory


Agreed with

– Tomas Ignacio Griffa
– Adam Remport

Agreed on

FRT systems are inherently unreliable and prone to errors with serious consequences


FRT has immediate dangers of misidentifications and bias, particularly affecting Black individuals disproportionately

Explanation

Cronin argues that FRT systems demonstrate clear bias and discrimination, with Black individuals being disproportionately affected by misidentifications. This creates immediate dangers for these communities who are wrongfully accused and face consequences.


Evidence

All the people shown in images of wrongful FRT identifications are Black individuals. The US Federal Trade Commission banned Rite Aid from using FRT in 2023 because it wrongfully accused thousands of people, predominantly Black individuals, of shoplifting between 2012 and 2020.


Major discussion point

Racial bias and discrimination in FRT


Topics

Human rights


FRT affects multiple human rights including dignity, privacy, freedom of expression, peaceful assembly, equality, and due process

Explanation

Cronin presents a comprehensive view of how FRT impacts various fundamental human rights. She argues that the technology doesn’t just affect privacy but has broader implications across multiple areas of human rights protection.


Evidence

Specific rights mentioned include: right to dignity, privacy, freedom of expression, peaceful assembly and association, equality and non-discrimination, rights of people with disabilities, presumption of innocence, right to effective remedy, and right to fair trial and due process.


Major discussion point

Comprehensive human rights impact


Topics

Human rights


Agreed with

– Tomas Ignacio Griffa
– Adam Remport

Agreed on

FRT violates multiple fundamental human rights and requires comprehensive legal safeguards


The technology enables mass surveillance and gives police seismic shift in surveillance power, turning people into “walking license plates”

Explanation

Cronin argues that beyond immediate misidentification risks, FRT creates broader long-term concerns about mass surveillance. The technology fundamentally shifts the power dynamic by giving police unprecedented surveillance capabilities over the general population.


Evidence

Examples of mass surveillance use include FRT against Palestinians, Uyghur Muslims, and protesters in Russia. The technology allows tracking people’s movements over significant lengths of time.


Major discussion point

Mass surveillance capabilities and power imbalance


Topics

Human rights | Legal and regulatory


FRT is being weaponized against marginalized groups including Palestinians, Uyghur Muslims, and protesters

Explanation

Cronin argues that FRT is not just a neutral technology but is being actively used as a tool of oppression against vulnerable and marginalized communities. This demonstrates the broader political and social implications of the technology.


Evidence

Specific examples include use of FRT against Palestinians, Uyghur Muslims, protesters in Russia, and the recent use at Pride in Hungary.


Major discussion point

Weaponization against marginalized communities


Topics

Human rights


Agreed with

– Adam Remport

Agreed on

FRT is being weaponized against marginalized communities and protesters


Any FRT use must have sufficient legal basis and should never be used to identify protesters or collect information on peaceful assemblies

Explanation

This is the first principle in INCLO’s framework, establishing that FRT use requires proper legal authorization and explicitly prohibiting its use against people exercising their right to peaceful assembly. Cronin argues this is fundamental to protecting democratic rights.


Evidence

This principle is directly relevant to the Hungary case where FRT is being used against Pride participants.


Major discussion point

Legal basis and protection of assembly rights


Topics

Human rights | Legal and regulatory


Agreed with

– Tomas Ignacio Griffa
– Adam Remport

Agreed on

FRT violates multiple fundamental human rights and requires comprehensive legal safeguards


Mandatory fundamental rights impact assessments must be conducted prior to any new FRT use

Explanation

Cronin argues that before deploying FRT, authorities must conduct comprehensive assessments of how the technology will impact fundamental rights. These assessments must include strict necessity and proportionality analysis and outline specific parameters of use.


Evidence

Assessments must explicitly outline who will use it, who it will be used against, where, why, and how it will be used, the rights impacted, the nature and extent of risks, how risks will be mitigated, and justification for why benefits outweigh rights impacts.


Major discussion point

Prior impact assessment requirements


Topics

Human rights | Legal and regulatory


Agreed with

– Tomas Ignacio Griffa
– Adam Remport

Agreed on

Lack of transparency and public consultation enables FRT abuse


Live FRT should be prohibited as it represents a dangerous red line

Explanation

Cronin takes a strong position that real-time or live facial recognition technology is too dangerous and should be completely banned. While acknowledging that retrospective FRT can also be dangerous, she argues live FRT crosses a red line that should not be crossed.


Evidence

The European Court of Human Rights and Court of Justice of the European Union view live FRT as more invasive than retrospective use.


Major discussion point

Complete prohibition of live FRT


Topics

Human rights | Legal and regulatory


Independent oversight bodies must be established with mandatory annual reporting requirements

Explanation

Cronin argues that proper oversight is essential for any FRT deployment, requiring independent bodies that can monitor use and publish regular reports. This creates accountability and transparency in the system.


Evidence

Principles 16 and 17 specifically address the need for independent oversight bodies and their obligation to publish annual reports.


Major discussion point

Independent oversight and accountability


Topics

Legal and regulatory


Creative community activation through local artists and grassroots organizations is essential for public awareness

Explanation

In response to a question about community engagement, Cronin argues that raising public awareness about FRT requires creative approaches including working with local artists and grassroots organizations. She emphasizes the need to get imaginative in communications efforts.


Evidence

ICCL created a mural with a local artist in Dublin to highlight the dangers of FRT. She suggests looping in civil society organizations who might not work in this space and getting down to grassroots level.


Major discussion point

Community engagement strategies


Topics

Sociocultural


Different jurisdictions require different approaches – some calling for bans, others for moratoriums or legislation

Explanation

Cronin acknowledges that INCLO’s 17 member organizations across different jurisdictions have varying approaches to FRT regulation. While many would prefer a complete ban, the reality is that some jurisdictions have already implemented systems, requiring different strategic approaches.


Evidence

INCLO has 17 members in 17 different jurisdictions with different existing safeguards and protections. Some groups call for bans, others for moratoriums, and others for legislation.


Major discussion point

Jurisdictional differences in regulatory approaches


Topics

Legal and regulatory


Disagreed with

– Audience

Disagreed on

Regulatory approach – ban versus regulation with safeguards



Tomas Ignacio Griffa

Speech speed

170 words per minute

Speech length

1383 words

Speech time

486 seconds

Buenos Aires implemented FRT system in 2019 supposedly only for fugitives but accessed biometric data of over 7 million people illegally

Explanation

Griffa explains that while the Buenos Aires FRT system was officially designed to search for fugitives using a database of 30,000 people, investigation revealed that authorities had actually accessed biometric data of over 7 million people through more than 9 million consultations. This massive overreach violated the system’s stated purpose and legal framework.


Evidence

The system was supposed to work with the National Fugitive Database (CONARC) of about 30,000 names, but when the judge investigated, it was discovered that the government had made over 9 million consultations regarding more than 7 million people.


Major discussion point

Massive scope creep and illegal data access


Topics

Human rights | Legal and regulatory


Agreed with

– Olga Cronin
– Adam Remport

Agreed on

FRT systems are inherently unreliable and prone to errors with serious consequences


The system was ruled unconstitutional due to lack of legal compliance, missing oversight bodies, and no human rights impact studies

Explanation

Griffa describes how the local judge found the FRT system unconstitutional because it was implemented without proper legal safeguards. The ruling highlighted that required oversight bodies were either never created or not provided with necessary information, and no prior human rights impact studies were conducted.


Evidence

The legislative commission supposed to oversee the system was never created, the Defensoría del Pueblo was not provided with information needed to audit the system, there were no previous studies on human rights impact, and no instances for public participation prior to introduction.


Major discussion point

Constitutional violations and lack of safeguards


Topics

Human rights | Legal and regulatory


Agreed with

– Olga Cronin
– Adam Remport

Agreed on

FRT violates multiple fundamental human rights and requires comprehensive legal safeguards


Thousands were searched without legal basis, information was manually deleted, and it was impossible to trace which officers operated the system

Explanation

Griffa explains that expert audits revealed systematic violations including searching people who weren’t fugitives, deliberate destruction of evidence, and lack of accountability mechanisms. This demonstrates how FRT systems can operate without proper controls or oversight.


Evidence

Expert audits found thousands of people searched without legal basis (people who were not fugitives), information regarding system use was manually deleted in a way that made it impossible to recover, and it was impossible to trace which specific public officers had operated the system.


Major discussion point

Systematic violations and evidence destruction


Topics

Human rights | Legal and regulatory


Agreed with

– Olga Cronin
– Adam Remport

Agreed on

FRT systems are inherently unreliable and prone to errors with serious consequences


Government claims technical details are trade secrets, preventing proper assessment of bias and discrimination

Explanation

Griffa describes ongoing legal battles over transparency, where the government refuses to disclose technical details of the FRT software, claiming they are trade secrets belonging to the vendor. This prevents proper assessment of whether the system has discriminatory impacts based on race or gender.


Evidence

The Chamber of Appeals ordered a test to determine if the system has differential impact based on race or gender. The government wants to do a black box test, but CELS argues it’s necessary to disclose technical details of the software and training datasets. The government claims this information is a trade secret.


Major discussion point

Transparency vs. trade secrets in bias assessment


Topics

Human rights | Legal and regulatory


Agreed with

– Olga Cronin
– Adam Remport

Agreed on

Lack of transparency and public consultation enables FRT abuse



Adam Remport

Speech speed

123 words per minute

Speech length

1493 words

Speech time

727 seconds

Hungarian government banned Pride parade and expanded FRT use to all petty offenses, enabling mass surveillance of demonstrators

Explanation

Remport explains how the Hungarian government used child protection as justification to ban Pride parades and simultaneously expanded FRT capabilities to cover all petty offenses. Since participating in a banned event constitutes a petty offense, this creates a legal framework for mass surveillance of LGBTQ+ demonstrators.


Evidence

In February, the Prime Minister said Pride would be banned for child protection. New laws banned assemblies ‘displaying or promoting homosexuality’ and made FRT available for all petty offenses. Participating in a banned event is an infraction, so demonstrators would collectively commit infractions in the tens of thousands.


Major discussion point

Legal framework enabling mass surveillance of LGBTQ+ community


Topics

Human rights


Agreed with

– Olga Cronin

Agreed on

FRT is being weaponized against marginalized communities and protesters


The system violates multiple INCLO principles by targeting peaceful protesters and lacking transparency, consultation, or oversight

Explanation

Remport systematically demonstrates how Hungary’s FRT use violates numerous INCLO principles, including the fundamental prohibition on using FRT against peaceful demonstrators, lack of public consultation, absence of impact assessments, and missing oversight mechanisms.


Evidence

Violations include: using FRT against peaceful demonstrators (violates principle 1), no public consultation before system introduction, no data protection impact assessment, no consultation with data protection authority, no records of use or statistics available, no impact assessments before individual uses, no vendor lock-in assessment, no prior judicial authorization, no independent oversight body, and no annual reports.


Major discussion point

Systematic violation of FRT principles


Topics

Human rights | Legal and regulatory


Agreed with

– Olga Cronin
– Tomas Ignacio Griffa

Agreed on

FRT violates multiple fundamental human rights and requires comprehensive legal safeguards


FRT is being deliberately weaponized with government actively communicating that participants will be found and fined

Explanation

Remport argues that rather than trying to mitigate FRT risks, the Hungarian government is deliberately exploiting the technology’s most invasive properties as a weapon against LGBTQ+ rights. The government actively threatens that facial recognition will inevitably find and punish participants.


Evidence

The government actively communicated that facial recognition would be used against people and that they cannot hide because they will be found with facial recognition – it is inevitable. This creates a massive chilling effect on freedom of assembly.


Major discussion point

Deliberate weaponization of FRT against LGBTQ+ community


Topics

Human rights


Agreed with

– Olga Cronin

Agreed on

FRT is being weaponized against marginalized communities and protesters


Lack of public awareness about FRT existence due to no consultation or communication from government

Explanation

Remport explains that the Hungarian public was largely unaware that FRT systems even existed because the government implemented them without any public consultation or communication. This lack of transparency itself becomes a tool of oppression, as people cannot assess risks or make informed decisions.


Evidence

There was no public consultation when the FRT system was introduced in 2016, and it wasn’t communicated to the public, meaning there was no public awareness of the system’s existence until the current crisis. People never cared about FRT because they didn’t know it existed.


Major discussion point

Weaponized lack of transparency


Topics

Human rights | Legal and regulatory


Agreed with

– Olga Cronin
– Tomas Ignacio Griffa

Agreed on

Lack of transparency and public consultation enables FRT abuse


A

Audience

Speech speed

134 words per minute

Speech length

137 words

Speech time

61 seconds

Need for community activation and discussion about whether FRT should be completely banned or regulated

Explanation

An audience member from Brazil asks about community engagement strategies and whether FRT should be completely banned or regulated. This reflects broader civil society concerns about how to effectively organize against FRT and what the ultimate policy goal should be.


Evidence

The questioner mentions being part of a project doing community activations about facial recognition in police use, and notes that in Brazil they are discussing banning all FRT systems.


Major discussion point

Community organizing strategies and policy goals


Topics

Human rights | Sociocultural


Disagreed with

– Olga Cronin

Disagreed on

Regulatory approach – ban versus regulation with safeguards


J

June Beck

Speech speed

181 words per minute

Speech length

206 words

Speech time

68 seconds

Growing concern about laws banning face masks at protests as a response to public attempts to avoid FRT surveillance

Explanation

Beck raises the issue of how governments are responding to people’s attempts to protect themselves from FRT by wearing masks, with increasing laws that penalize mask-wearing at protests. This creates a concerning dynamic where people lose the ability to protect their privacy.


Evidence

More laws are being passed to ban face masks at protests, including proposed changes in Ireland and England that would make restrictions more severe.


Major discussion point

Erosion of privacy protection methods


Topics

Human rights | Legal and regulatory


M

MODERATOR

Speech speed

51 words per minute

Speech length

46 words

Speech time

54 seconds

Workshop session transition and organization

Explanation

The moderator announces the transition to workshop two multiple times at the end of the session. This represents the organizational structure of the conference and the need to manage multiple concurrent sessions.


Evidence

Repeated announcements of ‘Workshop two’ to signal the end of the current session and transition to the next workshop


Major discussion point

Conference organization and session management


Topics

Sociocultural


Agreements

Agreement points

FRT systems are inherently unreliable and prone to errors with serious consequences

Speakers

– Olga Cronin
– Tomas Ignacio Griffa
– Adam Remport

Arguments

FRT systems are probabilistic and prone to errors, with threshold values creating false positive or negative rates


The technology demonstrates arbitrariness, as shown by the Robert Williams case, where different algorithms produced different results


Buenos Aires implemented an FRT system in 2019, supposedly only to search for fugitives, but illegally accessed the biometric data of over 7 million people


Thousands were searched without legal basis, information was manually deleted, and it was impossible to trace which officers operated the system


Summary

All speakers agree that FRT technology is fundamentally unreliable, with Cronin demonstrating this through the Robert Williams case, Griffa showing massive overreach in Buenos Aires, and Remport highlighting lack of transparency in Hungary’s system


Topics

Human rights | Legal and regulatory


FRT violates multiple fundamental human rights and requires comprehensive legal safeguards

Speakers

– Olga Cronin
– Tomas Ignacio Griffa
– Adam Remport

Arguments

FRT affects multiple human rights including dignity, privacy, freedom of expression, peaceful assembly, equality, and due process


Any FRT use must have sufficient legal basis and should never be used to identify protesters or collect information on peaceful assemblies


The system was ruled unconstitutional due to lack of legal compliance, missing oversight bodies, and no human rights impact studies


The system violates multiple INCLO principles by targeting peaceful protesters and lacking transparency, consultation, or oversight


Summary

All speakers agree that FRT has broad human rights implications and requires strong legal frameworks with proper oversight, impact assessments, and safeguards to prevent abuse


Topics

Human rights | Legal and regulatory


Lack of transparency and public consultation enables FRT abuse

Speakers

– Olga Cronin
– Tomas Ignacio Griffa
– Adam Remport

Arguments

Mandatory fundamental rights impact assessments must be conducted prior to any new FRT use


Government claims technical details are trade secrets, preventing proper assessment of bias and discrimination


Lack of public awareness about FRT existence due to no consultation or communication from government


Summary

All speakers emphasize that governments are implementing FRT systems without proper public consultation, transparency, or impact assessments, which enables systematic abuse and prevents accountability


Topics

Human rights | Legal and regulatory


FRT is being weaponized against marginalized communities and protesters

Speakers

– Olga Cronin
– Adam Remport

Arguments

FRT is being weaponized against marginalized groups including Palestinians, Uyghur Muslims, and protesters


The technology enables mass surveillance and gives police a seismic shift in surveillance power, turning people into ‘walking license plates’


Hungarian government banned Pride parade and expanded FRT use to all petty offenses, enabling mass surveillance of demonstrators


FRT is being deliberately weaponized with government actively communicating that participants will be found and fined


Summary

Both speakers agree that FRT is not a neutral technology but is being actively used as a tool of oppression against vulnerable communities, particularly LGBTQ+ individuals and political protesters


Topics

Human rights


Similar viewpoints

Both speakers emphasize the critical importance of independent oversight mechanisms for FRT systems, with Cronin advocating for this in INCLO principles and Griffa showing the consequences when such oversight is absent in Argentina

Speakers

– Olga Cronin
– Tomas Ignacio Griffa

Arguments

Independent oversight bodies must be established with mandatory annual reporting requirements


The system was ruled unconstitutional due to lack of legal compliance, missing oversight bodies, and no human rights impact studies


Topics

Legal and regulatory


Both speakers strongly oppose the use of FRT against peaceful protesters and demonstrators, viewing this as a fundamental violation of democratic rights and freedoms

Speakers

– Olga Cronin
– Adam Remport

Arguments

Any FRT use must have sufficient legal basis and should never be used to identify protesters or collect information on peaceful assemblies


The system violates multiple INCLO principles by targeting peaceful protesters and lacking transparency, consultation, or oversight


Topics

Human rights


Both acknowledge the ongoing debate about whether FRT should be completely banned or regulated, recognizing that different jurisdictions may require different approaches based on their specific circumstances

Speakers

– Olga Cronin
– Audience

Arguments

Different jurisdictions require different approaches – some calling for bans, others for moratoriums or legislation


Need for community activation and discussion about whether FRT should be completely banned or regulated


Topics

Human rights | Legal and regulatory


Unexpected consensus

Trade secrets cannot justify lack of transparency in bias assessment

Speakers

– Olga Cronin
– Tomas Ignacio Griffa

Arguments

Mandatory fundamental rights impact assessments must be conducted prior to any new FRT use


Government claims technical details are trade secrets, preventing proper assessment of bias and discrimination


Explanation

Both speakers, drawing from different legal contexts (UK Bridges case and Argentina case), reach the same conclusion that commercial trade secret claims cannot override the need for transparency in assessing discriminatory impacts of FRT systems


Topics

Human rights | Legal and regulatory


Creative community engagement is essential for FRT awareness

Speakers

– Olga Cronin
– Audience

Arguments

Creative community activation through local artists and grassroots organizations is essential for public awareness


Need for community activation and discussion about whether FRT should be completely banned or regulated


Explanation

There was unexpected consensus on the need for creative, grassroots approaches to public education about FRT, moving beyond traditional advocacy to include artistic and community-based methods


Topics

Sociocultural


Overall assessment

Summary

There is strong consensus among all speakers that FRT poses serious threats to human rights, is being systematically abused by governments, and requires comprehensive legal safeguards. All speakers agree on the technology’s unreliability, its disproportionate impact on marginalized communities, and the need for transparency and oversight.


Consensus level

Very high level of consensus with no fundamental disagreements. The speakers complement each other’s arguments with concrete examples from different jurisdictions (Ireland/UK, Argentina, Hungary) that all support the same conclusions about FRT dangers. This strong consensus strengthens the case for international coordination on FRT regulation and suggests broad civil society agreement on the need for restrictive approaches to FRT deployment.


Differences

Different viewpoints

Regulatory approach – ban versus regulation with safeguards

Speakers

– Olga Cronin
– Audience

Arguments

Different jurisdictions require different approaches – some calling for bans, others for moratoriums or legislation


Need for community activation and discussion about whether FRT should be completely banned or regulated


Summary

While Cronin acknowledges that INCLO members have varying approaches (some calling for complete bans, others for regulation), and mentions that ‘from many of our perspectives in INCLO, we would call for a ban,’ she also recognizes that ‘we know that that fight has been lost in certain jurisdictions’ requiring a pragmatic approach with safeguards. The Brazilian audience member specifically asks whether FRT should be banned completely, highlighting this strategic disagreement within the civil liberties community.


Topics

Human rights | Legal and regulatory


Unexpected differences

Overall assessment

Summary

The speakers show remarkable alignment on the fundamental problems with FRT and the need for strong protections, with only minor strategic disagreements about regulatory approaches


Disagreement level

Very low level of disagreement. The speakers are essentially presenting a unified front against current FRT practices, with their different case studies (international principles, Argentina’s legal victory, Hungary’s weaponization) all supporting the same core argument that FRT poses serious threats to human rights and requires either prohibition or very strict regulation. The only meaningful disagreement is strategic – whether to pursue complete bans or work within existing systems to implement strong safeguards. This low level of disagreement actually strengthens their collective message but may indicate a need for more diverse perspectives in the discussion to fully explore the complexities of FRT regulation.


Partial agreements


Takeaways

Key takeaways

Facial Recognition Technology (FRT) poses significant risks to human rights including privacy, freedom of assembly, and equality, with documented bias against Black individuals and potential for mass surveillance


INCLO developed 18 principles for police use of FRT to mitigate harms, including requirements for legal basis, impact assessments, independent oversight, and prohibition of live FRT


Real-world case studies from Argentina and Hungary demonstrate how FRT can be misused when proper safeguards are absent – Argentina’s system illegally accessed 7+ million people’s data while Hungary weaponized FRT against LGBTQ+ Pride participants


FRT systems are inherently unreliable and probabilistic, prone to false positives/negatives, with different algorithms producing contradictory results as shown in the Robert Williams case


Public awareness and community engagement are crucial since many people are unaware FRT systems exist or how they operate in their jurisdictions


The technology enables governments to weaponize surveillance against marginalized groups and peaceful protesters, creating chilling effects on freedom of assembly


Resolutions and action items

INCLO created and published 18 principles for police use of FRT as an advocacy tool for civil society organizations


Legal challenges can be pursued through EU Law Enforcement Directive and AI Act provisions


Community activation through creative means like local artists and grassroots organizations should be implemented to raise public awareness


Hard copies of INCLO principles were made available to workshop participants for further distribution and use


Unresolved issues

Whether FRT should be completely banned versus regulated varies by jurisdiction – no consensus reached on universal approach


Technical details of FRT systems remain hidden behind trade secret claims, preventing proper bias assessment


How to effectively handle mass deployment of FRT against large groups (tens of thousands) remains technically and operationally unclear


The relationship between laws banning face masks at protests and FRT deployment needs further examination


Ongoing legal battle in Argentina over government’s refusal to disclose technical specifications claiming trade secrets


Long-term effectiveness of community engagement strategies for FRT awareness remains to be determined


Suggested compromises

INCLO’s 18 principles represent a compromise approach – recognizing that complete bans may not be achievable in all jurisdictions while establishing minimum safeguards


Allowing retrospective FRT use while prohibiting live FRT as a middle-ground approach, while noting that retrospective use can be equally invasive


Requiring independent technical assessments rather than relying solely on vendor claims about bias and accuracy


Thought provoking comments

Robert was arrested after an algorithm identified him as the ninth most likely match for the probe image but there were two other algorithms run. One returned 243 candidates, Robert wasn’t on that list, and another returned no results at all and yet he was still arrested and detained. So really the point of this is just to show the arbitrariness of this and it’s not this silver bullet solution that it’s often presented to be.

Speaker

Olga Cronin


Reason

This comment is deeply insightful because it exposes the fundamental unreliability and arbitrariness of FRT through a concrete example. It demonstrates how the same person can be identified differently by different algorithms, yet law enforcement still acted on inconclusive results. This challenges the common perception of FRT as infallible technology.


Impact

This comment established the foundation for the entire discussion by immediately dismantling the myth of FRT reliability. It shifted the conversation from theoretical concerns to concrete evidence of systemic failures, setting up the framework for all subsequent case studies and principles discussed.


However, when the judge asked the National Identity Database how many individual consultations the government of the city of Buenos Aires had made, it turned out that the government had made consultations about more than seven million people, more than nine million consultations in total regarding more than seven million people. So, clearly the Buenos Aires police and perhaps other offices were accessing this biometric data for other purposes, entirely different from searching for fugitives.

Speaker

Tomas Ignacio Griffa


Reason

This revelation is particularly thought-provoking because it exposes the massive scope creep from the stated purpose (30,000 fugitives) to actual implementation (7+ million people). It demonstrates how FRT systems can be systematically abused beyond their intended scope without proper oversight.


Impact

This comment fundamentally shifted the discussion from technical accuracy issues to systemic abuse and mission creep. It provided concrete evidence for why the INCLO principles around oversight, documentation, and legal frameworks are essential, making the abstract principles tangible through real-world consequences.


It is an interesting case study of the lack of transparency around facial recognition. One of my conclusions will be that FRT in this present case is used actively to discourage people from attending demonstrations, but the lack of transparency, the lack of knowledge that people have of what is going to actually happen to them is also as discouraging as the outright threats of using FRT.

Speaker

Adam Remport


Reason

This insight is particularly profound because it identifies how uncertainty itself becomes a weapon. The comment reveals that the chilling effect doesn’t require actual deployment – the mere possibility, combined with lack of transparency, creates self-censorship and suppresses fundamental rights.


Impact

This comment elevated the discussion to examine the psychological and societal impacts of FRT beyond direct misidentification. It introduced the concept of ‘weaponized uncertainty’ and connected technical surveillance capabilities to broader democratic freedoms, deepening the conversation about systemic effects on civil liberties.


Well, people, I think, never really cared about FRT, because they didn’t actually know that it existed, precisely because of the lack of communication on the government side. So, the situation had to become this bad and severe for the people to start to even care about the problem. But now, the system already exists, with the rules that we have now, and which can be abused by the police and the government.

Speaker

Adam Remport


Reason

This comment is insightful because it highlights a critical democratic deficit – how surveillance systems can be implemented without public awareness or consent, and by the time people become aware, the infrastructure for abuse is already in place. It reveals the strategic nature of opacity in surveillance deployment.


Impact

This response crystallized the importance of proactive transparency and public engagement principles discussed earlier. It connected the technical principles to democratic governance, showing how the lack of early intervention creates fait accompli situations where rights are harder to protect retroactively.


I was wondering, since we’re talking about facial recognition technologies, there’s also been a lot of movement to penalize wearing masks in public as an attempt to protect yourself against facial recognition technology. So I was wondering if INCLO or any organization have thoughts or processes or any kind of discussions on how the ban of facial masks, for example, is also in conversation with FRT.

Speaker

June Beck


Reason

This question is thought-provoking because it identifies the emerging ‘arms race’ between surveillance technology and privacy protection measures. It reveals how FRT deployment creates secondary policy responses that further erode privacy rights, creating a compounding effect on civil liberties.


Impact

This question expanded the scope of the discussion beyond FRT itself to examine the broader ecosystem of surveillance and counter-surveillance measures. It highlighted how FRT creates cascading policy effects that multiply its impact on civil liberties, adding another layer of complexity to the regulatory challenges discussed.


Overall assessment

These key comments fundamentally shaped the discussion by moving it through distinct phases: from establishing the technical unreliability of FRT, to demonstrating systematic abuse in practice, to revealing the strategic use of opacity as a tool of social control, and finally to examining the broader ecosystem of surveillance and counter-surveillance measures. The comments created a progression from individual harms to systemic abuse to democratic deficits, making the abstract principles concrete through real-world consequences. The discussion evolved from a technical presentation to a nuanced examination of how surveillance technology intersects with democratic governance, civil liberties, and social control. Each comment built upon previous insights, creating a comprehensive picture of how FRT threatens not just individual privacy but the broader fabric of democratic society.


Follow-up questions

How to effectively conduct community activation and education about facial recognition technology risks

Speaker

Pietra (audience member from Brazil)


Explanation

This is important because many people are unaware that FRT systems exist or understand their risks, making community education crucial for building awareness and resistance to harmful implementations


Whether facial recognition technology should be completely banned or if there are acceptable use cases

Speaker

Pietra (audience member from Brazil)


Explanation

This represents a fundamental policy question that different jurisdictions are grappling with, as some advocate for total bans while others seek regulatory frameworks


How laws banning face masks in public relate to and interact with facial recognition technology deployment

Speaker

June Beck (Youth for Privacy)


Explanation

This highlights an emerging area where governments may be restricting protective measures against surveillance, creating a concerning dynamic between FRT use and citizens’ ability to protect their privacy


How the Hungarian FRT system will technically and operationally handle mass surveillance of tens of thousands of people simultaneously

Speaker

Adam Remport


Explanation

This is critical because the system has never been tested at this scale, and the capacity limitations could affect both the effectiveness and the human rights impacts of such mass deployment


What specific purposes the Buenos Aires police used to access over 7 million people’s biometric data beyond searching for fugitives

Speaker

Tomas Ignacio Griffa


Explanation

This represents a significant violation of the system’s stated purpose and legal framework, and understanding these unauthorized uses is crucial for accountability and preventing similar abuses


Whether technical details of FRT software and training datasets should be disclosed for bias testing versus protecting trade secrets

Speaker

Tomas Ignacio Griffa


Explanation

This ongoing legal debate in Argentina highlights the tension between transparency needed for accountability and vendors’ claims of proprietary information, which affects the ability to properly audit these systems for bias


Whether data from the Hungarian FRT system is being transferred to third countries given the vendor relationship

Speaker

Adam Remport


Explanation

This raises important questions about data sovereignty and international data transfers that could have significant privacy and security implications for Hungarian citizens


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Main Session 3

Session at a glance

Summary

This discussion focused on evaluating the impact of the Internet Governance Forum (IGF) on the Information Society over its 20-year history and exploring its future role in digital governance. The session was moderated by Avri Doria and featured distinguished panelists from various sectors including government, technical organizations, civil society, and academia, along with contributions from both in-person and online participants.


The panelists consistently emphasized that the IGF has been transformative in establishing multi-stakeholder governance as the norm for internet policy-making globally. Ambassador Bitange Ndemo from Kenya highlighted how the IGF’s consultative approach, though initially challenging for policymakers, ultimately made policy implementation much easier and more effective. Several speakers noted that the IGF has served as a crucial platform for capacity building, knowledge exchange, and fostering understanding between different stakeholder groups who might not otherwise interact.


The discussion revealed that the IGF’s impact extends far beyond the annual meetings, encompassing a living ecosystem of national and regional IGFs, dynamic coalitions, best practice forums, and policy networks. Participants shared concrete examples of the IGF’s influence, including the development of internet exchange points in Africa, the successful IANA transition, and the creation of progressive internet legislation in countries like Brazil. The forum has been particularly valuable for Global South participants, providing access to global policy discussions and enabling local issues to reach international attention.


Looking toward the future, there was strong consensus on the need for a permanent mandate for the IGF to ensure its continued stability and effectiveness. Speakers emphasized the importance of better integration with other UN processes, particularly the World Summit on the Information Society (WSIS) framework and the Global Digital Compact (GDC). The discussion concluded with recognition that the IGF remains essential for maintaining people-centered, inclusive digital governance in an increasingly complex and fragmented digital landscape.


Keypoints

## Major Discussion Points:


– **Personal and Professional Impact of IGF**: Participants shared how the Internet Governance Forum has personally transformed their understanding of multi-stakeholder processes, enabled cross-sector dialogue, and created lasting professional relationships. Many emphasized how IGF taught them to listen to different stakeholder perspectives rather than just defending their own positions.


– **IGF’s Role in Developing Multi-stakeholder Governance Culture**: The discussion highlighted how IGF pioneered and normalized multi-stakeholder approaches to internet governance globally, with this model being adopted at national and regional levels. Speakers noted how this collaborative approach has become “the norm” for policy-making in many countries.


– **Concrete Achievements and Infrastructure Development**: Participants cited specific successes including the development of Internet Exchange Points (IXPs) in Africa, submarine cable infrastructure projects, the successful IANA transition, and policy frameworks like Brazil’s Internet Civil Rights Framework. The forum was credited with facilitating knowledge transfer between regions, particularly benefiting Global South countries.


– **Need for Permanent Mandate and Better Integration**: A recurring theme was the urgent need to secure a permanent mandate for IGF rather than operating on temporary renewals. Speakers also emphasized better integration with other UN processes, particularly the WSIS framework and the Global Digital Compact (GDC), to avoid fragmentation of digital governance discussions.


– **Future Challenges and Inclusivity**: The discussion addressed emerging issues like AI governance, cybersecurity, and digital divides, while emphasizing the need to maintain IGF’s people-centered approach. Participants stressed the importance of including more voices from the Global South, youth perspectives, and ensuring the forum remains accessible to newcomers while celebrating its successes more effectively.


## Overall Purpose:


The discussion aimed to assess the 20-year impact of the Internet Governance Forum on digital governance and the information society, while exploring how IGF should evolve to contribute to future digital cooperation frameworks, particularly the WSIS goals and Global Digital Compact implementation.


## Overall Tone:


The tone was overwhelmingly positive and celebratory, with participants expressing genuine affection for and commitment to the IGF process. The discussion was notably personal and emotional, with many speakers sharing transformative experiences. While acknowledging challenges and areas for improvement, the tone remained constructive and forward-looking. There was a sense of urgency around securing IGF’s future through a permanent mandate, but this was expressed with determination rather than anxiety. The atmosphere was collegial and inclusive, reflecting the multi-stakeholder values being discussed.


Speakers

**Speakers from the provided list:**


– **Avri Doria** – Moderator of the session, has been involved with IGF since its conception and birth


– **Bitange Ndemo** – Professor Ambassador of Kenya to Belgium and the European Union, former policymaker in Kenya


– **Hans Petter Holen** – Leader of RIPE NCC (one of the regional internet registries), involved in critical internet infrastructure


– **Renata Mielli** – Chair of CGI.br (Brazilian Internet Steering Committee)


– **Funke Opeke** – Founder of Main1 in Nigeria, entrepreneur in digital infrastructure, retiring from the company


– **Qusai Al Shatti** – Representative of the Arab IGF


– **Chat Garcia Ramilo** – Associated with APC (Association for Progressive Communications), civil society advocate


– **Luca Belli** – Professor at FGV Law School in Rio de Janeiro, academic and dynamic coalition convener


– **Isabelle Lois** – Vice Chair of the CSTD (Commission on Science and Technology for Development), representative of Swiss government


– **Anriette Esterhuysen** – Former APC member, former MAG Chair, civil society advocate


– **Online moderator** – Jim Sunilufe, remote moderator managing online participants


– **Online participant 1** – Dr. Robinson Sibbe, CEO and Lead Forensic Examiner of Digital Footprints Nigeria, cybersecurity expert


– **Online participant 2** – Emily Taylor, researcher from the UK who conducted impact studies on IGF


**Additional speakers:**


– **Rolf Meijer** – CEO of SIDN, registry for the .nl country code top-level domain


– **Juan Fernandez** – Ministry of Communication of Cuba representative


– **Stephanie Perrin** – From Canada, former Canadian government telecom sector worker


– **Auka Aukpals** – IGF attendee for past ten years


– **Markus Kummer** – Long-time IGF participant, former IGF secretariat member, involved since inception


– **Jasmine Maffei** – Youth participant from Hong Kong


– **Piu** – Young participant from Myanmar


– **Raul Echeverria** – From Uruguay, MAG member


– **Bertrand Lachapelle** – Executive Director of the Internet and Jurisdiction Policy Network


– **Nathan Latte** – From IGF Côte d’Ivoire (online participant)


– **Mark Gave** – From the UK (online participant)


Full session report

# Comprehensive Report: Evaluating the Internet Governance Forum’s 20-Year Impact on Digital Governance


## Executive Summary


This extensive discussion, moderated by Avri Doria, brought together distinguished representatives from government, technical organisations, civil society, and academia to assess the Internet Governance Forum’s (IGF) transformative impact over two decades. The session was structured around three key questions: how the IGF has changed participants personally and professionally, what concrete impacts it has achieved, and what challenges it faces moving forward.


The overarching consensus emerged that the IGF has fundamentally transformed internet governance by establishing multi-stakeholder consultation as the global norm, creating lasting infrastructure and policy impacts particularly in the Global South, and fostering a unique ecosystem of collaboration. However, participants also identified critical challenges requiring immediate attention, particularly the urgent need for a permanent mandate and better integration with emerging digital governance frameworks.


## Personal and Professional Transformations


### Converting Sceptics to Advocates


The discussion revealed profound personal transformations across all stakeholder groups. Juan Fernandez from Cuba’s Ministry of Communication offered a remarkably candid admission: “When this began, and I went to the first one, I was very sceptical. I think, well, this is just, we’re giving some breadcrumbs to the civil society because they were shunned out of the process… But I was proven wrong.” His conversion from sceptic to advocate demonstrates the IGF’s capacity to change minds even among initially resistant government officials.


Ambassador Bitange Ndemo of Kenya, Ambassador to Belgium and the European Union, provided crucial historical perspective, explaining how the IGF introduced a revolutionary approach to policy-making at a time when “there was no Google” and “we didn’t know what exactly internet will do.” He emphasised that whilst multi-stakeholder consultation was initially “painful” for policymakers accustomed to traditional top-down approaches, it ultimately made policy implementation “much, much easier” because stakeholders had been genuinely consulted.


### Building Multi-Stakeholder Mindsets


Anriette Esterhuysen, former APC member and MAG Chair, described how the IGF created “impatience for non-multi-stakeholder forums” and fundamentally changed how she approached policy work by connecting policymakers with implementers. Luca Belli from FGV Law School, attending his 15th IGF, emphasised how the forum enabled understanding of different stakeholder perspectives and built trust through relationships rather than mere transactional interactions.


The youth perspective was particularly compelling, with participants from Hong Kong and Myanmar emphasising the IGF’s unique bottom-up approach where young people can easily communicate with senior leaders. Piu from Myanmar raised the important question of how to “officially recognise youth voices as part of the multi-stakeholder model.”


Avri Doria, as moderator, provided personal context as someone who was “here at its conception” and “birth” and has attended most IGF meetings, offering a unique longitudinal perspective on the forum’s evolution.


## Establishing Multi-Stakeholder Governance as the Global Norm


### Systemic Change in Policy-Making


Perhaps the most significant achievement identified was the IGF’s role in normalising multi-stakeholder governance globally. Qusai Al Shatti, representing the Arab IGF, noted that “multi-stakeholder process became the norm in policy making and regulation” following the IGF’s introduction. This transformation extended beyond internet governance to influence broader policy-making approaches.


Markus Kummer provided important historical context by referencing the WGIG definition of Internet Governance and the four original themes that structured early discussions. This foundation helped establish the framework that would influence global approaches to digital governance.


### The IGF Ecosystem


Multiple speakers emphasised that the IGF’s impact extends far beyond annual meetings to encompass a comprehensive ecosystem. This includes national and regional IGFs, dynamic coalitions, best practice forums, policy networks, and intersessional work that creates year-round engagement. Renata Mielli, Chair of CGI.br, highlighted how this ecosystem inspired Brazilian internet governance community development and directly influenced policy creation, including the São Paulo multi-stakeholder guidelines launched the day before this session.


Hans Petter Holen from RIPE NCC provided analytical clarity by distinguishing three interconnected layers: internet coordination (keeping systems running through stable, interoperable infrastructure), internet governance (shaping usage through shared norms and policies), and digital governance (guiding social transformations). He positioned the IGF as “a rare and essential arena where technical realities meet policy aspirations.”


## Concrete Infrastructure and Policy Achievements


### Infrastructure Development Impact


The discussion revealed substantial concrete impacts, particularly in infrastructure development. Ambassador Ndemo provided striking context about Africa’s transformation, noting that when IGF discussions began, “Africa had only one gig capacity for the entire continent.” The forum’s role in facilitating knowledge transfer and best practice sharing contributed to dramatic improvements in connectivity.


Funke Opeke, founder of Main1 in Nigeria, emphasised how the IGF enabled pioneers in the Global South to learn best practices for building digital ecosystems. She noted that in her company’s service area, internet penetration grew from “close to 10%” to “close to 50%,” demonstrating the practical impact of knowledge sharing facilitated through IGF networks.


Emily Taylor, participating online from the UK, referenced research demonstrating the IGF’s contribution to the emergence of Internet Exchange Points (IXPs) in Africa, which reduced latency and costs whilst improving service quality.


### Policy Development and Legislative Impact


The IGF’s influence on policy development was demonstrated through multiple concrete examples. Renata Mielli detailed how IGF discussions directly influenced the creation of Brazil’s Internet Civil Rights Framework and Data Protection Law. These achievements demonstrate the forum’s capacity to translate dialogue into actionable policy outcomes.


Luca Belli highlighted how dynamic coalitions’ work influenced various regulatory frameworks, though he noted frustration that these successes were not well-publicised on the IGF website. Chat Garcia Ramilo from APC emphasised the IGF’s role in establishing human rights principles online and addressing gender-based violence through initiatives like the Digital Justice Now Coalition.


The successful IANA transition was repeatedly cited as a major achievement facilitated by IGF discussions, representing a critical moment in internet governance where multi-stakeholder processes proved their effectiveness.


## Global South Participation and Capacity Building


### Amplifying Marginalised Voices


A recurring theme was the IGF’s crucial role in amplifying voices from the Global South that are often marginalised in digital governance conversations. Chat Garcia Ramilo emphasised how the forum provided platforms for perspectives that might otherwise be excluded from international policy discussions.


Funke Opeke described the IGF as enabling Global South stakeholders to “have a seat at the table in a polarised world,” providing access to policy discussions and best practices that would otherwise be unavailable. The forum’s open access approach allowed grassroots initiatives and young people to participate globally.


Dr. Robinson Sibbe, participating online from Nigeria, highlighted the IGF’s role in contextualising cybersecurity challenges and demonstrating the value of inclusive governance for problem-solving, though he noted ongoing challenges in translating global discussions into local action.


### Knowledge Transfer and Learning


Isabelle Lois, Vice Chair of the CSTD representing the Swiss government, emphasised the IGF’s role as a capacity building platform with “tremendous learning opportunities.” This educational function was seen as essential for developing the expertise needed for effective digital governance at national and regional levels.


The capacity building dimension was particularly emphasised by Global South participants, with the IGF serving as a platform where regions could learn from each other’s experiences and adapt successful practices to local contexts.


## Contemporary Challenges and Critical Needs


### The Urgent Need for Permanent Mandate


Perhaps the strongest consensus emerged around the urgent need for a permanent mandate for the IGF. Hans Petter Holen articulated this clearly: “IGF needs permanent mandate to focus on matters rather than securing future meeting place.” The current system of temporary renewals was seen as creating unnecessary uncertainty and diverting energy from substantive work.


Multiple speakers emphasised that permanent mandate would provide the stability needed for long-term planning and more robust funding mechanisms. This institutional security was seen as essential for the IGF to fulfil its potential role in implementing emerging frameworks like the Global Digital Compact.


### Integration with Emerging Digital Governance Frameworks


A critical challenge identified was the need for better integration with other UN processes and emerging digital governance frameworks. Renata Mielli emphasised that the IGF should be empowered as the main focal point for Global Digital Compact implementation.


Anriette Esterhuysen provided a crucial distinction, noting that whilst many new forums like the Global Digital Compact and Artificial Intelligence Dialogue “put the emphasis on the technology, not on the society, and not on the people,” the IGF maintains its people-centred approach.


### Addressing Implementation Gaps


Ambassador Ndemo introduced a critical perspective on the IGF’s methodology, arguing for a shift from siloed discussions to system-wide thinking. He advocated for approaches that could help people benefit from emerging technologies like artificial intelligence, suggesting that the IGF needed to evolve beyond its traditional sectoral approach.


Dr. Robinson Sibbe supported this view, suggesting the IGF should move closer to implementation with localised action and technical working groups. This reflected a broader tension between the IGF’s traditional role as a dialogue forum and growing expectations for more actionable outcomes.


## Accessibility and Participation Challenges


### Barriers to Inclusive Participation


Luca Belli highlighted significant barriers to Global South participation, noting the high financial costs of attending IGF meetings in expensive locations. This accessibility challenge was seen as undermining the multi-stakeholder model by limiting diverse participation to those with sufficient resources.


Raul Echeverria from Uruguay raised concerns about the IGF’s complexity, suggesting the need to “simplify IGF to make it easier for newcomers to become meaningfully involved.” This accessibility challenge was seen as potentially limiting the forum’s ability to attract new participants and maintain its vitality.


### Visibility and Recognition Issues


A significant theme was the IGF’s poor track record in publicising its achievements. Luca Belli expressed frustration that IGF success stories were not well-publicised, limiting the forum’s ability to demonstrate its value to sceptics and secure continued support.


Chat Garcia Ramilo emphasised the need for “better celebration of successes and making achievements more visible,” suggesting that the IGF’s modest culture, whilst admirable, was hindering recognition of its substantial contributions.


## Structural Evolution Proposals


### Working Group for Institutional Reform


Bertrand Lachapelle from the Internet and Jurisdiction Policy Network provided concrete proposals for institutional evolution. He suggested establishing a working group similar to the WGIG from 2004-05 to address two critical areas: the evolution of the IGF’s mandate and focus, and the institutionalisation of its structure.


This proposal gained support from multiple speakers who recognised that whilst the IGF had proven its value, it needed structural evolution to remain relevant and effective. The working group approach was seen as a way to conduct this evolution through the same multi-stakeholder processes that had made the IGF successful.


### Measurement and Evaluation Challenges


Stephanie Perrin from Canada raised the fundamental question of “how do we measure the impacts of the IGF?” noting the difficulty in quantifying the success of multi-stakeholder innovation. This measurement challenge was seen as both a practical problem for securing continued support and a conceptual challenge in understanding the IGF’s diverse impacts.


The discussion revealed tension between the IGF’s non-decision-making nature and expectations for measurable outcomes, with speakers recognising that the forum’s value often lay in relationship building, agenda setting, and cultural change—impacts that are real but difficult to quantify.


## Regional and National IGF Development


The discussion highlighted the remarkable growth of national and regional IGFs as perhaps the most concrete demonstration of the IGF’s impact. Nathan Latte from IGF Côte d’Ivoire and other regional representatives emphasised how national IGFs had adapted the global model to local contexts whilst maintaining core principles.


This organic expansion was seen as evidence that the multi-stakeholder model was meeting real needs at various levels of governance, creating a multi-level governance system that could address challenges at appropriate scales.


## Conclusion: Securing the IGF’s Future


The discussion revealed an IGF at a critical juncture in its development. After twenty years of remarkable success in establishing multi-stakeholder governance as the global norm and creating concrete impacts in infrastructure development and policy creation, the forum faces both unprecedented opportunities and significant challenges.


The overwhelming consensus on the IGF’s value and the urgent need for permanent mandate provides a strong foundation for institutional reform. The testimonies shared demonstrate the IGF’s transformative impact on individuals, institutions, and global governance practices, while highlighting three critical areas requiring immediate attention:


1. **Institutional Strengthening**: Achieving permanent mandate and stable funding to provide the security needed for long-term planning and effectiveness.


2. **Enhanced Integration**: Better positioning within the evolving digital governance landscape, particularly as the focal point for Global Digital Compact implementation while maintaining the IGF’s distinctive people-centred approach.


3. **Improved Accessibility**: Addressing financial barriers, simplifying processes for newcomers, and ensuring continued inclusive participation from all stakeholder groups and regions.


The path forward requires balancing celebration of past achievements with critical analysis of areas needing improvement. As the digital governance landscape continues to evolve rapidly, the IGF’s role as a stable platform for multi-stakeholder dialogue and collaboration becomes increasingly valuable, making the case for permanent mandate and institutional strengthening more urgent than ever.


Session transcript

Avri Doria: So welcome and thank you for coming. And let me get out my little speech here. This is a very interesting setup. It was far more formal than I expected. But anyway, welcome to this session of the IGF on the impact of the IGF on the Information Society. My name’s Avri Doria. That’s nice. They already said it. And I’m the moderator of the session. In this session, I’m going to ask the distinguished panelists who will be joining us soon and you all, the participants, for personal and professional viewpoints on the question. To start in my speaking, let me talk about who I am and my personal perspective on this. The IGF is very important to me. And I expect it is to many of you. I hope it is. But then again, there’s not all that many of you here. Sometimes I’m in love with it. Truly in love with it. And sometimes, well, not so much. So it’s been a long relationship with many different feelings and such. I was here at its conception. I was here at its birth. I’ve been here through many of its ups and downs. Managed to make it to most all of the meetings and have a very strong sort of personal relationship to it. I also have felt that it has helped me understand sort of the various viewpoints of the various populations to information, information society, the needs and such. So for me, it has been a critical information source, a critical ability to meet people, talk to people, understand people who have a different view than I do and such. So that for me has been very strong and has sort of entered every bit of my work as I’ve gotten involved in other parts of the community. Going to ask all the panelists for their view. We’re going to have a set of three questions that I’ll go through in a bit. And then I’m going to go to you, the participants. And I’m very glad to see that you’re lit and not sitting in the dark watching a show. 
But I really want this to be something where after everybody on stage, all the participants on stage, the speakers have had their chance to speak to the questions that all of you do. So when I ask the questions, I’m going to be asking them not only of the panelists, but of those of you who are here and those online as well. The IGF has been important. I think it’s been important to your businesses, to your research, your learning, your teaching, to those of you that are involved in a political life. Over the 20 years, that importance has shown up in many places at many times in many ways. It’s part of what I’m hoping that we can capture. It has provided a home for an evolving set of objectives over the years. You know, there’s been many efforts. It’s sort of developed the national and regional initiatives, you know, in developing internet governance policy. The best practice efforts, the policy networks, the aptly named dynamic coalitions that work all year round that have basically looked into a number of different issues, exciting issues and important issues. Just as internet governance has evolved with the new issues, the IGF now stands at a milestone period of its evolving role in the coordination of digital governance spaces. To improve the dialogue and the links among people, the links among governments, the links among institutions, ways of looking at the internet and data, you know, how does it evolve? How do they all interact? How do we all interact when this is happening? It is a place for many issues that we haven’t even discovered yet, those that are emerging issues and those that will emerge, hopefully, over the next couple of years. So this main session basically has two objectives. 
The first objective is to share the experiences of you, the stakeholders, that demonstrate the IGF’s usefulness and to illustrate its concrete and meaningful impact on the evolution of the digital ecosystem in different national and regional contexts and in different sectors of the economy of society. The second purpose is to discuss how the IGF should continue to contribute to the achievement of a people-centered, inclusive and development-oriented information society, of how it should play its central role in the ongoing WSIS and in the beginnings of the GDC processes. These are the questions that I will ask each of the participants, and these are the questions that I will ask you to consider when we open up microphones, and I’m told that there will be two microphones at the edge of the hallway, but that’s, I guess, when we get to that part of the discussion. So first I will ask them of the panelists, and then I will ask them of all of you. Also there will be a Mentimeter poll on the first question as that goes through, and I guess at some point that will get displayed with the QR code that you all need to go to if you’re going to use the Mentimeter and give your opinions, and I hope you do. So the first question that I will ask is what has the IGF meant to you, and what do you want it to mean to you in the future? So it’s looking both at retrospective and also forward, and how is that connected? What does it mean? What should it mean? What can it mean? How has the IGF multi-stakeholder model, one among many, and its realizations, whether it’s in the IGF meetings, the NRIs, the dynamic coalitions, the policy networks, the best practice forums, how has that worked? Is some part of that really been resonant with you and what was important and such? How has it made an impact on your organization, your internet issues, your country, your region? However it’s made an impact, it’s important for us to hear. 
How can the IGF play a more impactful role to contribute to the implementation of the WSIS goals and the GDC? So the third question then comes down to how can it be better? How can it become more impactful? How can it achieve its results better? As I said, first I’ll ask the panelists, and then I plan to come to all of you. So think about your answers to the questions. Think about whether you want to answer just one, just two, or all three of them. Everybody will get sort of the same three minutes that each of the panelists can get. We have a very fine set of panelists, of speakers. And so I’m going to ask you to welcome them all. And we have Hans Petter Holen of RIPE NCC. We have Professor Ambassador Bitange Ndemo, who’s Ambassador of Kenya to Belgium and the European Union. We have Renata Mielli, Chair of CGI.br, the Brazilian Internet Steering Committee. We have Funke Opeke, Founder of Main1 in Nigeria. We have Qusai Al Shatti of the Arab IGF. We have Chat Garcia Ramilo of the Association for Progressive Communications, APC. We have Luca Belli, a professor at FGV Law School in Rio de Janeiro. And we have Isabelle Lois, Vice Chair of the CSTD. At this point, I ask you to welcome them all as they come in and find their seats. Thank you. Thank you. Please. Please welcome. Okay. The first thing I’m going to do, and I’ve given the questions to all of them; they have been reading the questions for days now, and I’m going to ask each of them, basically, to go through their set of answers in sort of the view they have of the three questions. And first I’d like to start, sorry, I’m stumbling over my words, I apologize. So first I’d like to ask you, Bitange Ndemo, to please give us your view of the three questions and their answers.


Bitange Ndemo: Thank you for the opportunity, and it’s a wonderful thing to discuss IGF over the past several years. The first encounter I had with IGF, I had just become a policymaker in my country, Kenya. And policymakers then would make policy and you make sure it’s implemented. And this new method of consultations with the stakeholders was very new and very painful. But once you’ve gone through the whole process with the stakeholders, implementation became much, much easier. For those who are younger, at the time, there was no Google. I think there was Netscape, AltaVista, that’s what was there. We didn’t know what exactly internet will do. But thank God it went the IGF way, otherwise it would have been a private sector company selling its services to the people. There was no infrastructure. In Kenya, for example, not just Kenya, but the whole of Africa, Africa had only one gig of capacity for the whole continent through Intelsat. And the moment we started talking about building the infrastructure, even those who are funding would ask questions like, who is going to use the capacities that you want to bring through the undersea? I remember once in a World Bank meeting, I said, you must watch the movie Field of Dreams, that if you build it, they will come. And they said, is that the answer that you are giving us? I said, yes, this is what we think that is going to change or liberalize these new technologies that are coming. To cut the story short, the infrastructure came, then building the regulatory mechanisms and building the way we are now using the internet. But that came through the several consultations or early meetings of IGF, where we came to learn precisely what we needed to do to make this technology work for the people. I think I would stop there.


Avri Doria: Thank you. It’s a very good start. It’s a very good perspective to start with. Now Hans Petter, I’d like to go to you. And from your perspective, as the leader of one part of the critical resources that we rest this internet on, could you please give us your view of the three?


Hans Petter Holen: Yes, thank you very much for that, Avri. I mean, this is not about me. This is about the RIPE NCC, the institution I’m in charge of, which supports its community, the RIPE community, part of the larger technical community. The IGF has been a rare and essential arena where technical realities meet policy aspirations. And as digital governance agendas accelerate globally, the IGF must remain the venue that promotes clarity and ensures that we don’t jeopardize the internet’s core infrastructure while trying to regulate, where necessary, the services built atop it. So this means we need to protect internet coordination, which keeps the internet running through stable, interoperable systems. We need to strengthen internet governance, which shapes how we use it through shared norms and policies. And we need to guide digital governance, which shapes what it becomes in terms of social transformations. And I think it’s important to be aware that the IGF, as we see it here, is not just a single event. It’s a living ecosystem that includes national and regional IGFs, dynamic coalitions, best practice forums, policy networks. So it’s not only about the once-a-year IGF; it’s about this whole ecosystem around it. Our engagement is rooted in the belief that we should start at the local level and scale upwards. We support IGFs and network operator groups, both financially and with speakers, to help communities organize, identify emerging issues, and understand stakeholder needs so we can bring them to the regional and global level. As one of the five regional internet registries, the RIPE NCC is tasked with ensuring the long-term scalability, resilience, and security of the internet. Scalability we address through the allocation and registration of IP addresses and number resources, the ones that you don’t see but your computers need in order to communicate. IPv6 is an important thing here.
That’s what we need for scalability in the future. We also need to strengthen routing security by implementing Resource Public Key Infrastructure (RPKI), where providers can sign their routing announcements so we know that packets flow where they should. Resilience is also important: we assign and register AS numbers, which enable multi-homing, peering, and the creation of robust interconnections. All of this is needed for the new and fancy applications on top of the network to work. And how can we do this in the future? We need to continue to deepen collaboration with governments and between governments. We need to empower the national and regional IGFs; they’re crucial for localizing the global discussions and fostering bottom-up approaches.


Hans Petter Holen: And we need to enhance inclusivity and accessibility, and we need to translate this dialogue here into tangible progress. Important here is that we need to secure a permanent mandate for the IGF now, so that we can focus on the matters at hand, not on securing our future as a meeting place. I think that’s important. And one of the things we need to bear in mind here is that the internet is a public good and must serve humanity, to quote the minister from earlier this week. And yeah, I think I’ll leave it there and then pass it on.


Avri Doria: Thank you. And I want to give one piece of advice that I often get: when you get too short a time, please be sure to still speak slowly. Speaking too fast makes it harder for people to listen, and harder for those who translate. But thank you very much. So I apologize for the short time slots, but please. So Renata, coming from an early multi-stakeholder instantiation at a national level, I wonder if you could give your view on the three questions.


Renata Mielli: Thank you, Avri. Thank you, everybody, for being here with us this morning. I’m going to speak as the chair of the Brazilian Internet Steering Committee, CGI.br. Although the multi-stakeholder model of CGI.br predates the IGF, the IGF model, as a space for multi-stakeholder dialogue and for bringing new relevant issues to the attention of society, was very influential in inspiring the creation of the Brazilian IGF in 2011. The Brazilian IGF, in turn, has been essential in the creation of a very robust Brazilian multi-stakeholder Internet governance community, because we are not talking about an event, and we are not talking about a body; we are talking about an ecosystem. And the regional IGFs and the IGF put this ecosystem in motion, which is very, very important. And we have been very creative with our IGF, which we call FIB, for Fórum da Internet no Brasil in Portuguese, and it’s very important for us. In particular because this space has been an essential part of the development of Internet policies in Brazil, inspired by the multi-stakeholder nature of the IGF, and extremely influential in the public debates that led to the Marco Civil da Internet, the Brazilian Internet Civil Rights Framework, in 2014, which was signed at NETmundial, and the Brazilian General Data Protection Law in 2018. But regarding the third question, about how to strengthen the IGF and how to interlink the IGF with other processes like the GDC, I also point to the São Paulo multi-stakeholder guidelines that we launched yesterday, developed during NETmundial+10 last year. So I invite everybody to go to the CGI stand there and pick up the book. Bringing some aspects from it: the first step is to give the IGF a permanent mandate.
We are hearing this a lot here, and it requires the IGF to have more stable and robust funding. Second, we need to amplify the role of the IGF in the WSIS framework, with better integration with the WSIS Forum. The WSIS Forum is the platform for following up the action lines, but it’s a space that is more restricted to governments and to facilitators representing the various U.N. agencies responsible for each action line. So I think rethinking this governance structure is, in my view, one of the main challenges we have to face. And particularly regarding the GDC, I think the IGF has to be empowered as the main focal point for the follow-up of the Global Digital Compact implementation, integrated with the WSIS Forum, as I said, according to modalities to be defined, and the U.N. should avoid any discrepancy in that regard. Because when we have a lot of different spaces, different forums, it becomes very difficult for civil society, academia, and even governments from the global south to attend all of them, and that weakens the multi-stakeholder model. Well, I think I’m going to stop here. Thank you, Avri, for the opportunity. Thank you.


Avri Doria: Thank you. And next I come to you, Funke, and I guess as an entrepreneur, as a founder in Nigeria in the development area, I wonder what your perspective, what your company’s perspective would be on these questions.


Funke Opeke: Thank you, Avri, and good morning, everyone. Really great to have this opportunity to speak and reflect as I’m coming to the end of one phase of my career, retiring from MainOne, a company that I founded in West Africa to build digital infrastructure and close the digital divide. When we launched the submarine cable from Lagos to Portugal in 2010, internet penetration in our region was close to 10%. And as pioneers, we’ve had to lean on platforms like the IGF to learn best practices and to help build the ecosystem that’s been critical in growing that base to close to 50% across West Africa today. The cable extends into five countries directly, and we’re providing data access to 10 countries across West Africa. When I reflect at this stage of my career on what the IGF has meant: first, as pioneers in the global south, and I heard Bitange talk about access when Google didn’t exist, we needed access to information, be it policy information or engagement with stakeholders in civil society and the population, and we needed to bridge that gap with regulators, government policymakers, partners, and the content providers. We realized early on that just building the infrastructure was not sufficient to close the gap. A lot of work had to be done, and hence the role of platforms like the IGF in enabling us to really understand the ecosystem and all the players and stakeholders, how we work with them, and how we leverage those resources to build skill and capacity in our markets, to build out IXPs and internet interconnection points, and to address issues of data governance and cybersecurity as access grew and those became more critical issues.
So it’s really given us a platform, and not just for global north–global south dialogue, but for connecting with other global south partners. Very early on in the journey I met Bitange when he was the regulator in Kenya; Kenya was a couple of years ahead in terms of what they were doing, and we were able to learn from them and bring some of those practices to our markets. So that’s been the role. Sitting here at this point and looking at the issue of permanence, I agree, and I think with the polarized world that we have today, contemplating not having this kind of multi-stakeholder platform, recognizing it’s not decision-making for the Internet, is actually a very chilling thought. What would it be if the global south did not have a seat at the table? Now, there’s still a lot of work to be done, because as I said there is just 50% penetration across large parts of Africa. So there’s still work to be done on digital inclusion, on building up access, on addressing issues of skill and digital literacy and affordability, to really reach the very last person so no one gets left behind. The challenge is that the global north has a different set of issues with the acceleration of AI and the pace of digital transformation, and looking at it today, I’m not sure the divide is narrower than it was when I started this journey 15 years ago. So I think for the IGF, as you say: permanent status, engagement under the WSIS agenda, and deeper engagement with decision-makers on a global scale to really drive that objective for 2030, so that no one is left behind. That’s what I look out for in the future.


Avri Doria: Thank you very much. Now I’d like to move on to Qusai, who has been part of this IGF scene for as long as I can remember and instrumental in the Arab IGF. So from that perspective, how do you see the answers to the three questions?


Qusai Al Shatti: Thank you, Avri, and thank you. I’m so honored to be with you on this panel and with the distinguished panelists. I will look at it this way. In 2005 there were 500 million internet users. Today we have above 4.5 billion users. Internet-related organizations today are more open, more inclusive, more bottom-up in their structure and operation across the world. Diversity, multilingualism, a wider audience, and tools and mechanisms are widely available for us to use and benefit from. Broadband is widely available. Choice of access is widely available, at lower costs compared to, let’s say, 20 years ago. The multi-stakeholder process became the norm, became a culture, became the practice we use when we engage in policy dialogue or regulation or other matters. The digital economy, the enabling environment for innovation, and the greater role it plays in the GDPs of countries are today a fact. Reflecting this, it also became the norm for us at the regional and national level, where we took the conduct and the practice of governance to our regions and countries. What does that say to me? Governance works. 20 years after the inception of the IGF, the IGF, and internet governance, works and is successful. And maybe it’s the most successful outcome of the WSIS. And it worked because of who was behind it: the people who had the will and passion, from all segments of stakeholders, who wanted governance to work because they believed this is what is best for the Internet. Being part of them has been such a wonderful experience. Learning from them, exchanging knowledge and expertise, resolving our differences of point of view, yet united in agreeing that governance is the way to go. And that’s passion by itself. And this passion needs to continue. So moving forward, the IGF should continue because there is more work to do. And it is ongoing work.
And the best way to do it is to couple it with digital cooperation, where issues like AI governance, cross-border data, and data governance, as examples rather than an exhaustive list, need to be addressed within a platform like the IGF. And I’ll stop here.


Avri Doria: Thank you. Coming next to you, Chat, who with APC probably was my first introduction into the whole civil society part of this whole issue, having come out of the tech areas when I first got involved. So from that perspective, and you’ve also been there since the beginning, could you tackle the three questions, please?


Chat Garcia Ramilo: Thank you, Avri. And good morning, everyone. APC has been part of the IGF tribe for the last 20-plus years, so it’s an honor to be here. And I want to start by recognizing that, yes, APC has been deeply engaged with the IGF since its inception, from the early days of WSIS to the working group that helped actually form the IGF, which you were a part of, and I think many of the people sitting here were part of. We’ve worked alongside our fellow civil society stakeholders to shape its structure and content. For over 20 years, our contributions, we believe, have been integral to the IGF’s evolution. And in turn, this space has played a pivotal role in shaping our advocacy and our work. This is how we’ve learned, connected, and really engaged; we’ve also changed how we think because of the connections that this space has offered us. It has been an invaluable platform: a place to listen deeply, to speak, and, more importantly, to act. And it has amplified voices from the global south, which is something that we are really passionate about and what we bring to this space: often, the voices of those who are marginalized in digital governance conversations. And this is really important, because otherwise those perspectives cannot be heard. It has also shifted the focus of these conversations to center on human rights, inclusion, as you referred to, digital inclusion, and I think increasingly justice, values that are essential in shaping the digital future that we will all share. And I just want to remind you that we have done quite a few things to look at what the IGF has really achieved. Through initiatives like our Global Information Society Watch, we have tried, again, to bring voices from the global south, local voices, to see how this really plays out locally, because we have many members in different countries.
We’ve also contributed through the African School on Internet Governance, where it’s not only civil society that we’ve engaged; we’ve helped build the capacity of governments, regulators, civil society, and grassroots to engage more meaningfully, because what really makes the IGF is the people behind it, and I do think it’s important to build that capacity. For us at APC, the IGF, as everyone has said here, is not just an annual event. It’s what people refer to as a living ecosystem: a network of regional initiatives, intersessional work, and relationships that lead to tangible, real-world change. This platform has allowed us to advocate for policies that prioritize, for example, community-centered connectivity, and to challenge market models that leave entire communities without access, and I think that’s what you referred to in terms of the challenge of going beyond 50%. As part of that, in Kenya, for example, we have worked with the regulators on a license for community-centered connectivity. I think it’s this kind of connection that really adds value to the work that we all do. The IGF has also been crucial in helping us address critical issues, for example gender-based violence and defending sexual and reproductive rights in digital spaces, issues that you wouldn’t think should be part of a discussion of infrastructure. I think other people referred to this. And more than a decade ago, many of us here helped put forward a key principle: that human rights must apply online as they do offline. 2012 was when that really important resolution was adopted at the Human Rights Council, and I’m sure many of us were involved in that. What were once considered fringe issues have now been globally recognized and acted upon. And I want to end by referring to what it is that we need to do more of.
And to that last question of yours, Avri: one of the big issues now is crises and wars. And when I say crises and wars, it’s the wars, but it’s also crises, environmental crises, et cetera. And I think I heard this at the opening and at other sessions as well: it’s really important for us to look at the more difficult issues, the ones that challenge us. Last year, we helped organize a main session on securing access to the internet during times of war and crisis. And because of that, there will be a main session here where we will be discussing the norms and responsibilities of this multi-stakeholder internet community, particularly in relation to shutdowns. So we’re speaking about the critical resources that Hans alluded to. Part of it is that there is destruction of communication infrastructure, and we need to look at that: what are the norms that can help us really defend and protect the infrastructure? It’s not only about access; it’s also about the destruction of infrastructure in countries like Ukraine, Palestine, Sudan, and Myanmar. As we stand here today, 20 years after WSIS, the vision of a people-centered, inclusive digital future feels more urgent than ever. That’s how we feel. And to some extent, it is more under siege because of the corporate power and state control that are driving the issues we’ve been looking at. So one of the things we’re doing as part of our engagement with WSIS is being part of the Digital Justice Now Coalition, and we’ve launched a campaign here at the IGF. It’s a global civil society movement to reclaim digital power for the people. We call on this multi-stakeholder international community to take bold action, which we need at this time, so that we can continue to shape a democratic digital future for all of us.


Avri Doria: Thank you. Coming to you, Luca. Since you first came here, you’ve been among the most prolific of the volunteers, including on the book that just came out on community networks. So please, I’d like to turn to you, from your hard-working perspective, on all of this and where it goes. Please.


Luca Belli: Thank you very much, Avri, for the kind introduction. Thank you very much to all the friends who have organized this, particularly Olga and all the other friends who have invited me. I want to start by explaining what the impact on me has been, then how this has been a real engine for change in policymaking, and how this is, to some extent, an untold story. This is my 15th IGF. At the first one, I was a young PhD student analyzing how multi-stakeholder processes can build better quality of regulation. And then I met a young Markus Kummer, who, in his last months of tenure as secretary of the IGF, brought me in as an intern and allowed me to interact with and meet a lot of the stakeholders we know, who became mentors, then colleagues, and most of all friends. And I think this allowed us to construct, and this is also an excellent output of the IGF, trust amongst stakeholders. This is not something that you can artificially construct. You can build it only through relations. And this allowed me to be a very hyperactive convener of dynamic coalitions over the past years. These have been extremely powerful engines of cooperation and meaningful impact. I had the pleasure to start and help organize four: one on net neutrality, one on platform responsibility, one on community connectivity that has been extremely successful, and the last one on data and AI governance. And as I am an academic myself, we have always tried to include an academic approach, doing research to help explain what we are doing and to propose new policy solutions. This is what we have been doing for the past 15 years, and many organizations have used this work. The Council of Europe used both the report and recommendation on net neutrality and on platform responsibility for their own recommendations.
Multiple regulators in the Americas, in Mexico, Argentina, and Brazil, and even in Africa, in Kenya, used the work that we did with the Dynamic Coalition on Community Connectivity to explain what community networks are and to regulate them in a better way. And then there is also a little bit of frustration in my 15 years of experience, because all of this is not very visible. As Markus always says very well, the IGF is not very good at making its successes visible. One might even think that some stakeholders want the IGF to be irrelevant; I’ll let you come to your own answer to that question. But I think that we could easily do a lot more to prove that the IGF is relevant, not only talking about things but, as the very IGF mandate in paragraph 72(g) of the Tunis Agenda states, making issues visible and relevant to the public and to all stakeholders. And making all the reports and recommendations that we have elaborated over the past 20 years visible on the IGF website would be a very good first step, I believe. Thank you very much.


Avri Doria: Thank you. It’s a refrain that I’ve heard a couple of times over the course of the last couple of days: we need to become more visible. Coming to you, Isabelle, you occupy a rather critical position in this whole process we’re in now, in your role at the CSTD. So I’m wondering how you look at this, both from the perspective of what is the case and what could be the case?


Isabelle Lois: Thank you so much, Avri, for the question and for having me on this panel. It’s an honor to be able to speak on behalf of the CSTD, as Vice-Chair of the CSTD, but also as a representative of the Swiss government. I’ve not been in this space as long as most of my co-panelists here, but I have seen the value and the tremendous work that has been done throughout the years at the IGF. And I can only echo what all of you said before on the successes and the many, many examples of what we’ve been able to take out of these discussions. I think the IGF is the primary space where we have discussions on digital governance and digital issues. It is the agenda-setting, issue-spotting place, and that is essential to be able to feed into the rest of the system. If we want to regulate something or create a policy, we need to first identify the key issues, what the different actors and stakeholders are thinking, and what they are concerned about, and this is the place where we can do that. So that role is truly essential and something we absolutely must preserve, highlight, and strengthen. That’s maybe the first point I want to raise, and it echoes what all of you have been saying right before as well. The other important point is to see what we can do next: what would the IGF look like after this year? I think there are many different ideas. If we keep this focus of the IGF as the issue-spotting place, and we see all the examples of how this has been used, I think we need to connect it more throughout the entire WSIS system. The IGF has its own ecosystem with its different parts, the policy networks, the dynamic coalitions, and all of that great work, so that interconnectedness is one part, and then there is connecting to the rest of the system.
And by that I mean bringing the messages and outcomes of each IGF session, of the policy networks, of all of the NRIs, all of the work that is being done, to UNGIS, bringing it to the CSTD, and this is where my role maybe comes in: bringing the main issues that we see the community, all of the stakeholders, are concerned about, and then making sure that we’re feeding that into the rest of the UN system. And I think this is where we could definitely do better. There is potential, there is space, and now with the WSIS+20 review we have the opportunity to look at that, to make sure that the entire system and architecture is well connected. This is something we’re trying to do at the CSTD; there are a few ideas on this in the CSTD resolution from this year, and it is also one of the main points the Swiss government is pushing. So I think that’s my main answer for you.


Avri Doria: Thank you. And thank you all. So you’ve given us a good sampling of both the values, the personal importance, and some of where we can go further. Now going to the next stage, and I’ll be coming back to you all later, probably in reverse order, so you’ll get to be last this time. But at this point I’d really like to go to the participants sitting in the room, and we also need to have the Mentimeter put up. We have a Mentimeter going; I was told that they would put up the URL, oh, there it is. So people should join it; it focuses on the answer to the first question. I’d also like to first go to the remote moderator, Jimson Olufuye, to see whether there is any commentary online from those joining us on Zoom, et cetera. Jimson, is there anything yet?


Online moderator: Yes, Avri, thank you very much. We have a lot of comments on the remote platform. First and foremost, Dr. Robinson Sibbe will be providing his own virtual intervention, and after that I will read out the comments and the questions. Dr. Sibbe, can you go ahead for your two to three minutes?


Online participant 1: Yeah, thank you very much, Jimson. I am Dr. Robinson Sibbe, cybersecurity and digital forensics expert, the CEO and lead forensic examiner of Digital Footprints Nigeria. Thanks for the opportunity. I will be making my intervention from my perspective as a private sector player in the cybersecurity ecosystem in Nigeria. In practical terms, the IGF has helped contextualize the challenges we face in Nigeria and in Africa, and there are quite a lot of them: challenges like the rising caseloads of cybercrime and the complexities of investigating it, and by complexities I include the global and jurisdictional complexities we face; building trust in digital platforms; and navigating the challenges of information governance and data protection in an environment where many are still digitally excluded. Now, as a cybersecurity and digital forensics company, our work quite often sits at the intersection of policy, support for law enforcement, and technology, and the IGF model has been instrumental in showing us the value of inclusive and collaborative governance, not just as an ideal but as a practical tool for problem solving. Over the years, these interactions through the IGF have deepened our understanding of the global dimensions of Internet policy, from data protection and trust frameworks to digital inclusion and resilience. More importantly, it has humanized these issues by showing how policies crafted in one part of the world can affect both lives and systems in others, including in Africa, where infrastructure gaps and policy disconnects quite often amplify these vulnerabilities. These lessons have been crucial in our practice, both from a proactive defense point of view and from an investigative, forensic point of view. Now, looking ahead, I want the IGF to move even closer to implementation.
I would love to see more localized action, for instance, where IGF outcomes are being translated into toolkits for cybercapacity building in African countries or technical working groups formed to address specific regional challenges like Internet shutdowns or ransomware targeting public institutions or public-centric processes like elections. I believe the IGF should continue to be a bridge between regions, between policy and practice and between aspirations and actions. For people like me on the front lines in Nigeria, this kind of impact is not just valuable. It’s absolutely necessary. Thank you very much.


Avri Doria: Thank you very much. I’m very happy to have a voice from the online participants. So thank you. I’m going to reread the questions, because I don’t see anybody jumping up wanting to make a comment. Oh, I do have one in the back there. But first, let me reread the questions for you all, just so everyone remembers. The first question was: what has the IGF meant to you, and what do you want it to mean to you in the future? Two: how have the IGF multi-stakeholder model and its realizations, for example the IGF meeting, NRIs, DCs, BPFs, policy networks, et cetera, made an impact on your organization or on Internet issues in your country or region? And three: how can the IGF play a more impactful, expanded role to contribute to the implementation of the WSIS goals and the GDC? I see the spotlight shining on the first one, and please make sure, Anriette, that you introduce yourself.


Anriette Esterhuysen: Thanks, Avri. I’m Anriette Esterhuysen; I was with APC, still work with APC, the Association for Progressive Communications, and was a MAG Chair in the past. Very briefly, I think what it has meant for me personally is that it has created an impatience in me with all other forums that are not linked to the WSIS, and that is because the IGF so uniquely connects us with policymakers and implementers. I now find civil-society-only spaces, for example, deeply frustrating, because I feel I’m surrounded by colleagues but I’m not sure how I’m going to have impact. What this space gives is the ability to be both with like-minded actors and with those that are different, that might have different perspectives, but together you can bring about change. But for the future, I really want to echo what Isabelle said. The IGF, and its links to the WSIS, creates a link to people-centered development and to people. We live with so much fragmentation in how we talk about digital, and so many of the new fora, the Global Digital Compact, for example, or the Artificial Intelligence Dialogue, put the emphasis on the technology, not on the society and not on the people. That’s the power of the WSIS; that’s what the WSIS has given the IGF, and that’s what the IGF gives to dialogue about digital governance and cooperation. So just to reinforce what Isabelle said: a future IGF must retain this link to people-centered development and to society. Information society is not used much as a term anymore, but please let’s not lose it, because that’s ultimately what we want to change. We want a more equal, more inclusive society. Thank you.


Avri Doria: And I don’t see anyone else at the mic; at first I thought I had seen several other people there. I know you have another online one, good. And I know many of you, and I know most of you aren’t shy and have opinions. So please take advantage of this opportunity to express those opinions on those three questions. But in the meantime, I’ll go back to you, Jimson, for an online contribution.


Online moderator: Okay. Thank you, Avri. We have this question from Nathan Latte from IGF Côte d’Ivoire: what is the concrete impact of the IGF on Internet governance in developing countries? And we have a number of interesting comments. Someone said: for me, the IGF has been a very impactful platform for learning, connection, and advocacy; going forward, I hope it becomes a place and space where more actionable policy outcomes are shaped through inclusive multistakeholder dialogue, as is being envisioned in Africa. Mark Carvell from the UK also has a very interesting comment. He says he agrees that the multistakeholder approach has become the norm in many countries since the WSIS. In the UK government, after the first IGF in Athens, we decided, one, to work with the UK ccTLD registry in setting up a national IGF, the UK IGF, to prepare for the next global IGF; and, two, to establish a multistakeholder advisory group in our government ministry that we would meet with at regular intervals. So maybe I’ll stop here for now.


Avri Doria: Thank you. I heard a couple questions in there, but I had trouble really parsing some of them. I want to go to the microphone, but then I want to come back to you, and if you could just sort of pick out the questions that were there, and then I’ll look to someone up here to give an answer. But please, at the microphone, introduce yourself and give us your comment. This microphone that is lit at the moment.


Audience: Thank you, Avri, and thank you for being persistent. My name is Roelof Meijer. I’m the CEO of SIDN, the registry for the .nl country code top-level domain. I don’t go to IGFs for myself, so I find the first question a bit confusing. But in my opinion, the most important thing that the IGF has brought us, and I’ve been to roughly 15 IGF meetings, is a platform where we can discuss, in a global context, topics that are important for the Internet. What it really brings is that it makes the multistakeholder model function, because it helps us explain our stakes and interests, and listen to the other stakeholders explaining their interests. And I think that was lacking in the period before the IGF: we were all, as organisations and even sometimes as individuals, defending our own stakes, and we were not listening well to the other stakeholders explaining theirs. And the only way the multistakeholder model can function is if we understand and respect the stakes of the other stakeholders. I think that’s one of the big outcomes of this whole process. If I compare the first IGF in 2006 with the last ones I’ve followed over recent years, there is much more understanding among all participants of the stakes of the other stakeholders. On the last question, what should change or improve to make it function even better: in the Netherlands IGF, our national IGF, a couple of years ago we produced a one-page document on that, and I’ll make sure that you get it after this session.


Avri Doria: Thank you, thank you. Before I go back to Jimson for the questions, I see we also have one at the thing, please introduce yourself and make your comment.


Audience: Well, my name is Juan Fernandez, I'm from the Ministry of Communication of Cuba, but I want to tell you here what the IGF has meant for me personally. I think that for me it had a very special impact, because it proved that no matter how old you are, no matter how learned you are, you always have to keep an open mind, because you could be wrong, and you could be proved wrong very easily. Many people here know me: I participated in the negotiation of both WSIS outcome documents, I was part of the WGIG, and we discussed the creation of a forum such as this. So I can tell you that when this began, and I went to the first one, I was very sceptical. I thought, well, this is just giving some breadcrumbs to civil society because they were shut out of the process. But I was proven wrong. Over the years I have been learning and feeling the value of having this conversation, as the previous speaker said. It is important not only, as Andrea said, in terms of the personal or institutional capacity that each of us has here, but also the personal enrichment. I, at least, have been enriched as a person by having a personal relationship and dialogue with persons from other points of view, from other realities. That's what I want to tell you, Avri. Thank you.


Avri Doria: Thank you. And definitely been enriched by knowing you. I’ll go to this one, which is, Stephanie, please, and introduce yourself.


Audience: Hi, my name is Stephanie Perrin. I'm from Canada. I'm a sporadic attendee at the IGF. I was, however, at the initial WSIS, and I echo the previous speaker's remarks. I was working for the Canadian government in the telecom sector at the time, and I was a little cynical about the potential outputs from the IGF. But I think I was wrong. I agree with Henriette that the impact is on the people, and we have to come back to that. Now, one of our colleagues is even more cynical than I was: Milton Mueller has issued his, more or less, call for remarks on his IGP blog saying the time has come, it's over, time to move on to something new. So this has prompted me all week to ask, well, hang on, how would we measure? How do we measure the impacts? Because it has been such a success as a multi-stakeholder innovator and enricher and stimulant, and it basically produces the dialogue that we want; it's a parliament of a multi-stakeholder effort. How do you measure that? I've been thinking about metrics all week, and I think it's quite hard, but it's something we should focus on, because the impacts range from the Nigerian cybercrime industry, I don't mean the bad guys, I mean the good guys fighting it, to local initiatives, to bringing broadband to underserved areas. All those are different things to measure, but it would be worth it to do it. So that's a comment. Thank you.


Avri Doria: And I’ll go to, first of all, Jimson, did you have the questions that I kind of, I sometimes have trouble hearing what’s going, being said there, so please, if there were a specific question. Okay.


Online moderator: We have Emily Taylor. Emily Taylor is going to intervene maybe for two minutes, then I will read some comments later. Emily.


Online participant 2: Thank you so much, Jimson, for giving me the floor and for this very interesting discussion. I just wanted to highlight a study, an evidence-based study that we did for the UK government. It was published last year and I’ve put the link in the chat. It was an evidence-based exploration of the impact of the IGF and really looking from the perspective of the global south, if I can put it in that way. And we found after 48 expert interviews and also a large-scale text analysis enriched by AI and ML, we looked at thematic dynamism of the IGF over a long period and came out with a series of both direct and indirect impacts for those communities. The direct impacts are the spontaneous emergence of national and regional IGFs and the youth movement. These were never anticipated in the Tunis agenda, and yet they happened. And they’ve both, they’ve brought young people in particular to the IGF and had an onboarding ramp for new policy makers who then go back to their home countries and bring and receive messages of that link between the local and the international. And another aspect sort of on the direct impact is the emergence of internet exchange points in Africa. There’s a very clear line from the IGF to the people who built out that network, and that has a really concrete impact in reducing latency, reducing costs, and improving speed of connectivity within Africa. Indirect impacts, this is a forum where discussions happen first on emerging issues, and you can really see that in the thematic dynamism. You can really see that prior to 2017, no one was talking about disinformation and fake news. Prior to 2018, there was not very much on artificial intelligence, and then it really exploded in the 2023 arena, and you really see that from the thematic analysis. 
And one perhaps controversial point, and I think it goes back to what Roelof was saying in his intervention, is that many of our interviewees attributed the successful IANA transition to dialogue in the IGF. The IGF had been a forum which really reduced the temperature of a very polarizing issue, allowed stakeholders to understand what was at stake for others, not just themselves, and laid the groundwork, in their opinion, for a successful IANA transition. Much more to say, but I'll leave it there. Thank you for giving me the floor, Avri.


Avri Doria: Thank you very much, and very good to see you. Okay. At the moment we've got about five people in the lines, so that's probably where I'll try to cut the line. So after encouraging you all to get in line, I do have a deadline, and I am going to want to give at least a minute or so to each of our panelists sitting here to respond. I guess I'll go next to this line. Please introduce yourself and give us your comment. Very clever. Yes. Please give us your comment.


Audience: Thank you very much. Yes, my name is Auke Pals, and I've been attending the IGF for the past ten years. What it has brought me is that I've had so many interesting discussions with all of you, mainly in sessions, but mostly in the corridor chats, which are even more valuable than the main sessions happening here in the workshop rooms. So the network and the way we can interact at this forum is really valuable for me and for the work that we are doing. My wish for the future is that we can have more of a discussion in workshops, because my observation over the past ten years is still that it's mostly one-way, with not that many discussions taking place, and that's something that really needs to be improved to make this even more valuable for me and for others. Thank you.


Avri Doria: Thank you. Go to Markus on this line. Please introduce yourself.


Audience: I’m Markus Kummer, and I’ve been around for a while. My name has been mentioned by Luca, yes, I was here since the inception like you, Avri, and I will not answer your questions. I’d rather go back to the inception of the IGF and back to the working group on Internet Governance when we actually came up with the idea and we conceived the IGF as a platform for dialogue. As we all know, the WGIC also came up with a definition on Internet Governance, and there are still a lot of misunderstandings around. We did have a highly academic definition, working definition, which not everybody understands, but we also came up with a practical description of what Internet Governance means, and that was it relates to the physical infrastructure, that is the cables, nuts and bolts, the logical infrastructure, that is the domain name system, the Internet addresses, but it also relates to the use and abuse of the Internet. So the IGF, right from the beginning, relied on this groundwork from the WGIC and dealt with issues mainly related to the use and abuse of the Internet, and I do recall the four themes we set for the very first meeting, openness, inclusion, diversity, access, and inclusion, I think, yes, but they were in essence a mixture between technical and societal issues, and that’s what the IGF has been dealing with all along, and the notion that the IGF is more of a technical issue as it is described within the GDC is blatantly wrong. It creates an artificial dichotomy between digital cooperation and Internet Governance. 
The IGF is the prime organisation dealing with Internet Governance, but then there are much broader issues, and the IGF has always dealt with these much broader issues, such as has been mentioned, artificial intelligence, and, and, and, and, and as to the impact, Emily said it very nicely, I think also the various IXPs have been set up as a direct consequence are a tremendous success stories of the IGF, and I also agree with Emily that the IANA transition would not have been possible without the IGF, because the IGF taught people to talk with each other, and that has been mentioned by many of the speakers, not just talk, but also to listen to each other, and that actually showed that there is merit in having this mixture between multi-stakeholders that people meet who would not otherwise meet under the same roof, and they talk to each other and learn from each other, and I hope that this will continue, and as far as the impact is concerned, yes, the IGF was as a platform for dialogue, not as a decision-making body but as a body that can shape decisions that are taken elsewhere. And then we do actually produce outcomes. I mean, Luca mentioned his dynamic coalition, there are other dynamic coalitions and they really produce tangible outcomes but they are not so-called official IGF outcomes but they can be used and they have been used, taken up. There is also other dynamic coalitions, there is one on rights and principles that produced a very good paper on rights and principles that has been taken up in other organizations. So there is clearly something to build on and I think, as Luca already said, we are not good actually at celebrating success and promoting our success and I think making this okay, just look at what we are doing and be open and proud about it and show the world that we actually are more than just a talking shop. That’s all I have to say. Thank you.


Avri Doria: Thank you very much. Very good to hear from you. Please, to the next microphone, and please keep your comments relatively brief; I'm sorry to do that to you.


Audience: My name is Jasmine Maffei. So this is Jasmine. I actually didn't expect this would be very personal and emotional for me, and I actually struggled to come up to speak, because before me there were a lot of amazing, great senior leaders in the industry, so I still feel like a nobody. As a youth, and as someone who grew up in a capital-intensive and hierarchical society, I truly value this platform, the IGF, as bottom-up and as a place where you can easily talk to people, even those in senior roles with many years of experience. I truly appreciate that I could also have a chance to speak my feelings, because in Hong Kong, where I'm from, it's really challenging to even just communicate and reach out to senior people and leaders like you. Another thing that I really value is that we can bring an issue from the local level to the table here at the global level and exchange good practices. At the same time, any issue brought by another local community can be brought back to my local community as a good reference. I really put effort into explaining what the IGF is to my local community. It is challenging, because no one knows what it is and no one understands it; I feel there are still many misunderstandings. But what you all have been sharing, the successes and the concrete cases, is a very good reference for me to create my own local good practice: how the IGF makes a positive impact on our local community, and how we create value that speaks to our local stakeholders and really makes it relevant. Thank you very much.


Avri Doria: Thank you. And I must say how happy I am to see younger people speaking, because if it were just us old ones, there wouldn't be that much hope. Please.


Audience: Hello everyone, this is Piu from Myanmar. I think the IGF means a lot to us, because through its principles and its model we could in some way give input to a UN initiative at the global level. Frankly speaking, it's not very easy to share our input with a global-level organization like the UN, especially from a grassroots initiative; young people face a lot of challenges in terms of political and economic effects. But an initiative like the UN IGF can reach young people, with a very open approach to collecting their input, respecting their voices and bridging their voices to the global level. I think that is a very effective approach that we could take at the IGF, and vulnerable community voices need to be heard not only at the IGF but also in other aspects of global-level policy making. The IGF itself is open to all people: everyone can attend the conference, and you don't need to worry about being expected to have an invitation. This approach is very meaningful to us as young people, to participate and input our voices at the IGF. So the future we would like to see as young people is this: please include us in the future of the IGF, and please officially recognize our voices as part of the multi-stakeholder community, meaningfully engaged and participating at the IGF. Thank you.


Avri Doria: Thank you very much. And okay, I've got two left, two of the original great ones, and please be very brief, because I want to give everybody up here on stage the chance to give us their biggest takeaway from the discussion we've heard, in terms of what's ahead of us. So, no. Are you guys doing the old gentleman thing, you go first, no, you go first? Let's save time. Yes.


Audience: Here I go. My name is Raul Echeverria, I'm from Uruguay, a MAG member, and I have been around for many years. I think the IGF is the most innovative experience I have seen in my life in international governance, not only related to the internet but in general. It has inspired the way we work; I think we have developed a culture of dialogue and of dealing with our differences in a civilized manner. I have seen sessions at this IGF, as at others, where people with opposing positions discuss in a very positive and constructive manner. When we work on the ground, we realize that not everybody works in the same manner, so I think what we are doing here is very important, as is the impact it can have on the rest of the world. As for my wish for the future, I would like to see a more simplified IGF, where it is simpler to become involved, and we should help newcomers become involved meaningfully. I would also like to see an IGF that is better connected with other existing processes and mechanisms. Most important is an IGF that connects better with policy making at the global and regional level; that is, at the end of the day, where things really happen. So thank you very much.


Avri Doria: Thank you. Bertrand, last word from the microphones.


Audience: Thank you, Avri. My name is Bertrand de La Chapelle, I'm the Executive Director of the Internet and Jurisdiction Policy Network. Two quick comments. One: for me, the thing that I'm most happy about in the last few exercises of the IGF is the recognition of the distinction between issue framing, agenda setting, and decision shaping versus decision making. This is the core function of the IGF, and it is particularly important because in all international multilateral processes, putting something on the agenda usually takes many years, because there's always a lack of unanimity and somebody objects to the issue being on the agenda. The goal of the IGF should increasingly be to act early and facilitate a common picture of the key topics. The second thing, quickly: in the discussion on the WSIS Plus 20 review, we should not focus only on the renewal of the mandate of the IGF, whatever the duration. We need to reach a new step, and we need to do what we did with the WGIG in 2004-05, i.e. have a group that discusses, one, the evolution of the mandate and the focus and scope of the IGF along the lines that I just mentioned, and, two, the institutionalization of the structure. We have all the components at the moment, but it's like a car that is limited in its speed, although it has all the capacities to do much, much more. So I hope that in 2026 a group of sorts will be set up in a multi-stakeholder fashion to address those two issues. Thank you.


Avri Doria: Thank you very much. And so now I’m gonna start with you, Isabel, and then basically just move across the line, give us a quick, if you can, and certainly what you would take away from what we’ve heard, from what all was said.


Isabelle Lois: Thank you, thank you, Avri. A lot of good points and comments have been made, so it's difficult to be quick, but I will try. One of the main points I would like to highlight is taking the time to listen. I think that's an important point. A lot of good information and knowledge is shared throughout the sessions at the IGF. So take the time to listen, take the time to write down what has been said, read the reports; even if there is no consensus, use the information that has been given here. We can use it as capacity building. I've learned tremendously throughout the years attending the IGF and participating in the intersessional work, so I think this is really something we have to take advantage of. And it goes in line with what I said earlier about using the IGF and the information we get here, connecting it with the rest of the WSIS structure. It also goes with listening to all stakeholders and all positions. I'll keep it at that.


Avri Doria: Thank you, Isabel. Luca, please.


Luca Belli: Yeah, I think that after 20 years it's clear that the IGF is an excellent forum, a forge of ideas, where stakeholders convene every year to speak their minds. There is a relatively low barrier to entry, although it is high for global South participants who have to pay to come to Norway or other exotic places, and this could be solved with a little more help. But it works very well in terms of allowing people to speak their minds freely. It's also a very good engine for multi-stakeholder cooperation, because you find here potential partners to implement your initiatives, not only in terms of policy. Again, the example of community networks is very good: people here have found friends and partners to create new community networks and give access to people. So there are things that may be idealistic, like having a very well-funded IGF, but also things that could be very pragmatic and simple, like giving more space between one IGF and the next so that people can organize themselves, or not limiting the number of sessions so much, because if this is a very open forum that gives people the possibility to speak, then cutting down the number of sessions gives fewer people an incentive to come here and speak. Those are a few little pragmatic things we could do to help people better exploit the potential of the IGF.


Avri Doria: Fantastic, thank you, Luca. Chat, please.


Chat Garcia Ramilo: What I’ve heard here is that there’s really nothing else we can say about the, everything has been said about the importance of IGF. I think that is a, that’s not, to me, it’s not a debate. It’s sort of like, it is a reality here. I think the celebration for me is something that we could do more of. In the feminist circles, we do say that we need to have joy, in what we do, and I think this is part of what we need. It’s not a, it’s a very difficult time for everyone, and I think that part of celebration and saying and recognizing that we need that for ourselves and also for our community is so, so important. But having said that, I think the second thing I wanna say is that to be able to make it more robust, as you say, Luca, is that the entry, is that who we bring into the, in here, because, yes, there’s much more that we can bring. More people, more perspectives, I think that is something that we can continue to do, because it will provide that energy, it will provide that connectivity for this community.


Avri Doria: Thank you, Chat.


Qusai Al Shatti: I will join my colleagues in that. Do you still have that microphone? I will join my colleagues in that I have nothing to add to what was said on the floor. But from the perspective of developing countries, one of the most important aspects of internet governance is introducing the multi-stakeholder process in our part of the world, where policy making and regulation are now fully engaged in a multi-stakeholder process, and where the decision maker or policy maker now firmly believes that if you want to introduce effective regulation or policy making, you need to engage the multi-stakeholder community. That's one of the most important aspects for us when it comes to the IGF in our part of the world, and I'll stop here.


Avri Doria: Thank you. Funke, please.


Funke Opeke: I think a lot has been said. For me personally, what resonates is the multi-stakeholder participation from the global south with other global stakeholders in shaping the future governance of the internet.


Avri Doria: Renata, please.


Renata Mielli: Everything has been said, but just to close my participation: as somebody said, the IGF is not only about technology or digital or internet issues. The IGF is about their impact on people and society, and it's about building connections. The IGF is about dialogue, and about multiplying worldwide the conviction and the inspiration that multi-stakeholder processes can build better policies and can contribute to a less unequal future for everybody. So that's what I want to say, and I hope we can continue this conversation, to the point that the IGF gets a permanent mandate. Thank you very much.


Avri Doria: Thank you so much. Hans-Peter, please.


Hans Petter Holen: Yeah, thank you. As one of the techies here, I remember being one of the young guys in the room, and I realize I'm not anymore; that was 30 years ago. One of the things I picked up from the floor here is that this national and regional engagement was not envisioned when this was started. I think this is really important to recruit and train the next generation. We've talked about regional and national IGFs, but I also want to give a shout-out to the schools that have developed into the internet governance schools, the summer schools, that actually train professionals who may have a subject that is interesting to take to the global governance scene. And if we want to achieve what Norway's Prime Minister, Jonas Gahr Støre, said in his opening speech, that the internet should not be governed by the few but by the world, we really need to continue this path of the IGF. Thank you.


Avri Doria: Thank you. Betanga, please.


Bitange Ndemo: Thank you. The only regret I have is that all these years we have talked in silos. We deal with infrastructure, we deal with violence, we deal with… Looking forward, I would want to see discussions in the IGF focusing on what I call thinking system-wide. This technology has come: how can it be used to solve problems? AI has come, the most consequential technology ever. We now need to talk not just about how people get it, but about how they benefit from this technology and what is needed to make sure that people benefit. For example, if we took it to the farmers and educated them on how to use AI in agriculture, productivity would improve. So if we bring a systems approach to everything from now on, 30 years from now we will be talking about a different world. Thank you.


Avri Doria: Thank you so much. Thank you all, panelists. Thank you all, stakeholders and participants sitting in the house and online. I just want to say that I really do love the multi-stakeholder model and all its variants, and I really do hope that the IGF continues and that we continue to talk together. I think it took a long time to get us talking, and now our time is over. Thank you all. Thank you.


B

Bitange Ndemo

Speech speed

123 words per minute

Speech length

461 words

Speech time

223 seconds

IGF introduced multi-stakeholder consultation model that made policy implementation easier despite initial resistance

Explanation

As a policymaker in Kenya, Ndemo initially found the new method of consultations with stakeholders very painful, but once the process was completed with stakeholders, implementation became much easier. This represented a shift from traditional top-down policymaking to inclusive consultation.


Evidence

Personal experience as policymaker in Kenya where traditional approach was to make policy and ensure implementation, but IGF introduced stakeholder consultation process


Major discussion point

Personal Impact and Value of IGF


Topics

Legal and regulatory | Development


Agreed with

– Qusai Al Shatti
– Online moderator
– Audience

Agreed on

Multi-stakeholder model became the norm through IGF influence


IGF discussions helped build internet infrastructure when Africa had only one gig capacity for entire continent

Explanation

When IGF began, Africa had minimal internet infrastructure with only one gigabit of capacity through Intelsat for the entire continent. IGF meetings provided crucial learning about what was needed to make internet technology work for people and helped justify infrastructure investments.


Evidence

Africa had only one gig of capacity for whole continent through Intelsat; World Bank meetings where he referenced ‘Field of Dreams’ movie to justify infrastructure investment


Major discussion point

Infrastructure Development and Technical Impact


Topics

Infrastructure | Development


IGF needs to focus on system-wide thinking to help people benefit from technologies like AI

Explanation

Ndemo regrets that discussions have been conducted in silos, dealing separately with infrastructure, violence, and other issues. He advocates for a systems approach where new technologies like AI are discussed in terms of how they can solve problems and benefit people directly.


Evidence

Example of using AI in agriculture to improve farmer productivity


Major discussion point

Addressing Contemporary Challenges


Topics

Development | Economic


Disagreed with

– Other speakers

Disagreed on

Approach to IGF discussions – systems thinking vs. specialized focus


H

Hans Petter Holen

Speech speed

160 words per minute

Speech length

622 words

Speech time

233 seconds

IGF serves as arena where technical realities meet policy aspirations for internet coordination

Explanation

The IGF provides a rare and essential platform where technical infrastructure requirements intersect with policy goals. This is crucial as digital governance agendas accelerate globally, ensuring that internet coordination systems remain stable while enabling appropriate regulation of services built on top.


Evidence

RIPE NCC’s role in IP address allocation, IPv6 implementation, routing security through resource public key infrastructure, and ASN registration for multi-homing


Major discussion point

Infrastructure Development and Technical Impact


Topics

Infrastructure | Legal and regulatory


IGF ecosystem includes national/regional IGFs, dynamic coalitions, and intersessional work beyond annual meetings

Explanation

The IGF is not just a single annual event but a living ecosystem that includes national and regional IGFs, dynamic coalitions, best practice forums, and policy networks. This comprehensive approach starts at the local level and scales upward.


Evidence

RIPE NCC supports IGFs and network operator groups both financially and with speakers to help communities organize and identify emerging issues


Major discussion point

Multi-stakeholder Model and Its Realizations


Topics

Legal and regulatory | Development


Agreed with

– Renata Mielli
– Chat Garcia Ramilo

Agreed on

IGF is an ecosystem beyond annual meetings


IGF needs permanent mandate to focus on matters rather than securing future meeting place

Explanation

To be effective, the IGF requires a permanent mandate so that participants can concentrate on substantive issues rather than spending energy on securing the forum’s continued existence. This stability is essential for long-term planning and impact.


Major discussion point

Future Improvements and Permanent Mandate


Topics

Legal and regulatory


Agreed with

– Renata Mielli
– Funke Opeke

Agreed on

IGF needs permanent mandate for stability and effectiveness


IGF must remain venue that protects internet infrastructure while enabling necessary service regulation

Explanation

The IGF must maintain its role in promoting clarity between internet coordination (which keeps the internet running through stable systems) and digital governance (which shapes social transformations). This distinction is crucial to avoid jeopardizing core infrastructure while regulating services appropriately.


Major discussion point

Addressing Contemporary Challenges


Topics

Infrastructure | Legal and regulatory


IGF governance schools and summer schools train professionals for global governance engagement

Explanation

Beyond national and regional IGFs, governance schools and summer schools play a crucial role in training professionals who can contribute meaningfully to global governance discussions. These educational initiatives help recruit and prepare the next generation of internet governance participants.


Evidence

Recognition that he was once among the young participants 30 years ago but is no longer, emphasizing the need for continuous recruitment of new generations


Major discussion point

Capacity Building and Knowledge Transfer


Topics

Development | Sociocultural


O

Online participant 2

Speech speed

136 words per minute

Speech length

416 words

Speech time

183 seconds

IGF directly contributed to emergence of Internet Exchange Points in Africa, reducing latency and costs

Explanation

Research showed a clear line from the IGF to the people who built out the network of Internet Exchange Points across Africa. This infrastructure development had concrete impacts in reducing latency, lowering costs, and improving connectivity speed within the continent.


Evidence

Evidence-based study for UK government with 48 expert interviews and large-scale text analysis using AI and ML


Major discussion point

Infrastructure Development and Technical Impact


Topics

Infrastructure | Development


Evidence-based studies show direct and indirect impacts including spontaneous emergence of national IGFs

Explanation

Comprehensive research documented both direct impacts (such as the spontaneous emergence of national and regional IGFs and the youth movement) and indirect impacts (such as early discussions of emerging issues). The study revealed thematic dynamism, showing how new issues like disinformation and AI emerged in IGF discussions before becoming mainstream.


Evidence

Study published for the UK government using 48 expert interviews and AI/ML-enhanced text analysis; thematic analysis showing the emergence of disinformation discussions before 2017 and the explosion of AI discussions in 2023


Major discussion point

Recognition and Visibility of Impact


Topics

Legal and regulatory | Development


Q

Qusai Al Shatti

Speech speed

130 words per minute

Speech length

500 words

Speech time

229 seconds

Multi-stakeholder process became the norm in policy making and regulation after IGF introduction

Explanation

Over 20 years, the multi-stakeholder approach evolved from an experimental concept to become standard practice in policy dialogue and regulation. This cultural shift represents one of the most significant achievements of the IGF, making collaborative governance the expected norm rather than the exception.


Evidence

Growth from 500 million internet users in 2005 to over 4.5 billion today; internet organizations becoming more open, inclusive, and bottom-up; broadband availability with lower costs; digital economy’s greater role in national GDPs


Major discussion point

Multi-stakeholder Model and Its Realizations


Topics

Legal and regulatory | Development


Agreed with

– Bitange Ndemo
– Online moderator
– Audience

Agreed on

Multi-stakeholder model became the norm through IGF influence


R

Renata Mielli

Speech speed

115 words per minute

Speech length

672 words

Speech time

347 seconds

Multi-stakeholder approach inspired Brazilian Internet governance community and policy development

Explanation

Although Brazil’s multi-stakeholder model predated the IGF, the IGF model significantly influenced the creation of the Brazilian IGF in 2011 and helped build a robust Brazilian internet governance community. The IGF created an ecosystem that put multi-stakeholder governance in motion beyond just events or bodies.


Evidence

Creation of the Brazilian IGF (Fórum da Internet no Brasil) in 2011; development of the Brazilian Internet Civil Rights Framework in 2014, signed at NET Mundial; Brazilian General Data Protection Law in 2018


Major discussion point

Multi-stakeholder Model and Its Realizations


Topics

Legal and regulatory | Human rights


Agreed with

– Hans Petter Holen
– Chat Garcia Ramilo

Agreed on

IGF is an ecosystem beyond annual meetings


IGF influenced creation of Brazilian Internet Civil Rights Framework and Data Protection Law

Explanation

The multi-stakeholder nature of the IGF was extremely influential in public debates that led to major Brazilian internet legislation. The IGF model provided the framework for inclusive policy development that resulted in landmark digital rights and data protection laws.


Evidence

Brazilian Internet Civil Rights Framework (Marco Civil da Internet) in 2014 signed at NET Mundial; Brazilian General Data Protection Law in 2018


Major discussion point

Policy Development and Governance Impact


Topics

Legal and regulatory | Human rights


IGF should be empowered as main focal point for Global Digital Compact implementation

Explanation

The IGF should serve as the primary platform for following up on Global Digital Compact implementation, integrated with the WSIS forum according to modalities to be defined. This would avoid discrepancies and make it easier for civil society, academia, and Global South governments to participate meaningfully.


Evidence

São Paulo multi-stakeholder guidelines launched during the session, developed during NET Mundial plus 10


Major discussion point

Future Improvements and Permanent Mandate


Topics

Legal and regulatory | Development


Agreed with

– Hans Petter Holen
– Funke Opeke

Agreed on

IGF needs permanent mandate for stability and effectiveness


IGF requires better integration with WSIS forum and more stable funding

Explanation

The IGF needs a permanent mandate requiring more stable and robust funding, and better integration with the WSIS forum. Currently, the WSIS forum is more restricted to governments and UN agency facilitators, making it difficult for diverse stakeholders to participate across multiple spaces.


Evidence

WSIS forum being restricted to governments and facilitators representing various UN agencies responsible for action lines


Major discussion point

Future Improvements and Permanent Mandate


Topics

Legal and regulatory | Development


Agreed with

– Hans Petter Holen
– Funke Opeke

Agreed on

IGF needs permanent mandate for stability and effectiveness


I

Isabelle Lois

Speech speed

192 words per minute

Speech length

717 words

Speech time

223 seconds

IGF serves as issue-spotting and agenda-setting place essential for policy development

Explanation

The IGF functions as the primary space for digital governance discussions, serving crucial agenda-setting and issue-identification roles. This function is essential because, in traditional multilateral processes, it takes years to get issues on agendas due to the lack of unanimity, whereas the IGF can identify emerging concerns early.


Major discussion point

Policy Development and Governance Impact


Topics

Legal and regulatory


IGF serves as capacity building platform with tremendous learning opportunities

Explanation

The IGF provides extensive learning opportunities through its sessions and intersessional work, serving as an important capacity building mechanism. Participants gain knowledge that can be used even without consensus, and the information sharing function is valuable for professional development.


Evidence

Personal experience of learning tremendously through years of IGF attendance and participation in intersessional work


Major discussion point

Capacity Building and Knowledge Transfer


Topics

Development | Sociocultural


O

Online moderator

Speech speed

109 words per minute

Speech length

255 words

Speech time

139 seconds

IGF helped establish UK national IGF and multi-stakeholder advisory groups in government ministries

Explanation

After the first IGF in Athens, the UK government made two key decisions: working with the UK ccTLD registry to establish a national IGF for preparation for global IGFs, and establishing multi-stakeholder advisory groups within government ministries for regular consultation.


Evidence

UK government decisions following first IGF in Athens to create UK IGF and establish multi-stakeholder advisory groups in government ministries


Major discussion point

Multi-stakeholder Model and Its Realizations


Topics

Legal and regulatory | Development


Agreed with

– Bitange Ndemo
– Qusai Al Shatti
– Audience

Agreed on

Multi-stakeholder model became the norm through IGF influence


O

Online participant 1

Speech speed

134 words per minute

Speech length

389 words

Speech time

173 seconds

IGF contextualized cybersecurity challenges and showed value of inclusive governance for problem-solving

Explanation

From a cybersecurity perspective in Nigeria, the IGF helped contextualize challenges like rising cybercrime, jurisdictional complexities in investigations, and building trust in digital platforms. The IGF model demonstrated that inclusive collaborative governance is not just an ideal but a practical problem-solving tool.


Evidence

Rising caseloads of cybercrime in Nigeria; global and jurisdictional complexities in investigations; challenges of information governance and data protection in digitally excluded environments


Major discussion point

Capacity Building and Knowledge Transfer


Topics

Cybersecurity | Legal and regulatory


IGF provided platform for learning, connection, and advocacy with hope for more actionable policy outcomes

Explanation

The IGF has been an impactful platform for learning, making connections, and conducting advocacy work. Looking forward, there is hope that it will become a space where more actionable policy outcomes are shaped through inclusive multi-stakeholder dialogue.


Major discussion point

Personal Impact and Value of IGF


Topics

Legal and regulatory | Development


IGF provides open access for grassroots initiatives and young people to participate globally

Explanation

The IGF’s open approach allows grassroots initiatives and young people to participate without needing formal invitations, making it accessible for those facing political, economic, and other challenges. This inclusive approach enables meaningful engagement from diverse voices in global policy-making.


Evidence

Challenges faced by young people, including political and economic barriers; the IGF’s open conference attendance policy


Major discussion point

Global South Participation and Inclusion


Topics

Development | Sociocultural


Agreed with

– Funke Opeke
– Chat Garcia Ramilo
– Audience

Agreed on

IGF enabled Global South participation and capacity building


IGF should move closer to implementation with localized action and technical working groups

Explanation

The speaker wants to see the IGF move beyond discussion toward more concrete implementation, including localized action where IGF outcomes are translated into practical toolkits for cyber-capacity building and technical working groups addressing specific regional challenges.


Evidence

Examples of translating IGF outcomes into toolkits for cyber-capacity building in African countries; technical working groups for issues like internet shutdowns or ransomware targeting public institutions


Major discussion point

Addressing Contemporary Challenges


Topics

Cybersecurity | Development


A

Audience

Speech speed

135 words per minute

Speech length

2682 words

Speech time

1186 seconds

IGF makes multi-stakeholder model function by helping stakeholders understand each other’s interests

Explanation

The IGF’s most important contribution is creating a platform where stakeholders can explain their interests and listen to others explain theirs in a global context. This mutual understanding and respect for different stakeholder positions is essential for making the multi-stakeholder model work effectively.


Evidence

Comparison between first IGF in 2006 and recent meetings showing much more understanding among participants of other stakeholders’ positions; personal experience attending roughly 15 IGF meetings


Major discussion point

Multi-stakeholder Model and Its Realizations


Topics

Legal and regulatory


Agreed with

– Bitange Ndemo
– Qusai Al Shatti
– Online moderator

Agreed on

Multi-stakeholder model became the norm through IGF influence


IGF proved skeptics wrong and provided personal enrichment through dialogue with diverse viewpoints

Explanation

Initially skeptical about the IGF’s value, viewing it as merely giving ‘breadcrumbs to civil society,’ the speaker was proven wrong over the years. The IGF provided personal enrichment through relationships and dialogue with people from different perspectives and realities.


Evidence

Personal experience participating in WSIS outcome document negotiations and the WGIG; initial skepticism about the IGF’s purpose; years of personal enrichment through IGF participation


Major discussion point

Personal Impact and Value of IGF


Topics

Sociocultural | Legal and regulatory


IGF offers bottom-up approach where youth can easily communicate with senior leaders

Explanation

As someone from a hierarchical society (Hong Kong), the speaker values the IGF’s bottom-up approach that allows easy communication with senior leaders and experienced professionals. This accessibility is particularly valuable for youth who face challenges reaching senior people in their home contexts.


Evidence

Personal experience from Hong Kong where it’s challenging to communicate with senior people and leaders; contrast with IGF’s accessible environment


Major discussion point

Personal Impact and Value of IGF


Topics

Sociocultural | Development


IGF represents most innovative experience in international governance with culture of civilized dialogue

Explanation

The IGF is described as the most innovative experience in international governance, not just for internet-related issues but for governance in general. It has developed a culture of dialogue and civilized discussion of differences, even among people with opposing positions.


Evidence

Observation of sessions where people with opposing positions discuss constructively; contrast with how people work on the ground outside IGF


Major discussion point

Personal Impact and Value of IGF


Topics

Legal and regulatory | Sociocultural


IGF allows bringing local issues to global table and exchanging good practices

Explanation

The IGF enables participants to bring local community issues to the global level and exchange good practices. Issues raised by other local communities can also be brought back to one’s own local community for reference, creating valuable cross-pollination of ideas and solutions.


Evidence

Personal efforts to explain IGF value to local Hong Kong community despite misunderstandings and lack of awareness


Major discussion point

Global South Participation and Inclusion


Topics

Development | Sociocultural


Agreed with

– Funke Opeke
– Chat Garcia Ramilo
– Online participant 1

Agreed on

IGF enabled Global South participation and capacity building


IGF needs better connection with policy making at global and regional levels

Explanation

While appreciating the IGF’s dialogue culture and multi-stakeholder innovation, there’s a need for better connection with actual policy-making processes at global and regional levels, since that’s where real change ultimately happens.


Major discussion point

Future Improvements and Permanent Mandate


Topics

Legal and regulatory


IGF should be simplified to facilitate newcomer involvement and meaningful participation

Explanation

The IGF should become more accessible and simplified to make it easier for newcomers to become meaningfully involved. This would help facilitate broader participation and engagement from those who are new to internet governance discussions.


Major discussion point

Future Improvements and Permanent Mandate


Topics

Development | Sociocultural


IGF produces tangible outcomes through dynamic coalitions that have been adopted by other organizations

Explanation

While the IGF was designed as a platform for dialogue rather than decision-making, it actually produces tangible outcomes through mechanisms like dynamic coalitions. These outputs, while not official IGF outcomes, have been adopted and used by other organizations, demonstrating real impact.


Evidence

Dynamic coalition on rights and principles that produced papers taken up by other organizations; various dynamic coalitions producing usable outcomes


Major discussion point

Recognition and Visibility of Impact


Topics

Legal and regulatory | Human rights


Agreed with

– Luca Belli
– Chat Garcia Ramilo

Agreed on

IGF needs better visibility and recognition of its achievements


C

Chat Garcia Ramilo

Speech speed

153 words per minute

Speech length

1191 words

Speech time

466 seconds

IGF amplified voices from global south often marginalized in digital governance conversations

Explanation

The IGF has served as an invaluable platform that has specifically amplified voices from the Global South, which are often marginalized in digital governance conversations. This inclusion of diverse perspectives is essential because otherwise these viewpoints cannot be heard in policy discussions.


Evidence

APC’s work through Global Information Society Watch bringing voices from global south and local perspectives; APC’s many members in different countries


Major discussion point

Global South Participation and Inclusion


Topics

Development | Human rights


Agreed with

– Funke Opeke
– Online participant 1
– Audience

Agreed on

IGF enabled Global South participation and capacity building


IGF helped establish human rights principles online and addressed gender-based violence in digital spaces

Explanation

The IGF has been crucial in addressing issues like gender-based violence and defending sexual and reproductive rights in digital spaces. More than a decade ago, the IGF community helped establish the key principle that human rights must apply online as they do offline, which has now become globally recognized.


Evidence

2012 Human Rights Council resolution establishing that human rights apply online as they do offline; work on gender-based violence and sexual and reproductive rights in digital spaces


Major discussion point

Policy Development and Governance Impact


Topics

Human rights | Legal and regulatory


IGF addressed critical issues like internet shutdowns, wars, and crisis communication infrastructure

Explanation

The IGF has evolved to address more difficult and challenging issues, including securing access to internet during times of war and crisis. This includes addressing the destruction of communication infrastructure and developing norms and responsibilities for the multi-stakeholder internet community regarding shutdowns.


Evidence

Main session organized on securing internet access during war and crisis; upcoming main session on norms regarding shutdowns; examples from Ukraine, Palestine, Sudan, and Myanmar


Major discussion point

Addressing Contemporary Challenges


Topics

Cybersecurity | Human rights


IGF needs better celebration of successes and making achievements more visible

Explanation

There is a need for more celebration and recognition of IGF achievements, including bringing joy to the work being done. This celebration is important not just for the community itself but also for demonstrating the IGF’s value and impact to the broader world.


Evidence

Reference to feminist circles emphasizing the need for joy in work; recognition that it’s a difficult time for everyone


Major discussion point

Recognition and Visibility of Impact


Topics

Sociocultural


Agreed with

– Luca Belli
– Audience

Agreed on

IGF needs better visibility and recognition of its achievements


F

Funke Opeke

Speech speed

129 words per minute

Speech length

740 words

Speech time

343 seconds

IGF enabled pioneers in global south to learn best practices for building digital ecosystem

Explanation

As pioneers building digital infrastructure in West Africa, the IGF provided a crucial platform for learning best practices and understanding the ecosystem needed to grow internet penetration from close to 10% to 50% across the region. The IGF helped bridge gaps between different stakeholders including regulators, government, and content providers.


Evidence

Main One submarine cable from Lagos to Portugal launched in 2010; internet penetration growth from ~10% to ~50% across West Africa; cable extending to 5 countries directly and serving 10 countries


Major discussion point

Infrastructure Development and Technical Impact


Topics

Infrastructure | Development


Agreed with

– Chat Garcia Ramilo
– Online participant 1
– Audience

Agreed on

IGF enabled Global South participation and capacity building


IGF enabled global south stakeholders to have seat at the table in polarized world

Explanation

In today’s polarized world, the thought of not having a multi-stakeholder platform like the IGF is chilling, especially considering what would happen if the Global South did not have a seat at the table. The IGF provides essential representation for developing regions in global internet governance discussions.


Evidence

Recognition that despite progress, only 50% penetration across large parts of Africa means significant work remains on digital inclusion


Major discussion point

Global South Participation and Inclusion


Topics

Development | Legal and regulatory


L

Luca Belli

Speech speed

157 words per minute

Speech length

789 words

Speech time

300 seconds

IGF enabled understanding of different stakeholder perspectives and built trust through relationships

Explanation

The IGF allowed for the construction of trust among stakeholders through personal relationships and interactions, which cannot be artificially created but must be built through sustained engagement. This trust-building enabled meaningful cooperation and understanding across different stakeholder groups.


Evidence

Personal experience over 15 IGFs, starting as a PhD student; mentorship from Markus Kummer; becoming convener of four dynamic coalitions on net neutrality, platform responsibility, community connectivity, and data/AI governance


Major discussion point

Personal Impact and Value of IGF


Topics

Legal and regulatory | Sociocultural


IGF work on dynamic coalitions influenced Council of Europe recommendations and multiple regulators

Explanation

The academic research and policy recommendations produced through IGF dynamic coalitions have been adopted by major organizations and regulators. The Council of Europe used reports on net neutrality and platform responsibility for their own recommendations, while multiple regulators used community connectivity work for better regulation.


Evidence

Council of Europe using net neutrality and platform responsibility reports; regulators in Mexico, Argentina, Brazil, Kenya using community connectivity work; collaboration with Canadian Commission


Major discussion point

Policy Development and Governance Impact


Topics

Legal and regulatory | Infrastructure


IGF success stories like IANA transition and infrastructure development are not well-publicized

Explanation

Despite significant achievements over 15 years, the IGF is not effective at making its successes visible to the broader public. There may be stakeholders who prefer the IGF to appear irrelevant, but better visibility of reports and recommendations on the IGF website would help demonstrate relevance.


Evidence

All reports and recommendations elaborated over 20 years not being visible on the IGF website; Markus Kummer’s observation that the IGF is not good at making its successes visible


Major discussion point

Recognition and Visibility of Impact


Topics

Legal and regulatory


Agreed with

– Chat Garcia Ramilo
– Audience

Agreed on

IGF needs better visibility and recognition of its achievements


A

Anriette Esterhuysen

Speech speed

134 words per minute

Speech length

296 words

Speech time

132 seconds

IGF created impatience for non-multi-stakeholder forums and connects policymakers with implementers

Explanation

The IGF’s unique approach of connecting policymakers with implementers has created an impatience for other forums that lack this integration. Civil society-only spaces now feel frustrating because they lack the diversity needed to create real impact, while the IGF provides both like-minded actors and different perspectives working together.


Evidence

Personal experience finding civil society-only spaces frustrating due to lack of connection to implementers and policymakers


Major discussion point

Personal Impact and Value of IGF


Topics

Legal and regulatory | Sociocultural


Agreements

Agreement points

IGF needs permanent mandate for stability and effectiveness

Speakers

– Hans Petter Holen
– Renata Mielli
– Funke Opeke

Arguments

IGF needs permanent mandate to focus on matters rather than securing future meeting place


IGF should be empowered as main focal point for Global Digital Compact implementation


IGF requires better integration with WSIS forum and more stable funding


Summary

Multiple speakers emphasized that the IGF requires a permanent mandate to provide stability, enable focus on substantive issues rather than survival, and ensure robust funding for effective operation


Topics

Legal and regulatory


Multi-stakeholder model became the norm through IGF influence

Speakers

– Bitange Ndemo
– Qusai Al Shatti
– Online moderator
– Audience

Arguments

IGF introduced multi-stakeholder consultation model that made policy implementation easier despite initial resistance


Multi-stakeholder process became the norm in policy making and regulation after IGF introduction


IGF helped establish UK national IGF and multi-stakeholder advisory groups in government ministries


IGF makes multi-stakeholder model function by helping stakeholders understand each other’s interests


Summary

Speakers consistently agreed that the IGF successfully transformed multi-stakeholder consultation from an experimental approach to standard practice in policy-making and governance


Topics

Legal and regulatory | Development


IGF is an ecosystem beyond annual meetings

Speakers

– Hans Petter Holen
– Renata Mielli
– Chat Garcia Ramilo

Arguments

IGF ecosystem includes national/regional IGFs, dynamic coalitions, and intersessional work beyond annual meetings


Multi-stakeholder approach inspired Brazilian Internet governance community and policy development


IGF amplified voices from global south often marginalized in digital governance conversations


Summary

Speakers agreed that the IGF functions as a comprehensive ecosystem including national/regional IGFs, dynamic coalitions, and year-round activities, not just an annual event


Topics

Legal and regulatory | Development


IGF enabled Global South participation and capacity building

Speakers

– Funke Opeke
– Chat Garcia Ramilo
– Online participant 1
– Audience

Arguments

IGF enabled pioneers in global south to learn best practices for building digital ecosystem


IGF amplified voices from global south often marginalized in digital governance conversations


IGF provides open access for grassroots initiatives and young people to participate globally


IGF allows bringing local issues to global table and exchanging good practices


Summary

Multiple speakers emphasized how the IGF provided crucial platforms for Global South voices, capacity building, and knowledge exchange that would otherwise be marginalized


Topics

Development | Sociocultural


IGF needs better visibility and recognition of its achievements

Speakers

– Luca Belli
– Chat Garcia Ramilo
– Audience

Arguments

IGF success stories like IANA transition and infrastructure development are not well-publicized


IGF needs better celebration of successes and making achievements more visible


IGF produces tangible outcomes through dynamic coalitions that have been adopted by other organizations


Summary

Speakers agreed that despite significant achievements, the IGF is poor at publicizing its successes and needs better mechanisms to celebrate and showcase its impact


Topics

Legal and regulatory | Sociocultural


Similar viewpoints

All three speakers emphasized the critical role of IGF in bridging technical infrastructure development with policy needs, particularly in developing regions

Speakers

– Bitange Ndemo
– Hans Petter Holen
– Funke Opeke

Arguments

IGF discussions helped build internet infrastructure when Africa had only one gig capacity for entire continent


IGF serves as arena where technical realities meet policy aspirations for internet coordination


IGF enabled pioneers in global south to learn best practices for building digital ecosystem


Topics

Infrastructure | Development


These speakers shared the view that IGF serves as a crucial policy development platform that has directly influenced national legislation and international human rights frameworks

Speakers

– Renata Mielli
– Isabelle Lois
– Chat Garcia Ramilo

Arguments

IGF influenced creation of Brazilian Internet Civil Rights Framework and Data Protection Law


IGF serves as issue-spotting and agenda-setting place essential for policy development


IGF helped establish human rights principles online and addressed gender-based violence in digital spaces


Topics

Legal and regulatory | Human rights


These speakers emphasized the personal transformation and relationship-building aspects of IGF, highlighting how it changed their approach to governance and policy work

Speakers

– Luca Belli
– Anriette Esterhuysen
– Audience

Arguments

IGF enabled understanding of different stakeholder perspectives and built trust through relationships


IGF created impatience for non-multi-stakeholder forums and connects policymakers with implementers


IGF proved skeptics wrong and provided personal enrichment through dialogue with diverse viewpoints


Topics

Sociocultural | Legal and regulatory


Unexpected consensus

Former skeptics becoming strong advocates

Speakers

– Audience (Juan Fernandez)
– Audience (Stephanie Perrin)

Arguments

IGF proved skeptics wrong and provided personal enrichment through dialogue with diverse viewpoints


IGF represents most innovative experience in international governance with culture of civilized dialogue


Explanation

It was unexpected that speakers who were initially cynical or skeptical about the IGF’s potential became some of its strongest advocates, demonstrating the forum’s ability to convert doubters through direct experience


Topics

Sociocultural | Legal and regulatory


Technical community and policy makers agreeing on governance approach

Speakers

– Hans Petter Holen
– Bitange Ndemo
– Qusai Al Shatti

Arguments

IGF serves as arena where technical realities meet policy aspirations for internet coordination


IGF introduced multi-stakeholder consultation model that made policy implementation easier despite initial resistance


Multi-stakeholder process became the norm in policy making and regulation after IGF introduction


Explanation

The consensus between technical infrastructure providers and policy makers on the value of multi-stakeholder governance was unexpected, given traditional tensions between technical and policy communities


Topics

Infrastructure | Legal and regulatory


Agreement on need for system-wide thinking across different sectors

Speakers

– Bitange Ndemo
– Online participant 1
– Chat Garcia Ramilo

Arguments

IGF needs to focus on system-wide thinking to help people benefit from technologies like AI


IGF should move closer to implementation with localized action and technical working groups


IGF addressed critical issues like internet shutdowns, wars, and crisis communication infrastructure


Explanation

Unexpected consensus emerged around moving beyond siloed discussions to address complex, interconnected challenges requiring coordinated responses across different domains


Topics

Development | Cybersecurity


Overall assessment

Summary

The discussion revealed remarkably strong consensus across diverse stakeholders on the IGF’s fundamental value, its role in establishing multi-stakeholder governance as the norm, its function as a comprehensive ecosystem beyond annual meetings, and its critical importance for Global South participation. There was also broad agreement on the need for a permanent mandate, better visibility of achievements, and evolution toward more implementation-focused outcomes.


Consensus level

Very high level of consensus, with no fundamental disagreements identified. The implications are significant: this unified support from technical, policy, civil society, and government stakeholders provides a strong foundation for the IGF’s continuation and evolution. The consensus suggests the IGF has successfully proven its value across different communities and regions, creating a solid base for securing a permanent mandate and expanding its role in global digital governance.


Differences

Different viewpoints

Approach to IGF discussions – systems thinking vs. specialized focus

Speakers

– Bitange Ndemo
– Other speakers

Arguments

IGF needs to focus on system-wide thinking to help people benefit from technologies like AI


Various speakers focusing on specialized aspects like infrastructure, policy, human rights


Summary

Ndemo advocates for moving away from siloed discussions toward integrated systems thinking, while other speakers continue to address specific domains and specialized issues


Topics

Development | Legal and regulatory


Unexpected differences

No significant unexpected disagreements identified

Explanation

The session was remarkably consensual, with speakers largely reinforcing each other’s points about IGF’s value and importance. Even potential areas of disagreement were presented as complementary perspectives rather than conflicting views


Overall assessment

Summary

The discussion showed minimal disagreement, with speakers largely reinforcing each other’s positive assessments of IGF’s impact. The few differences were more about emphasis and approach rather than fundamental disagreements about goals or values


Disagreement level

Very low level of disagreement. This appears to be a consensus-building session where speakers were celebrating IGF’s achievements and advocating for its continuation. The lack of significant disagreement may reflect either genuine consensus among IGF supporters or the session’s design as a celebratory rather than critical examination. This high level of agreement strengthens the case for IGF’s permanent mandate but may also indicate limited critical reflection on areas needing improvement


Partial agreements

Similar viewpoints

All three speakers emphasized the critical role of IGF in bridging technical infrastructure development with policy needs, particularly in developing regions

Speakers

– Bitange Ndemo
– Hans Petter Holen
– Funke Opeke

Arguments

IGF discussions helped build internet infrastructure at a time when Africa had only one gigabit of capacity for the entire continent


IGF serves as arena where technical realities meet policy aspirations for internet coordination


IGF enabled pioneers in global south to learn best practices for building digital ecosystem


Topics

Infrastructure | Development


These speakers shared the view that IGF serves as a crucial policy development platform that has directly influenced national legislation and international human rights frameworks

Speakers

– Renata Mielli
– Isabelle Lois
– Chat Garcia Ramilo

Arguments

IGF influenced creation of Brazilian Internet Civil Rights Framework and Data Protection Law


IGF serves as issue-spotting and agenda-setting place essential for policy development


IGF helped establish human rights principles online and addressed gender-based violence in digital spaces


Topics

Legal and regulatory | Human rights


These speakers emphasized the personal transformation and relationship-building aspects of IGF, highlighting how it changed their approach to governance and policy work

Speakers

– Luca Belli
– Anriette Esterhuysen
– Audience

Arguments

IGF enabled understanding of different stakeholder perspectives and built trust through relationships


IGF created impatience for non-multi-stakeholder forums and connects policymakers with implementers


IGF proved skeptics wrong and provided personal enrichment through dialogue with diverse viewpoints


Topics

Sociocultural | Legal and regulatory


Takeaways

Key takeaways

The IGF has successfully established multi-stakeholder governance as the norm in internet policy-making globally, transforming how governments, civil society, private sector, and technical community collaborate


The IGF ecosystem extends far beyond annual meetings to include national/regional IGFs, dynamic coalitions, policy networks, and capacity-building initiatives that create year-round engagement


The IGF has had concrete infrastructure impacts, particularly in the Global South, including the development of Internet Exchange Points in Africa and submarine cable infrastructure that increased internet penetration from 10% to 50% in West Africa


The IGF serves as a critical ‘issue-spotting’ and agenda-setting platform where emerging digital governance challenges are first identified and discussed before entering formal policy processes


The forum has successfully created trust and understanding between different stakeholder groups by providing a space for listening to and respecting different perspectives and interests


The IGF has directly influenced major policy developments including the Brazilian Internet Civil Rights Framework, data protection laws, and Council of Europe recommendations


The platform has been particularly valuable for Global South participation, providing access to policy discussions and best practices that would otherwise be unavailable


The IGF maintains focus on people-centered development and societal impact rather than purely technical or commercial considerations


Resolutions and action items

Secure a permanent mandate for the IGF to provide stability and enable focus on substantive issues rather than institutional survival


Establish more stable and robust funding mechanisms for the IGF


Better integrate the IGF with the WSIS forum and empower it as the main focal point for Global Digital Compact implementation


Improve visibility and documentation of IGF successes and outcomes on the IGF website


Enhance support for Global South participation through increased financial assistance and capacity building


Simplify IGF processes to facilitate meaningful participation by newcomers


Strengthen connections between IGF outcomes and policy-making processes at global and regional levels


Establish a working group similar to the 2004-05 WGIG to address IGF mandate evolution and institutional structure by 2026


Develop more interactive workshop formats to encourage discussion rather than one-way presentations


Create toolkits and technical working groups to address specific regional challenges like internet shutdowns and cybersecurity


Unresolved issues

How to effectively measure and quantify the IGF’s diverse impacts across different sectors and regions


The challenge of maintaining relevance as the digital divide persists despite infrastructure improvements, with Global North and South facing increasingly different sets of challenges


How to balance the open, inclusive nature of the IGF with the need for more actionable policy outcomes


The fragmentation of digital governance discussions across multiple forums (GDC, AI dialogues, etc.) and how to maintain coherence


How to address the growing polarization in global politics while maintaining the IGF’s multi-stakeholder character


The sustainability of volunteer-driven initiatives and dynamic coalitions within the IGF ecosystem


How to transition from system-wide thinking to practical implementation of emerging technologies like AI for societal benefit


The challenge of maintaining the IGF’s people-centered focus as technology-centric approaches dominate other forums


Suggested compromises

Recognize the distinction between the IGF’s role in issue-framing, agenda-setting, and decision-shaping versus actual decision-making, allowing it to maintain its dialogue function while feeding into formal policy processes


Balance the need for permanent mandate with flexibility to evolve the IGF’s scope and focus based on emerging challenges


Integrate IGF more closely with WSIS structures while maintaining its unique multi-stakeholder character and bottom-up approach


Address the tension between simplifying participation and maintaining the rich ecosystem of intersessional work and specialized initiatives


Find ways to celebrate successes and increase visibility without compromising the IGF’s non-decision-making nature


Balance global coordination with local relevance through stronger national and regional IGF networks


Maintain the IGF’s broad scope while developing more focused technical working groups for specific challenges


Thought provoking comments

But once you’ve gone through the whole process with the stakeholders, implementation became much, much easier. For those who are younger, at the time, there was no Google. I think there was Netscape, AltaVista, that’s what was there. We didn’t know what exactly internet will do. But thank God it went the IGF way, otherwise it would have been a private sector company selling its services to the people.

Speaker

Bitange Ndemo


Reason

This comment provides crucial historical context and frames the IGF’s role in preventing internet commercialization. It highlights how the multi-stakeholder approach was initially ‘painful’ but ultimately more effective than traditional top-down policy making.


Impact

This opening comment set the tone for the entire discussion by establishing the IGF’s foundational importance and its role in shaping internet governance away from pure commercialization. It provided a historical anchor that other speakers referenced throughout.


The IGF has been a rare and essential arena where technical realities meet policy aspirations… we need to protect the internet coordination, which keeps it running through stable interoperable systems. And we need to strengthen internet governance, which shapes how we use it through shared norms and policies. And we need to guide digital governance, which shapes what it becomes in terms of social transformations.

Speaker

Hans Petter Holen


Reason

This comment introduces a sophisticated three-layer framework distinguishing internet coordination, internet governance, and digital governance. It challenges the common conflation of these concepts and provides analytical clarity.


Impact

This framework helped structure subsequent discussions by providing clear conceptual boundaries. It influenced how other speakers approached the technical versus policy aspects of internet governance.


I think the IGF, and its links to the WSIS, creates a link to people-centered development and to people. I think we live with so much fragmentation in how we talk about digital, and I think so many of the new fora, Global Digital Compact, for example, Artificial Intelligence Dialogue, puts the emphasis on the technology, not on the society, and not on the people.

Speaker

Anriette Esterhuysen


Reason

This comment identifies a critical distinction between technology-centered and people-centered approaches to digital governance, challenging the direction of newer international forums.


Impact

This observation shifted the discussion toward examining the IGF’s unique value proposition compared to other digital governance forums, emphasizing its focus on societal impact rather than just technological advancement.


So I can tell you that when this began, and I went to the first one, I was very sceptical. I think, well, this is just, we’re giving some breadcrumbs to the civil society because they were shunned out of the process, so this is just for that, you know. But I was proven wrong.

Speaker

Juan Fernandez (Cuba Ministry of Communication)


Reason

This honest admission of initial skepticism followed by genuine conversion provides powerful testimony to the IGF’s effectiveness. Coming from a government representative, it carries particular weight.


Impact

This personal transformation narrative added emotional depth to the discussion and demonstrated the IGF’s ability to change minds even among initially skeptical government officials, lending credibility to claims about its impact.


I think the IGF is the most innovative experience I have seen in my life in international governance, not only related to internet but in general. It has inspired me in the way that we work, I think we have developed a culture of dialogue and deal in a civilized manner with our differences.

Speaker

Raul Echeverria


Reason

This comment positions the IGF as a broader innovation in international governance beyond just internet issues, suggesting it has created new models for global cooperation.


Impact

This elevated the discussion from focusing solely on internet governance to considering the IGF as a template for international cooperation more broadly, expanding the scope of its perceived significance.


The goal of the IGF should be increasingly to be early on and facilitate the common picture of the key topics… We need to reach a new step and we need to do what we did with the WGIG in 2004-05, i.e. having a group that discusses, one, the evolution of the mandate and the focus and scope of the IGF… and, second, the institutionalization of the structure.

Speaker

Bertrand de La Chapelle


Reason

This comment provides a concrete roadmap for IGF evolution, distinguishing between agenda-setting and decision-making functions while proposing specific institutional reforms.


Impact

This intervention shifted the discussion from celebrating past achievements to concrete future planning, introducing specific proposals for structural evolution that other speakers could build upon.


My only regret I have is that all the years we have thought in silos. We deal with the infrastructure, we deal with violence, we deal with… Now, looking forward, I would want to see discussions in IGF focusing in what I call we think system-wide.

Speaker

Bitange Ndemo


Reason

This critique of siloed thinking challenges the IGF’s current approach and calls for more integrated, systems-thinking approaches to address complex technological challenges like AI.


Impact

This closing comment introduced a critical perspective on the IGF’s methodology, suggesting that despite its successes, it needs to evolve toward more holistic approaches to remain relevant for emerging technologies.


Overall assessment

These key comments shaped the discussion by creating a narrative arc from historical validation to future evolution. The conversation moved through several phases: establishing historical legitimacy (Ndemo, Fernandez), defining conceptual frameworks (Holen, Esterhuysen), demonstrating personal transformation (Fernandez, Echeverria), and proposing concrete reforms (de La Chapelle, Ndemo’s closing). The most impactful comments challenged assumptions – whether about initial skepticism, the uniqueness of the IGF model, or the need for systemic thinking. Together, they created a rich dialogue that balanced celebration of achievements with critical analysis of future needs, ultimately reinforcing the IGF’s value while acknowledging areas for improvement.


Follow-up questions

What is the concrete impact of the IGF on Internet governance in developing countries?

Speaker

Nathan Latte from IGF Côte d’Ivoire


Explanation

This question seeks specific, measurable outcomes of IGF’s influence on policy and governance structures in developing nations, which is important for demonstrating the forum’s effectiveness and value.


How do we measure the impacts of the IGF?

Speaker

Stephanie Perrin


Explanation

She noted the difficulty in measuring IGF’s success as a multi-stakeholder innovator and suggested developing metrics to quantify different types of impacts, from local initiatives to global policy influence.


How can we make IGF outcomes and reports more visible on the IGF website?

Speaker

Luca Belli


Explanation

He expressed frustration that the IGF’s successes and policy recommendations from dynamic coalitions and other work are not well-documented or easily accessible, limiting their impact and visibility.


How can we better connect the IGF with policy making at global and regional levels?

Speaker

Raul Echeverria


Explanation

This addresses the need to strengthen the link between IGF discussions and actual policy implementation, which is where real-world change occurs.


How can we establish a working group to discuss the evolution of IGF’s mandate and institutionalization of its structure?

Speaker

Bertrand de La Chapelle


Explanation

He suggested creating a group similar to the WGIG from 2004-05 to address IGF’s evolving role and structural improvements, which is crucial for the forum’s future development.


How can we reduce barriers to entry for Global South participation in IGF?

Speaker

Luca Belli


Explanation

He noted the high financial barriers for Global South countries to attend IGF meetings in expensive locations, which limits diverse participation and undermines the multi-stakeholder model.


How can we adopt a systems-wide approach to technology discussions in IGF?

Speaker

Bitange Ndemo


Explanation

He advocated for moving beyond siloed discussions to examine how technologies like AI can be systematically applied to solve real-world problems, such as improving agricultural productivity.


How can we officially recognize youth voices as part of the multi-stakeholder model?

Speaker

Piu from Myanmar


Explanation

This addresses the need for formal recognition and meaningful engagement of young people in IGF processes, ensuring intergenerational participation in internet governance.


How can we simplify IGF to make it easier for newcomers to become meaningfully involved?

Speaker

Raul Echeverria


Explanation

This focuses on improving accessibility and reducing complexity for new participants, which is essential for maintaining the forum’s relevance and expanding its community.


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Parliamentary Closing Closing Remarks and Key Messages From the Parliamentary Track

Session at a glance

Summary

This discussion focused on finalizing an output document from the parliamentary track at the Internet Governance Forum (IGF), which addressed information integrity, combating online harms, and protecting freedom of expression. Andy Richardson from the Inter-Parliamentary Union explained that the document captures key themes around three main areas: parliaments’ law-making roles, platforms’ responsibilities, and questions of power distribution between these entities. The central conclusion emphasized the fundamental importance of cooperation and dialogue between parliaments, public authorities, and various stakeholders including technical companies.


Participants provided feedback on the draft document, with suggestions to include references to environmental impacts of AI technologies, digital inclusion for marginalized groups, and balancing security with freedom of expression. Some proposed deletions regarding IGF participation and civil society references were rejected as they didn’t reflect the majority consensus. Several speakers raised concerns about moving beyond discussion to concrete action, with Anne McCormick questioning how to ensure the statement has credibility and weight through monitoring and follow-up mechanisms.


The Parliamentary Assembly of the Mediterranean expressed three main reservations: respect for national sovereignty in digital regulation, need for balance between security and freedom of expression, and equitable inclusion of Southern and Mediterranean countries. Other participants emphasized the urgency of developing specific instruments and frameworks, with suggestions for creating a digital governance radar similar to climate policy tracking systems. The session concluded with commitments to continue parliamentary exchanges, track AI policy developments, and organize future collaborative events, including a major parliamentary event on responsible AI to be hosted by Malaysia’s Parliament.


Keypoints

## Major Discussion Points:


– **Output Document Development and Consensus Building**: The discussion centered around finalizing an output document that captures the parliamentary track discussions, with participants providing feedback on additions (environmental impact of AI, digital inclusion, balancing security with freedom of expression) while some proposed deletions (references to IGF and civil society) were rejected due to lack of consensus.


– **Moving from Talk to Action**: Multiple participants emphasized the urgent need to transition from discussions to concrete implementation, with calls for monitoring mechanisms, progress tracking tables, and specific instruments like treaties or binding conventions to address digital governance challenges.


– **Three P’s Framework – Parliaments, Platforms, and Power**: The core thematic framework focused on the law-making role of parliaments (ensuring human rights compliance), platform responsibilities for information integrity, and the dynamic power relationships between these entities, with cooperation and dialogue identified as fundamental solutions.


– **National Sovereignty vs. Global Cooperation**: Tensions emerged between respecting national sovereignty in digital regulation (allowing countries to regulate according to their own frameworks) and the need for international cooperation, particularly regarding support for Global South and Mediterranean countries in capacity building.


– **Resource Sharing and Knowledge Exchange**: Participants discussed creating systematic ways to share legislative experiences, including proposals for a “digital governance radar” similar to climate policy platforms, and ongoing parliamentary exchanges on AI regulation and digital policy development.


## Overall Purpose:


The discussion aimed to finalize an output document from the parliamentary track of the Internet Governance Forum (IGF), capture key insights from parliamentary discussions on digital governance and AI regulation, and establish concrete next steps for international parliamentary cooperation on technology policy issues.


## Overall Tone:


The discussion maintained a collaborative and constructive tone throughout, characterized by diplomatic language and mutual respect. While there were some tensions around specific content (particularly regarding national sovereignty concerns raised by the Parliamentary Assembly of the Mediterranean), the overall atmosphere remained cooperative. The tone became increasingly urgent and action-oriented as participants emphasized the need to move beyond discussions to concrete implementation, but this urgency was expressed constructively rather than critically. The closing remarks reinforced the positive, forward-looking nature of the collaboration.


Speakers

– **Andy Richardson**: Inter-Parliamentary Union collaborator, involved in creating output documents for parliamentary tracks at IGF


– **Mahabd Al-Nasir**: From Egypt, attending his third international IGF


– **Audience**: Multiple unidentified speakers including a colleague from Algeria representing the Parliamentary Assembly of the Mediterranean, and other participants


– **Celine Bal**: Organizer/coordinator of the parliamentary track, works with IGF initiatives


– **Amira Saber**: From Egypt, has experience developing draft bills on climate policy


– **Anne McCormick**: From EY (Ernst & Young)


Additional speakers:


– No additional speakers were identified beyond those in the provided speaker list. All speakers in the transcript were either named individuals from the list or identified as “Audience” members.


Full session report

# Parliamentary Track Discussion: Finalising Output Document on Digital Governance and Information Integrity


## Executive Summary


This discussion focused on finalising an output document from the parliamentary track at the Internet Governance Forum (IGF), addressing information integrity, online harms, and freedom of expression. The session brought together parliamentarians and stakeholders to review policy recommendations and discuss implementation challenges in digital governance.


## Key Participants and Context


The discussion was facilitated by Andy Richardson from the Inter-Parliamentary Union. Participants included Mahabd Al-Nasir from Egypt (attending his third international IGF), Amira Saber from Egypt with experience in climate policy development, Anne McCormick from EY, and Celine Bal from the parliamentary track. Additional contributions came from audience members, including a representative from Algeria speaking on behalf of the Parliamentary Assembly of the Mediterranean. Celine Bal noted that there are 176 national and regional IGF initiatives globally.


## Core Framework: The Three P’s Approach


Richardson outlined the document’s central framework organised around three areas: Parliaments, Platforms, and Power. This structure addresses parliaments’ law-making roles in ensuring human rights compliance, platforms’ responsibilities for maintaining information integrity, and the power relationships between these entities. The framework emphasises that cooperation and dialogue between parliaments, public authorities, and stakeholders represents the fundamental solution to digital governance challenges.


## Document Development and Feedback Integration


The session focused on refining the output document based on participant feedback. Richardson detailed proposed additions, including references to the environmental impact of AI technologies, digital inclusion provisions for marginalised groups such as women, children, and people with disabilities, and the balance between security measures and freedom of expression.


Some participants had proposed deletions, particularly regarding references to IGF participation and civil society engagement. Richardson indicated these proposals might be set aside unless someone wished to pursue them further, as they did not reflect the majority consensus from the discussions.


## Implementation and Accountability Challenges


Anne McCormick raised questions about document credibility and implementation, asking about speed, concreteness and credibility. She challenged participants to consider committing to monitoring progress or reviewing achievements within six months, asking “How do we make this more than talk?”


Mahabd Al-Nasir echoed these concerns, expressing his desire for a year-over-year tracking system that would document actual achievements rather than simply producing documents without knowing their impact in parliaments, governments, or countries. He suggested creating progress tracking tables.


## Calls for Binding Instruments


An audience member highlighted the urgency expressed by parliamentarians present, calling for moving beyond strategies and frameworks towards “treaties or universally binding conventions.” This intervention elevated the discussion to questions about governance structures and the adequacy of current approaches.


## National Sovereignty and Regional Perspectives


The Parliamentary Assembly of the Mediterranean, represented by the Algerian delegate, outlined three main reservations presented as constructive and cooperative feedback: the importance of respecting national sovereignty and allowing parliaments to regulate digital space according to their own legal, cultural, and social frameworks; the need for balance between digital security and freedom of expression with clearer definitions of harmful content and disinformation; and the requirement for equitable inclusion of Global South and Mediterranean countries with strengthened capacity-building support. They also asked when the final document version would be available.


## Environmental Considerations


Multiple participants emphasised the need for attention to the environmental impact of AI technologies. An audience member called for greater emphasis on environmental impact, noting that AI technologies are particularly energy-intensive. This concern was integrated into the document revisions.


## Knowledge Sharing Proposals


Amira Saber proposed creating a “digital governance radar” platform similar to existing climate policy radar systems. This platform would be developed in collaboration with the Inter-Parliamentary Union and IGF secretariat to provide parliamentarians with comprehensive access to global legislative experiences and digital governance policies.


## Concrete Commitments and Next Steps


Several specific commitments emerged from the discussion:


– The IPU committed to integrating feedback into the output document and tracking parliamentary actions on AI policy across jurisdictions, with an invitation for parliaments to share their AI policy work


– The Parliament of Malaysia committed to hosting a parliamentary event on responsible AI development at the end of November in partnership with the IPU, UNDP, and Commonwealth Parliamentary Association


– Upcoming collaborative sessions were announced, including AI regulation discussions (2:45 to 3:45 in the afternoon in Studio N) and sessions on youth policy visions and indigenous language technology barriers (on the 26th in Vestfold)


– The final output document will become part of the formal IGF record and be distributed to all national parliaments


– Organisers pledged to circulate the final output document and session summaries to all participants


## Conclusion


The discussion addressed practical implementation challenges in digital governance while reviewing policy recommendations. Key themes included the need for concrete accountability mechanisms, environmental considerations in AI policy, knowledge sharing platforms, and balancing international cooperation with national sovereignty. The session produced specific commitments for future collaboration, including the Malaysian parliamentary event and ongoing tracking of AI policy developments across jurisdictions.


Session transcript

Celine Bal: very much. And before I actually give the floor also to Andy, our close collaborator from the Inter-Parliamentary Union, I also wanted to mention that we do have over 176 national and regional IGF initiatives and we also very much encourage members of parliaments not only to take part in our global parliamentary track but also the ones that exist at regional, sub-regional or even national levels. So I would like to give the floor now to Andy so that we discuss a little bit more the next steps for the output document that is resulting from that track.


Andy Richardson: Thank you, Celine, and to everybody who is still with us today. At the end of each parliamentary track at IGF there is an output document which tries to capture and summarize your discussions and then becomes part of the formal record of the parliamentary track, part of the record of this IGF and is also distributed to all national parliaments. And so between Celine, myself and with the help of many others we’ve tried to capture the different points that have come up in your discussions. A draft was circulated this morning and we received a lot of very positive feedback indicating a strong degree of consensus. Also some suggestions for modifications. I’ll say a couple of words about the document and then open the floor if anyone would like to make further observations. So firstly on the output document itself, what does it say? Our discussions here have been largely focused on questions of information integrity, of combating online harms while protecting freedom of expression. And the main ideas can be maybe captured in three P’s. Parliaments, platforms and power. Noting that parliaments of course have a fundamental law-making role that all legislation should be in line with and compliant with international human rights principles and really inviting parliaments to draw upon the best available expertise within the technical community and amongst the whole IGF multi-stakeholder community. There’s also been a lot of discussion of the role of platforms and the particular responsibilities that platforms bear when it comes to information integrity and combating online harms. And from this intersection between parliaments and platforms there are really questions of power, relational power, where that sits. And you’ve heard the discussions yourselves. It’s a very dynamic relationship with different perspectives but it’s very much a live issue of where does the power lie. 
And out of your discussions, the main conclusion that we really heard from you was about the fundamental importance of cooperation and dialogue. Cooperation between parliaments to share experiences, but also cooperation between parliaments, public authorities and the whole range of stakeholders, including the technical sector and the very powerful technology companies. These questions can only be resolved through ongoing dialogue, and so the output document attempts to capture these points. The feedback on the draft made some really interesting and useful suggestions for additions which, with your approval, we propose to take on board: points noting the environmental impact of technologies, particularly AI data centers; reinforcing the notion of digital inclusion, inclusion for women and children but also for other groups at risk of exclusion, such as people with disabilities; and reinforcing some of the points around balancing security with freedom of expression and combating hate speech. So there were a lot of really useful comments which will be integrated into the output document. There were also a couple of proposals for deletions which did not appear to meet with the consensus of the discussions. I'll reference them: there were proposals to delete references to the IGF and participation in international processes, and to delete references to civil society. But it really didn't feel that these were in line with the majority of views during the discussion, so unless someone wishes to pursue and explain those points, we may set them aside. So, with that, these are the main ideas around the output document as a reflection of your discussions. We have a little bit of time, a couple of minutes, if there are any further observations on the draft which you received earlier today and following these comments. Would anyone like to make any further comments? Please raise your hand.


Anne McCormick: Bonjour, it's Anne McCormick from EY again. We provided some written comments, but I had a question: given the importance of speed, concreteness and credibility in these types of statements, from your experience and that of the people in this room, how do we make sure that this statement has weight and credibility? Do we state that we will monitor this, or that we will come back and review progress on each of the points and sub-points in six months? How do we make this more than talk? Talk matters, but talk itself loses the power of its content if it's not followed by action.


Mahabd Al-Nasir: Thank you. A comment from the side. My name is Mahabd Al-Nasir, I'm from Egypt. This is actually the third international IGF I'm attending, and I can say that we always say very great things. So, building on what my colleague was just saying, I would love to have for the next IGF something like a table noting what has already been done, or what we could achieve year over year. So we don't just leave with very good documents without knowing what our parliaments, governments or countries are going to do with them. Thank you.


Celine Bal: Thank you. A colleague from Algeria.


Audience: Thank you, Andy. I will switch to French, please. In relation to the draft declaration of the parliamentary track, the Parliamentary Assembly of the Mediterranean (PAM) wishes to formulate, in a constructive and cooperative spirit, three main reservations. Firstly, respect for national sovereignty. The PAM underlines the importance of allowing each parliament to regulate the digital space according to its own legal, cultural and social frameworks. This requirement is particularly crucial with regard to the proposed legislative approaches on online content and platform regulation. I have given you references to the corresponding paragraphs. Secondly, the need for a balance between digital security and freedom of expression. While reaffirming the PAM's commitment against disinformation, harmful content and serious threats, the PAM calls for guarantees to ensure that fundamental freedoms are not compromised. Greater clarity is also desired in defining terms such as harmful content, disinformation or intimidation. And thirdly, the equitable inclusion of the countries of the South and the Mediterranean. The PAM insists on the need for strengthened support and for developing the capacities of the parliaments of the South and East of the Mediterranean, in order to guarantee a balanced and inclusive implementation of the commitments made. Thanking you for your collaboration and spirit of openness and mutual respect, I would just like to know when we will have the final version of the document. Thank you.

Audience: One of the things that I think needs to be captured out of IGF 2025 is the urgency with which every Member of Parliament who has spoken here is asking for us to define the mission. We have been coming to the IGF and generally speaking about technology, and in the last few years we have been hearing about AI. 
The conversations are general; they are about what parameters we should put in place. But now there is an urgency in terms of what specific instruments are going to be developed so that we can draw the riverbanks for this work that we are doing. Each and every person I have listened to, including Joseph Gordon presenting at the plenary, has a struggle. The developers have a struggle. I heard him speak about the struggle that the creatives have. We definitely have a struggle about data representation, and people are asking: what is that common ground that defines this struggle for all of us, so that it is not a developer-versus-government front, or civil society versus government, or even people versus big tech? We are asking what that common ground is and how it can be put in an instrument that we are able to pursue. To my mind, that means working towards strategies, working towards frameworks, and ultimately towards treaties or universally binding conventions. The urgency is real. Everybody is saying to us: we have spoken too much, let's put ink to paper. I thank you.


Andy Richardson: Thank you very much. Next, from Egypt, please. Thank you so much.


Amira Saber: I will be very quick with my suggestion. In my experience developing the draft climate bill in Egypt, I used Climate Policy Radar, a very good platform where every piece of climate-related legislation anywhere in the world is put on the map and made accessible to every parliamentarian. This gives a wealth of knowledge on the legislation. I suggest the same thing, which could easily be done in collaboration with the Inter-Parliamentary Union and the IGF secretariat: a digital inclusion or digital governance radar, mapping what has been produced by parliamentarians across the world when it comes to digital governance. That would be extremely beneficial to any parliamentarian. Thank you.


Andy Richardson: Thank you very much. And perhaps a final comment at the back of the room, please.


Audience: Thank you so much, dear honorable colleagues and experts. First of all, I want to thank you for the enlightening discussions and inputs we have had over the last two days. It's amazing, and we tackled a lot of issues that are very, very important. In particular, we discussed the risks of digitalization and AI, and we explored threats to human rights, to democracy, to truth itself through deepfakes and algorithmic manipulation. One thing that we did not tackle enough, which I also sent to you, is the need to have a clear vision of the future. Another thing we did not tackle enough is the environmental impact of the technologies we use; AI especially is very energy-intensive. I just wanted to stress that we have to keep thinking about that, because it's an issue that affects the whole world. The only place where we did discuss it was yesterday in the parliament, and I wanted to reinforce that we include that too. Thank you.


Andy Richardson: Thank you very much to everybody for these very useful inputs. We take good note of all of these points. I think the first speakers raised the core point of urgency, of the need to move. This is a shared responsibility amongst every person in the room, whether they are a member of parliament or a different type of participant. There are different ways, and different levels, at which each of us can take action. Firstly, the parliamentarians here are truly and genuinely the central figures in this, and so it's within your national parliaments: what questions are being raised? What hearings are being carried out? And are you able to move the political agenda in your own countries? We've heard a lot about the different resources available within the IGF community, and really an invitation to draw upon the expertise of the technical community, of civil society and of the private sector. Parliaments can engage with their national IGF communities, which exist very broadly, and at the regional level as well, where many guidance documents and fora exist. And at our level, at the IPU, we are committed to continuing the exchange between parliaments, providing fora for the ongoing exchange of experience. I talked a little earlier about our specific focus on AI at the moment, because it is so new and emerging; so many parliaments are asking themselves questions. Tomorrow, in partnership with UNDP, there will be a session on AI regulation where we will hear examples from different jurisdictions, from the European Parliament, from Egypt and from Uruguay, but also from all other parliaments that want to come and share what they have done or are doing. We are currently tracking parliamentary actions on AI policy. There are also other sources tracking what legislation exists, and there are links to all of the committees that are acting and to all of the draft legislation. 
If your parliament is taking action on AI policy, we want to hear about it so that it can be shared. And as the colleague from UNDP referred to earlier, we are about to publicly announce that at the end of November, the Parliament of Malaysia will host, along with the IPU, UNDP and the Commonwealth Parliamentary Association, a major parliamentary event on the role of parliaments in developing responsible AI. We very much see our role as trying to connect the members of parliament who are taking action in this area, so that they can exchange notes on their progress, their challenges and their obstacles, and work together to build the coalitions that you have been describing today. So at all of these different levels, we believe there is space for progress, and frankly I think we are heartened as well by the very high level of participation in this parliamentary track; it's a sign that there is action taking place in your parliaments. We will clean up the output document and circulate it, and we hope that it will be helpful to you as you try to advance your agendas in your parliaments. Céline, maybe a closing word?


Celine Bal: Thank you very much, Andy, also for this very good summary. Perhaps just as advance information: in addition to this output document, we will have a summary of discussions from each of the sessions, and we will of course also track the different references that have been made and the different documents that have been mentioned, especially from the session that happened just before, with all the various stakeholder groups that came together and really wanted to show some concrete collaboration opportunities with you, members of Parliament. And last but not least, before closing the parliamentary track: as I mentioned already, there are other sessions taking place for the rest of the week, organized by different stakeholder groups, and there are specific ones that really invite members of parliament to join. One of them starts tomorrow at 9 here in Studio N, a collaborative session on foundations of AI and cloud policy. There is also one that has been mentioned by the European Parliament, including youth and their policy visions, happening later in the morning in Vestfold. We also have the session mentioned by Andy, co-organized by UNDP with the Inter-Parliamentary Union, in the afternoon from 2.45 to 3.45, on AI regulation. And finally, there's a session on the 26th organized by the Sámi Parliament, together with the Norwegian government and UNESCO, on addressing the barriers to indigenous language technology and AI uptake. So again, explore the program and let us know about any feedback that you may have; we will integrate it into future sessions. Thank you so much.


Andy Richardson: Thank you, Celine. And with those final words, I thank the Parliament of Norway for hosting this parliamentary track, thanks to all of the participants, and particularly to Celine for all of her efforts in putting this together. Thank you. Enjoy the rest of the IGF.


Andy Richardson

Speech speed

125 words per minute

Speech length

1219 words

Speech time

582 seconds

Document captures discussions on information integrity, combating online harms, and protecting freedom of expression through three key areas: parliaments, platforms, and power

Explanation

The output document summarizes parliamentary discussions focusing on information integrity and online harms while protecting freedom of expression. The main ideas are organized around three P’s: parliaments (with their law-making role), platforms (with their responsibilities for information integrity), and power (the dynamic relationship and questions of where power lies between parliaments and platforms).


Evidence

Draft document was circulated and received positive feedback indicating strong consensus, with discussions focused on parliaments’ fundamental law-making role, platform responsibilities, and the dynamic relationship between them


Major discussion point

Parliamentary Track Output Document Development


Topics

Human rights | Legal and regulatory | Sociocultural


Disagreed with

– Audience

Disagreed on

Inclusion of references to IGF and civil society in output document


Suggestions for additions include environmental impact of AI, digital inclusion for marginalized groups, and balancing security with freedom of expression

Explanation

Feedback on the draft document included useful suggestions for additions that would strengthen the output. These additions focus on noting the environmental impact of AI data centers, reinforcing digital inclusion for women, children, and people with disabilities, and reinforcing points about balancing security with freedom of expression while combating hate speech.


Evidence

Specific feedback mentioned environmental impact of AI data centers, digital inclusion for women, children and people with disabilities, and balancing security with freedom of expression and combating hate speech


Major discussion point

Parliamentary Track Output Document Development


Topics

Development | Human rights | Infrastructure


Agreed with

– Audience

Agreed on

Environmental impact of AI technologies requires greater attention


Commitment to continuing parliamentary exchanges and providing forums for experience sharing, particularly on AI regulation

Explanation

The Inter-Parliamentary Union commits to facilitating ongoing exchanges between parliaments and providing forums for sharing experiences, with a specific focus on AI regulation due to its emerging nature. They are tracking parliamentary actions on AI policy and connecting parliamentarians working in this area so they can exchange notes on progress, challenges, and obstacles.


Evidence

IPU has specific focus on AI, tracks parliamentary actions on AI policy, provides links to committees and draft legislation, and will host a major parliamentary event in Malaysia on responsible AI with UNDP and Commonwealth Parliamentary Association


Major discussion point

Future Parliamentary Engagement and Collaboration


Topics

Legal and regulatory | Development | Infrastructure


Agreed with

– Amira Saber
– Celine Bal

Agreed on

Importance of knowledge sharing platforms and collaborative mechanisms


Audience

Speech speed

125 words per minute

Speech length

695 words

Speech time

333 seconds

Need for clear vision of future and greater emphasis on environmental impact of AI technologies which are energy-intensive

Explanation

The speaker emphasized that while many important issues were discussed over two days, including risks of digitalization and AI threats to human rights and democracy, there was insufficient focus on the environmental impact of technologies. They stressed that AI is very energy-intensive and this environmental consideration should be constantly kept in mind as it affects the whole world.


Evidence

AI technologies are very energy-intensive, and this was only briefly discussed in one parliamentary session the previous day


Major discussion point

Parliamentary Track Output Document Development


Topics

Development | Infrastructure | Legal and regulatory


Agreed with

– Andy Richardson

Agreed on

Environmental impact of AI technologies requires greater attention


Urgency for developing specific instruments and frameworks, moving from general discussions to binding conventions and treaties

Explanation

The speaker highlighted the urgency expressed by every Member of Parliament for defining specific missions and developing concrete instruments rather than continuing general discussions. They emphasized the need to move from talking about parameters to creating actual frameworks, strategies, and potentially binding conventions that address the common struggles faced by developers, governments, civil society, and people versus big tech.


Evidence

Every Member of Parliament who spoke expressed urgency, including Joseph Gordon’s presentation about struggles faced by developers and creatives, and the need for common ground rather than adversarial fronts between different stakeholders


Major discussion point

Ensuring Document Credibility and Implementation


Topics

Legal and regulatory | Human rights | Development


Agreed with

– Anne McCormick
– Mahabd Al-Nasir

Agreed on

Need for concrete action and implementation mechanisms beyond producing documents


Importance of respecting national sovereignty and allowing parliaments to regulate digital space according to their own legal, cultural, and social frameworks

Explanation

The Parliamentary Assembly of the Mediterranean emphasized that each parliament should be allowed to regulate the digital space according to its own legal, cultural, and social frameworks. This requirement is particularly crucial regarding proposed legislative approaches on online content and platform regulation, as it respects the diversity of national contexts and approaches.


Evidence

The speaker referenced specific paragraphs in the document and spoke on behalf of the Parliamentary Assembly of the Mediterranean


Major discussion point

National Sovereignty and Cultural Considerations


Topics

Legal and regulatory | Sociocultural | Human rights


Need for balance between digital security and freedom of expression with clearer definitions of harmful content and disinformation

Explanation

While reaffirming commitment against disinformation and harmful content, the Parliamentary Assembly of the Mediterranean called for guarantees that fundamental freedoms are not compromised. They specifically requested greater clarity in defining key terms such as harmful content, disinformation, and intimidation to ensure proper balance between security measures and freedom of expression.


Evidence

The Parliamentary Assembly of the Mediterranean’s formal position against disinformation and harmful content while emphasizing protection of fundamental freedoms


Major discussion point

National Sovereignty and Cultural Considerations


Topics

Human rights | Legal and regulatory | Sociocultural


Requirement for equitable inclusion of Global South and Mediterranean countries with strengthened capacity building support

Explanation

The Parliamentary Assembly of the Mediterranean insisted on the need for strengthened support and capacity development for South and East Mediterranean parliaments. This ensures balanced and inclusive implementation of commitments made, addressing the digital divide and ensuring that all regions can effectively participate in digital governance initiatives.


Evidence

Specific reference to South and East Parliaments of the Mediterranean needing capacity development support


Major discussion point

National Sovereignty and Cultural Considerations


Topics

Development | Legal and regulatory | Infrastructure


Amira Saber

Speech speed

146 words per minute

Speech length

120 words

Speech time

49 seconds

Proposal to create a digital governance radar platform similar to climate policy radar to share legislative knowledge globally

Explanation

Based on her experience developing a draft bill on climate in Egypt using a climate policy radar platform, the speaker suggested creating a similar digital governance radar. This platform would map all digital governance legislation worldwide and make it accessible to parliamentarians, providing a wealth of knowledge on legislation that could be developed in collaboration with the Inter-Parliamentary Union and IGF secretariat.


Evidence

Personal experience using climate policy radar for developing draft bill on climate in Egypt, which provided access to climate-related legislation from around the world


Major discussion point

Parliamentary Track Output Document Development


Topics

Legal and regulatory | Development | Infrastructure


Agreed with

– Andy Richardson
– Celine Bal

Agreed on

Importance of knowledge sharing platforms and collaborative mechanisms


Anne McCormick

Speech speed

121 words per minute

Speech length

108 words

Speech time

53 seconds

Importance of speed, concreteness, and credibility in statements, with need for monitoring and progress review mechanisms

Explanation

The speaker emphasized that given the importance of speed, concreteness, and credibility in policy statements, there needs to be mechanisms to ensure the statement has weight and credibility. She questioned whether they should commit to monitoring progress or reviewing achievements on each point in six months, arguing that talk loses its power if not followed by concrete action.


Evidence

Speaker’s experience at EY and provision of written comments on the document


Major discussion point

Ensuring Document Credibility and Implementation


Topics

Legal and regulatory | Development


Agreed with

– Mahabd Al-Nasir
– Audience

Agreed on

Need for concrete action and implementation mechanisms beyond producing documents


Mahabd Al-Nasir

Speech speed

103 words per minute

Speech length

114 words

Speech time

66 seconds

Need for year-over-year tracking of achievements rather than just producing good documents without follow-up action

Explanation

Based on attending three international IGFs, the speaker observed that while great things are always discussed and good documents are produced, there’s no clear tracking of what gets implemented in parliaments, governments, or countries. He proposed having a tracking table for the next IGF to monitor what has been achieved year over year, ensuring accountability and progress measurement.


Evidence

Personal experience attending three international IGFs and observing the pattern of producing good documents without clear follow-up on implementation


Major discussion point

Ensuring Document Credibility and Implementation


Topics

Legal and regulatory | Development


Agreed with

– Anne McCormick
– Audience

Agreed on

Need for concrete action and implementation mechanisms beyond producing documents


Celine Bal

Speech speed

165 words per minute

Speech length

390 words

Speech time

141 seconds

Encouragement for parliamentarians to engage with national and regional IGF initiatives beyond the global parliamentary track

Explanation

Celine Bal emphasized that there are over 176 national and regional IGF initiatives available for parliamentary engagement. She strongly encouraged members of parliaments to participate not only in the global parliamentary track but also in regional, sub-regional, and national level IGF initiatives to maximize their involvement in internet governance discussions.


Evidence

Specific number of 176 national and regional IGF initiatives currently available


Major discussion point

Future Parliamentary Engagement and Collaboration


Topics

Legal and regulatory | Development | Infrastructure


Agreed with

– Andy Richardson
– Amira Saber

Agreed on

Importance of knowledge sharing platforms and collaborative mechanisms


Upcoming collaborative sessions and events including AI policy discussions and indigenous language technology barriers

Explanation

Celine Bal outlined several upcoming sessions specifically inviting parliamentary participation, including collaborative sessions on AI and cloud policy foundations, youth policy visions, AI regulation co-organized with UNDP, and addressing barriers to indigenous language technology and AI uptake. These sessions represent concrete opportunities for continued parliamentary engagement beyond the current track.


Evidence

Specific sessions mentioned: AI and cloud policy in Studio N, youth policy visions in Vestfold, AI regulation session from 2:45-3:45 with UNDP and IPU, and indigenous language technology session on the 26th with Sámi Parliament, Norwegian government, and UNESCO


Major discussion point

Future Parliamentary Engagement and Collaboration


Topics

Legal and regulatory | Sociocultural | Development


Agreements

Agreement points

Need for concrete action and implementation mechanisms beyond producing documents

Speakers

– Anne McCormick
– Mahabd Al-Nasir
– Audience

Arguments

Importance of speed, concreteness, and credibility in statements, with need for monitoring and progress review mechanisms


Need for year-over-year tracking of achievements rather than just producing good documents without follow-up action


Urgency for developing specific instruments and frameworks, moving from general discussions to binding conventions and treaties


Summary

Multiple speakers emphasized the critical need to move beyond discussions and document production to concrete implementation, monitoring, and accountability mechanisms. They shared concerns about the gap between policy statements and actual action.


Topics

Legal and regulatory | Development


Environmental impact of AI technologies requires greater attention

Speakers

– Andy Richardson
– Audience

Arguments

Suggestions for additions include environmental impact of AI, digital inclusion for marginalized groups, and balancing security with freedom of expression


Need for clear vision of future and greater emphasis on environmental impact of AI technologies which are energy-intensive


Summary

Both speakers recognized that the environmental impact of AI technologies, particularly their energy-intensive nature, needs more emphasis and consideration in policy discussions.


Topics

Development | Infrastructure | Legal and regulatory


Importance of knowledge sharing platforms and collaborative mechanisms

Speakers

– Andy Richardson
– Amira Saber
– Celine Bal

Arguments

Commitment to continuing parliamentary exchanges and providing forums for experience sharing, particularly on AI regulation


Proposal to create a digital governance radar platform similar to climate policy radar to share legislative knowledge globally


Encouragement for parliamentarians to engage with national and regional IGF initiatives beyond the global parliamentary track


Summary

Speakers agreed on the value of creating and maintaining platforms for knowledge sharing, whether through parliamentary exchanges, digital governance radars, or multi-level IGF initiatives to facilitate collaboration and learning.


Topics

Legal and regulatory | Development | Infrastructure


Similar viewpoints

Both speakers expressed frustration with the cycle of producing good policy documents without adequate follow-up mechanisms to ensure implementation and track progress over time.

Speakers

– Anne McCormick
– Mahabd Al-Nasir

Arguments

Importance of speed, concreteness, and credibility in statements, with need for monitoring and progress review mechanisms


Need for year-over-year tracking of achievements rather than just producing good documents without follow-up action


Topics

Legal and regulatory | Development


Both organizational representatives emphasized the importance of ongoing parliamentary engagement through multiple channels and levels, from global to national initiatives.

Speakers

– Andy Richardson
– Celine Bal

Arguments

Commitment to continuing parliamentary exchanges and providing forums for experience sharing, particularly on AI regulation


Upcoming collaborative sessions and events including AI policy discussions and indigenous language technology barriers


Topics

Legal and regulatory | Development | Infrastructure


Unexpected consensus

Environmental impact of AI should be integrated into digital governance discussions

Speakers

– Andy Richardson
– Audience

Arguments

Suggestions for additions include environmental impact of AI, digital inclusion for marginalized groups, and balancing security with freedom of expression


Need for clear vision of future and greater emphasis on environmental impact of AI technologies which are energy-intensive


Explanation

It was unexpected to see environmental concerns emerge as a consensus point in what was primarily a discussion about parliamentary governance and digital policy. This suggests a growing recognition that environmental sustainability must be integrated into all technology policy discussions.


Topics

Development | Infrastructure | Legal and regulatory


Strong agreement on need for practical implementation despite diverse national contexts

Speakers

– Anne McCormick
– Mahabd Al-Nasir
– Audience

Arguments

Importance of speed, concreteness, and credibility in statements, with need for monitoring and progress review mechanisms


Need for year-over-year tracking of achievements rather than just producing good documents without follow-up action


Urgency for developing specific instruments and frameworks, moving from general discussions to binding conventions and treaties


Explanation

Despite representing different regions and contexts, speakers showed remarkable consensus on the need for concrete action and accountability mechanisms, suggesting universal frustration with the gap between policy discussions and implementation.


Topics

Legal and regulatory | Development


Overall assessment

Summary

The discussion revealed strong consensus around three main areas: the urgent need for concrete implementation mechanisms beyond document production, the importance of knowledge-sharing platforms and collaborative frameworks, and the recognition that environmental impacts of AI technologies must be integrated into digital governance discussions.


Consensus level

High level of consensus with constructive engagement. While there were some specific reservations raised (particularly around national sovereignty and cultural considerations), the overall tone was collaborative with speakers building on each other’s ideas rather than opposing them. This suggests a mature policy discussion environment where participants are focused on practical solutions rather than ideological differences. The implications are positive for future parliamentary cooperation on digital governance issues, as the shared recognition of implementation gaps and the value of collaboration provides a strong foundation for concrete action.


Differences

Different viewpoints

Inclusion of references to IGF and civil society in output document

Speakers

– Andy Richardson
– Audience

Arguments

Document captures discussions on information integrity, combating online harms, and protecting freedom of expression through three key areas: parliaments, platforms, and power


There were proposals to delete references to the IGF and participation in international processes, to delete references to civil society


Summary

Some participants proposed removing references to the IGF and civil society from the output document, but Andy Richardson indicated that these deletions did not meet consensus and were not in line with the majority view expressed during discussions.


Topics

Legal and regulatory | Human rights | Development


Unexpected differences

National sovereignty versus international cooperation framework

Speakers

– Audience
– Andy Richardson

Arguments

Importance of respecting national sovereignty and allowing parliaments to regulate digital space according to their own legal, cultural, and social frameworks


Document captures discussions on information integrity, combating online harms, and protecting freedom of expression through three key areas: parliaments, platforms, and power


Explanation

The Parliamentary Assembly of the Mediterranean raised significant concerns about national sovereignty and the need for parliaments to regulate according to their own frameworks. This was unexpected given the collaborative nature of the IGF process and suggests tension between international coordination and national autonomy.


Topics

Legal and regulatory | Sociocultural | Human rights


Overall assessment

Summary

The main disagreements centered on document content (IGF/civil society references), implementation mechanisms (monitoring vs tracking vs treaties), emphasis on environmental issues, and national sovereignty concerns. Most disagreements were procedural rather than substantive.


Disagreement level

Low to moderate disagreement level. While there were some tensions around national sovereignty and document content, most participants shared common goals of moving from discussion to action. The disagreements were primarily about methods and emphasis rather than fundamental objectives, suggesting good potential for resolution through continued dialogue.


Partial agreements

Similar viewpoints

Both speakers expressed frustration with the cycle of producing good policy documents without adequate follow-up mechanisms to ensure implementation and track progress over time.

Speakers

– Anne McCormick
– Mahabd Al-Nasir

Arguments

Importance of speed, concreteness, and credibility in statements, with need for monitoring and progress review mechanisms


Need for year-over-year tracking of achievements rather than just producing good documents without follow-up action


Topics

Legal and regulatory | Development


Both organizational representatives emphasized the importance of ongoing parliamentary engagement through multiple channels and levels, from global to national initiatives.

Speakers

– Andy Richardson
– Celine Bal

Arguments

Commitment to continuing parliamentary exchanges and providing forums for experience sharing, particularly on AI regulation


Upcoming collaborative sessions and events including AI policy discussions and indigenous language technology barriers


Topics

Legal and regulatory | Development | Infrastructure


Takeaways

Key takeaways

Parliamentary discussions should focus on three key areas: parliaments (law-making role compliant with human rights), platforms (responsibilities for information integrity), and power (dynamic relationships between stakeholders)


Cooperation and dialogue are fundamental – between parliaments to share experiences and between parliaments and all stakeholders including technical sector and companies


There is urgent need to move from general discussions to concrete action and specific instruments, with parliamentarians expressing frustration about too much talk without implementation


Environmental impact of AI technologies, particularly energy-intensive data centers, needs greater emphasis in policy discussions


Digital inclusion must encompass women, children, people with disabilities, and other marginalized groups


National sovereignty must be respected, allowing parliaments to regulate digital space according to their own legal, cultural, and social frameworks


Resolutions and action items

Integrate feedback into output document including environmental impact of AI, digital inclusion provisions, and security-freedom of expression balance


IPU committed to continuing parliamentary exchanges and providing forums for ongoing experience sharing


Track parliamentary actions on AI policy and share information about legislation and committees across jurisdictions


Create opportunities for parliamentarians to engage with national and regional IGF initiatives


Organize upcoming collaborative sessions including AI regulation discussions and indigenous language technology barriers


Parliament of Malaysia to host major parliamentary event on responsible AI development in partnership with IPU, UNDP, and Commonwealth Parliamentary Association


Circulate final output document and session summaries to all participants


Unresolved issues

How to ensure document credibility and implementation beyond just producing statements – no concrete monitoring mechanism established


Lack of year-over-year tracking system to measure actual achievements versus commitments


Need for clearer definitions of terms like ‘harmful content,’ ‘disinformation,’ and ‘intimidation’


Timeline for final document version not clearly specified


No specific binding instruments or treaties developed despite expressed urgency


Proposal for digital governance radar platform mentioned but not formally adopted or resourced


Suggested compromises

Balance between digital security and freedom of expression while protecting against disinformation and harmful content


Respect national sovereignty while maintaining international cooperation and human rights compliance


Include references to IGF participation and civil society engagement despite some proposals for deletion


Strengthen support for Global South and Mediterranean countries while maintaining universal applicability of commitments


Move toward specific instruments and frameworks while continuing dialogue and cooperation approaches


Thought provoking comments

Given the importance of speed, concreteness and credibility in these types of statements, from your experience and that of the people in this room, how do we make sure that this statement has weight and credibility? Do we state that we will monitor this or that we will come back and review progress on each of the points and sub points in six months? How do we make this more than talk?

Speaker

Anne McCormick


Reason

This comment cuts to the heart of a fundamental problem with international policy discussions – the gap between rhetoric and action. McCormick challenges the entire premise of creating output documents without accountability mechanisms, forcing participants to confront whether their efforts will have real-world impact.


Impact

This comment created a pivotal shift in the discussion from focusing on document content to questioning the entire framework of how policy recommendations are implemented. It sparked immediate agreement from other participants and led to concrete suggestions for tracking mechanisms and follow-up processes.


I would love to have for the next IGF something like, I don’t know, a table saying that we need to take down what is already done or what we could achieve year over year. So we don’t just get out with the very good documents but we don’t know what they are going to do with it in our parliaments or in our governments or countries or whatever.

Speaker

Mahabd Al-Nasir


Reason

This builds on McCormick’s challenge by proposing a concrete solution – creating accountability through year-over-year progress tracking. It demonstrates the frustration of repeat participants who see the same patterns of discussion without measurable outcomes.


Impact

This comment reinforced the accountability theme and provided a practical framework that other participants could envision implementing. It helped transform abstract concerns about effectiveness into actionable proposals.


One of the things that I think needs to be captured out of IGF 2025 is the urgency with which every Member of Parliament who has spoken here is asking for us to define the mission… We are asking what is that common ground and how can it be put in an instrument that we are able to pursue and to my mind I am thinking about working towards strategies, working towards frameworks and with the need to come to treaties or universally binding conventions and the urgency is real.

Speaker

Unidentified participant


Reason

This comment synthesizes the frustration expressed throughout the discussion and elevates it to a strategic level, calling for binding international instruments rather than voluntary guidelines. It reframes the discussion from technical cooperation to fundamental governance structures.


Impact

This intervention shifted the conversation toward more ambitious policy solutions and highlighted the inadequacy of current soft-law approaches. It introduced the concept of binding treaties, raising the stakes of the discussion significantly.


I suggest the same thing which could be easily done in collaboration with the Inter-Parliamentary Union and the IGF secretariat to have a digital inclusion or digital governance radar… That would be extremely beneficial to any parliamentarian.

Speaker

Amira Saber


Reason

This comment provides a concrete, actionable solution that addresses the knowledge-sharing challenges parliamentarians face. By referencing her successful experience with climate policy radar, she offers a proven model that could be adapted for digital governance.


Impact

This practical suggestion provided a tangible next step that organizers could implement, moving the discussion from abstract concerns to specific solutions. It demonstrated how cross-sector learning could address parliamentarians’ information needs.


The PAM underlines the importance of allowing each parliament to regulate the digital space according to its own legal, cultural and social frameworks… calls for guarantees to ensure that fundamental freedoms are not compromised… insists on the need for strengthened support and for developing the capacities of the parliaments of the Southern and Eastern Mediterranean

Speaker

Audience member from Algeria (Parliamentary Assembly of the Mediterranean)


Reason

This comment introduces crucial tensions between international cooperation and national sovereignty, highlighting how global digital governance frameworks may conflict with regional values and capabilities. It challenges the assumption that one-size-fits-all approaches are appropriate.


Impact

This intervention added complexity to the discussion by highlighting potential conflicts between global standards and local contexts. It forced organizers to acknowledge that consensus might not be as strong as initially assumed and that implementation would need to account for diverse national circumstances.


Overall assessment

These key comments fundamentally transformed the discussion from a routine policy document review into a critical examination of international governance effectiveness. The progression from McCormick’s accountability challenge through Al-Nasir’s tracking proposal to the call for binding treaties created a crescendo of frustration with existing approaches. Saber’s practical radar suggestion and the Algerian representative’s sovereignty concerns added both solutions and complexity. Together, these interventions elevated the conversation from technical cooperation to fundamental questions about how international digital governance should work, creating pressure for more ambitious and accountable approaches while acknowledging the challenges of diverse national contexts.


Follow-up questions

How do we make sure that this statement has weight and credibility? Do we state that we will monitor this or that we will come back and review progress on each of the points and sub points in six months? How do we make this more than talk?

Speaker

Anne McCormick


Explanation

This addresses the critical need for accountability and follow-through on parliamentary commitments, moving beyond discussion to concrete action and measurable outcomes.


Need for a tracking system to monitor what has been achieved year over year from IGF outcomes

Speaker

Mahabd Al-Nasir


Explanation

This suggests creating a systematic approach to track progress on IGF commitments and outcomes across different parliaments and countries to ensure accountability.


When will we have the final version of the document?

Speaker

Audience member from Algeria


Explanation

This is a practical question about timeline and delivery of the output document that needs clarification for participants.


What is that common ground that defines the struggle for all stakeholders so that it is not a developer versus government front or a civil society versus government or even people versus big tech?

Speaker

Unidentified speaker


Explanation

This addresses the need to identify shared challenges and interests across different stakeholder groups to foster collaboration rather than adversarial relationships.


How can common ground be put in an instrument that we are able to pursue – working towards strategies, frameworks, treaties or universally binding conventions?

Speaker

Unidentified speaker


Explanation

This explores the need for concrete legal and policy instruments to address digital governance challenges at national and international levels.


Development of a digital inclusion or digital governance radar platform in collaboration with IPU and IGF secretariat

Speaker

Amira Saber


Explanation

This proposes creating a comprehensive database of digital governance legislation worldwide, similar to climate policy radar, to help parliamentarians access and learn from global legislative experiences.


How to better address and integrate environmental impact of technologies, especially AI’s energy intensity, into digital governance discussions

Speaker

Audience member


Explanation

This highlights the need for more comprehensive consideration of environmental sustainability in digital policy discussions, particularly regarding AI and data centers.


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.