WS #362 Incorporating Human Rights in AI Risk Management

WS #362 Incorporating Human Rights in AI Risk Management

Session at a glance

Summary

This panel discussion focused on incorporating human rights considerations into AI risk management practices, bringing together perspectives from industry, civil society, and multilateral organizations. The session was organized by the Global Network Initiative (GNI), which works at the intersection of technology and human rights, particularly when companies face government requests that may impact freedom of expression or privacy rights.


Company representatives from Google and Telenor Group emphasized that successful human rights integration in AI requires top-level management commitment and comprehensive governance structures. Both organizations highlighted that their AI principles build upon existing commitments to UN Guiding Principles on Business and Human Rights and GNI frameworks. They stressed the importance of operationalizing these principles through specific processes, training, and risk assessment procedures that embed human rights considerations throughout the AI development lifecycle.


The UN Office of the High Commissioner for Human Rights, through their BTEC project, presented findings on the shared responsibility between companies and states in protecting human rights in AI deployment. They noted that AI technology evolution has outpaced current regulatory frameworks and emphasized the need for companies to conduct human rights due diligence while states must ensure adequate protection through appropriate regulations.


Civil society representatives highlighted the particular challenges facing the Global South, where different socioeconomic contexts and varying regulatory environments create unique human rights risks. They emphasized the need for culturally sensitive approaches and context-specific risk assessments, noting that AI systems developed in the Global North may have different implications when deployed in the Global South.


The discussion revealed broad consensus on the necessity of mandatory human rights impact assessments for high-risk AI systems, with participants supporting risk-based regulatory approaches similar to the EU AI Act. However, panelists also acknowledged that human rights frameworks alone may not capture all societal-level impacts of AI, suggesting the need for broader comprehensive approaches that address both individual and collective effects of AI deployment.


Keypoints

## Major Discussion Points:


– **Corporate Human Rights Governance as Foundation**: Multiple panelists emphasized that companies must establish baseline human rights governance structures before integrating AI-specific principles. This includes commitment to UN Guiding Principles, having dedicated human rights programs, and ensuring board-level oversight of human rights risks.


– **Operationalizing Human Rights in AI Risk Assessment**: The discussion focused extensively on practical implementation methods, including red teaming, comprehensive risk assessment frameworks, multi-stakeholder engagement, and the need for culturally sensitive staff who understand local contexts, particularly in Global South deployments.


– **Regulatory Landscape and Mandatory Assessments**: Panelists addressed the evolving regulatory environment, particularly the EU AI Act’s requirements for fundamental rights impact assessments for high-risk AI systems, and debated the merits of legally mandating human rights due diligence versus voluntary corporate initiatives.


– **Global South Perspectives and Context-Specific Challenges**: Significant attention was given to how AI impacts differ between Global North and South, including varying socioeconomic realities, different constitutional rights frameworks, and the need for locally relevant benchmarks and taxonomies rather than one-size-fits-all approaches.


– **Beyond Individual Rights to Societal Impact**: The conversation evolved to consider whether traditional human rights frameworks adequately capture broader societal impacts of AI, such as educational transformation, job displacement, and community-level effects, suggesting the need for more comprehensive assessment approaches.


## Overall Purpose:


The discussion aimed to explore how human rights considerations can be better integrated into AI risk management practices across different stakeholders – companies, multilateral organizations, and civil society – with particular attention to bridging the gap between Global North AI development and Global South deployment contexts.


## Overall Tone:


The discussion maintained a collaborative and constructive tone throughout, characterized by shared commitment to human rights principles despite different organizational perspectives. The tone was professional and solution-oriented, with panelists building on each other’s points rather than presenting conflicting viewpoints. There was a notable shift toward more nuanced complexity as the conversation progressed, moving from foundational principles to practical implementation challenges and ultimately to broader questions about the adequacy of current frameworks for addressing AI’s societal impacts.


Speakers

**Speakers from the provided list:**


– **Min thu Aung** – Accountability and Innovation Manager at the Global Network Initiative (GNI)


– **Alexandria Walden** – Representative from Google (specific title not mentioned in transcript)


– **Elina Thorstrom** – Representative from Telenor Group/DNA (leads AI risk assessment working group)


– **Nathalie Stadelmann** – Representative from OHCHR (Office of the High Commissioner for Human Rights), part of the BTEC project


– **Jhalak Mrignayani Kakkar** – Representative from CCG (founding member of GNI), works on tech law and policy in the Global South, expert in Global Partnership for AI


– **Caitlin Kraft-Buchman** – (Role/organization not clearly specified in transcript, appears to work on gender and AI issues)


– **Audience** – Various audience members asking questions


**Additional speakers:**


– **Byungil Oh** – From Jinbonet, a digital rights organization based in South Korea


– **Richard Ringfield** – From BSR (Business for Social Responsibility)


Full session report

# Human Rights Integration in AI Risk Management: A Multi-Stakeholder Panel Discussion


## Executive Summary


This panel discussion, organized by the Global Network Initiative (GNI) as part of the Internet Governance Forum, brought together diverse stakeholders to examine how human rights considerations can be effectively integrated into artificial intelligence risk management practices. The session featured representatives from major technology companies (Google, Telenor Group), multilateral organizations (UN Office of the High Commissioner for Human Rights), and civil society advocates from both the Global North and South.


The discussion revealed broad agreement on fundamental principles while highlighting significant implementation challenges. Participants agreed that human rights must serve as the foundational framework for AI governance, though the conversation revealed complex questions about how to address both individual and broader societal impacts of AI systems, particularly across different global contexts.


## Session Context and Structure


Min Thu Aung from GNI opened the session by outlining the organization’s role in advancing human rights integration in AI governance. GNI has developed implementation guidelines for their AI and human rights principles and participates in the OECD AI Network of Experts. They are also involved in the B-Tech project’s Generative AI Human Rights Due Diligence initiative, which aims to develop practical guidance for companies conducting human rights impact assessments for AI systems.


The session was structured as a panel discussion followed by Q&A, with some participants joining remotely. Technical difficulties affected portions of the discussion, particularly limiting the contribution from the UN representative.


## Corporate Perspectives on Human Rights Integration


### Google’s Approach to Foundational Frameworks


Alexandria Walden from Google emphasized that human rights must be integrated at the foundational level rather than treated as an add-on. She stated that “human rights has to be at the baseline and then integrated into those processes and frameworks,” arguing that companies need comprehensive baseline human rights governance structures before developing AI-specific principles.


Walden described Google’s operational approach, which includes technical processes such as red teaming, secure AI frameworks, and coalition work with other stakeholders. She emphasized that having principles alone is insufficient without robust operational frameworks that technical teams can implement in their day-to-day work.


### Telenor’s Comprehensive Risk Assessment


Elina Thorström from Telenor Group reinforced the importance of top management commitment for effective implementation. She outlined Telenor’s approach, which builds AI strategy foundations on responsible AI principles with comprehensive governance structures that ensure human rights considerations are embedded throughout the organization.


Thorström emphasized that company values and policies should guide behavior “irrespective of which country we are at,” suggesting that internal ethical frameworks should exceed local regulatory requirements. She described Telenor’s integrated risk assessment approach, which examines human rights, security, privacy, and data governance holistically through cross-organizational collaboration.


## Global South Perspectives and Contextual Challenges


Jhalak Mrignayani Kakkar from the Centre for Communication Governance introduced crucial complexity by highlighting how AI technologies may have dramatically different implications when deployed across different contexts. She noted that “even within the Indian context, within an urban part of India versus a semi-urban part of India versus a rural part of India will differ significantly.”


Kakkar emphasized the need for context-specific approaches to risk assessment that reflect local socioeconomic realities. She argued that meaningful human rights due diligence requires culturally and linguistically sensitive staff who understand local contexts, and stressed the importance of sustained multi-stakeholder engagement between companies, civil society, academia, and governments.


She also noted the heterogeneity in constitutional rights embedding across Global South countries, requiring tailored approaches that consider local legal and cultural frameworks.


## International Human Rights Law Framework


Caitlin Kraft-Buchman from Women@TheTable provided a strong advocacy for rights-based frameworks over company-specific ethical approaches. She argued that “we tend to think of ethical principles, wonderful, and we love them, but they’re very a la carte, whereas human rights frameworks and international human rights law has been agreed by everybody.”


Kraft-Buchman emphasized the need for intentional design-based approaches that include multidisciplinary teams with social scientists, human rights experts, and anthropologists from the development stage. She also identified public procurement as a significant leverage point for deploying rights-respecting AI at scale.


## Regulatory Approaches and Implementation


The discussion revealed general support for regulatory frameworks among participants. Walden expressed Google’s support for risk-based regulatory approaches, particularly for high-risk AI applications, while acknowledging that voluntary approaches alone may be insufficient for widespread industry adoption.


However, Kakkar introduced a nuanced perspective on enforcement, suggesting that the lack of strict enforceability in human rights due diligence might actually encourage more meaningful assessments, as companies may be more willing to identify problems when they know there isn’t an automatic negative consequence.


## Individual Rights versus Societal Impact


A significant discussion point emerged when Richard Ringfield from Business for Social Responsibility questioned whether individual human rights-based approaches could adequately capture AI’s broader societal impacts. He noted that “many of the really big impacts that AI will have will be more at the societal rather than the individual level,” including transformations in education, employment, and social structures.


This intervention prompted acknowledgment from participants that traditional human rights frameworks, while essential, may need to be supplemented with broader societal impact evaluations. Kakkar responded by noting that human rights concepts are being reinterpreted for group and community settings, including developments in group privacy and community data rights.


## Questions and Additional Perspectives


The Q&A session included a question from Byungil Oh about South Korea’s National Human Rights Commission’s AI impact assessment tool and the Korean AI Basic Act, highlighting the global nature of efforts to develop human rights-based AI governance frameworks.


Technical difficulties limited the contribution from Nathalie Stadelmann of the UN Office of the High Commissioner for Human Rights, though she was able to emphasize the importance of shared responsibility between companies and states in protecting human rights in AI development and deployment.


## Key Areas of Consensus


Despite representing diverse stakeholder perspectives, participants showed agreement on several fundamental points:


– Human rights must be foundational to AI governance rather than an afterthought


– Top management commitment is essential for effective implementation


– Operational frameworks and processes are necessary to translate principles into practice


– Multi-stakeholder collaboration is critical for developing effective approaches


– Context-specific considerations are important, particularly for Global South deployment


– Some form of regulatory framework is needed to ensure widespread adoption of rights-respecting practices


## Outstanding Challenges


The discussion highlighted several unresolved challenges:


– Lack of standardized human rights impact assessment methodologies for AI systems


– Uncertainty about appropriate risk thresholds for triggering mitigation measures


– Questions about the adequacy of individual-focused rights frameworks for addressing systemic societal impacts


– Need for context-specific approaches that reflect diverse global realities


– Balancing transparency requirements with practical implementation concerns


## Conclusion


This panel discussion demonstrated both the growing consensus around the importance of human rights integration in AI governance and the significant practical challenges that remain in implementation. While participants agreed on fundamental principles, the conversation revealed the complexity of translating these principles into effective practice across diverse global contexts.


The emphasis on contextual sensitivity, particularly regarding Global South perspectives, represents an important evolution in AI governance conversations. The discussion also highlighted the emerging recognition that comprehensive AI governance may require frameworks that address both individual human rights and broader societal impacts.


The session underscored the ongoing need for multi-stakeholder collaboration, standardized assessment methodologies, and regulatory frameworks that can effectively balance mandatory requirements with practical implementation realities. While significant challenges remain, the broad consensus on fundamental principles provides a foundation for continued progress in integrating human rights considerations into AI risk management practices.


Session transcript

Min thu Aung: ♪♪ ♪♪ ♪♪ Good morning, everyone. Thank you so much for attending our panel on incorporating human rights in AI risk management. My name is Min Aung. I’m the Accountability and Innovation Manager at the Global Network Initiative, or GNI. Our panel today is set in the context of this. Governments around the world, especially in the EU of course, are starting to require tech companies to manage human rights risks in the way that they design and indeed use AI. Of course, companies do have AI-specific tools and principles, but sometimes these can fall short of aligning with international human rights standards. This panel aims to bring together a diverse range of voices from industry, from civil society, and also from multilateral bodies to explore how we can better integrate human rights into AI risk management practices. Perhaps a quick introduction to GNI first. We are a multi-stakeholder initiative. We bring together four constituencies of academics, civil society, companies, and investors for accountability, for shared learning, for collective advocacy on government and company policies and practices, which are at the intersection of technology and human rights. This is particularly relevant when companies face government requests or demands that have an impact on freedom of expression or privacy rights. We have a set of principles, the GNI principles and implementation guidelines, which aim to guide companies on how to respond when receiving government requests or demands that may have an impact on freedom of expression and privacy rights, as well as a law on conducting ongoing due diligence and so on. The GNI principles and implementation guidelines do indeed apply to AI insofar as governments produce mandates on AI services in various shapes or forms and indeed the need for companies to conduct human rights due diligence and, where necessary, impact assessments on AI services. In that vein, we as GNI have indeed been quite active in a variety of fora related to AI and also activities for our membership. For example, we are members of the OECD AI Network of Experts and we have been involved in BTEC’s GenAI Human Rights Due Diligence Project. Within the GNI company assessments, we are exploring the intersection between AI and human rights due diligence. We have also obviously put out many statements of concerns on potential rights violations in relation to AI government mandates in Canada and in India, among others. We have also hosted learning discussions among our members on AI as well as government mandates, the last of which was during our annual learning forum, which was held in DC last year. Last but not least, we have an AI working group within the Global Network Initiative where a selection of GNI members with deep AI experience are, amongst other things, developing a policy brief on government interventions in AI and rights-respecting responses to these mandates. Enough about GNI, now moving on to our panel. Firstly, we will hear from two company panelists from different parts of the Internet stack, along the theme of diversity, on how they integrate human rights considerations into the development and deployment of AI-related products and services. Secondly, we will hear from BTEC themselves on multilateral efforts to promote incorporation of human rights in AI governance. Finally, we will hear the views from civil society panelists on what more can be done by companies and by policymakers, lawmakers, and regulators on incorporating human rights in AI. We will then have a Q&A, and then I will provide a summary, and then we will wrap up. All right, we look like we are on time. Perhaps I will start with my first request for intervention to Alex Walden from Google. I will let you introduce yourself in a bit. Google is obviously an integrated player in the AI ecosystem, for example, developing consumer-facing AI services like Gemini and its predecessors, of course. We also note, of course, that Google is a founding member of GNI and has a dedicated human rights program and has had responsible AI principles since 2018. And, of course, that Google’s approach to human rights due diligence is informed by the UN Guiding Principles as well as the GNI framework. Just a couple of questions, which you may address in any order that you feel comfortable. I guess firstly, how does Google conduct risk assessments across its footprint? And how do you incorporate that human rights are incorporated in these risk assessments? And indeed, looking more externally, how do laws and regulations like the EU AI Act and the AI Basic Act in South Korea influence how human rights are incorporated in your AI risk assessments? And indeed, what advice do you have for other companies to normalize AI risk assessments, human rights within AI risk assessments within their operations? Over to you.


Alexandria Walden: All right. Thank you. Thanks for that question. Thanks to GNI for putting this session together. I think it’s a really important topic. I will try to – I think I can actually get to all of those across a few things. So, I mean, the first thing to say, and it may seem obvious, but I do think it’s important to point out that actually where companies should start with how you integrate human rights into how your AI work is fundamentally to make sure that the company has human rights governance at its baseline, right? So, at Google, we are committed, as you said, to the GNI principles. We’re committed to the UN guiding principles on business and human rights. And what that means for us is that we have a corporate policy that says that these are our values and that we have ways to operationalize these commitments throughout the company. Obviously, it doesn’t mean that we’re doing it perfectly, but what it does mean is that these are values that are set from the highest level of the company, that they are risks that are reviewed by the board, and that we have a human rights program that works to implement and ensure these commitments across our various products and services. And so, having that in place is what allows us to then get to a point where, okay, we also have AI principles on top of that. Our AI principles build on top of our commitments to the UNGPs, our GNI principles. And so, the AI principles are really about a single type of technology that we have that’s integrated across our policies. Our AI principles, again, reinforce the commitment to international law and human rights. And so, that is a reminder to everyone who is doing technical work or who is doing more qualitative trust and safety work or public policy work or legal work across these products that human rights is something we need to be thinking about. And then, at the more operational level, because we have this sort of governance structure and these principles integrated, that means that everyone who is in our technical teams, those who are developing AI, are aware that they should be thinking about what rights-related impacts or risks might arise in their work. And so, we have to set up processes to address that. And so, really, that’s, I think, the biggest piece, is making sure that you have processes and teams in place to operationalize all of those principles. Those teams are the ones that write the policies to ensure we are thinking about how human rights might manifest in AI and content or related to AI and privacy or related to AI and bias and discrimination. All of these things require process and really guidelines. So, you just sort of keep getting at the more granular and granular level of what’s required. But ultimately, human rights has to be at the baseline and then integrated into those processes and frameworks. So, we do things like red teaming or we have the safe framework, the secure AI framework, and that’s sort of some coalition work that we do with other companies. All of that embeds the way we think about human rights but really gets more pragmatic at the operational level for testing and for red teaming, et cetera, to make sure that we are identifying where risks may arise.


Min thu Aung: Elina Thorström, Jhalak Mrignayani Kakkar, Elina Thorström, Min thu Aung, Alexandria Walden and has been a strong advocate of AI ethics and uses AI indeed within its network management, customer services and indeed customer-facing IoT solutions. So if I could pose similar questions to what I posed to Alex, how does Telenor Group indeed conduct AI risk assessments across its footprint and how do you ensure that human rights is incorporated in these risk assessments and how does your exposure to the EU AI Act within Europe but indeed the comparative lack of such laws and regulations within Telenor’s Asian business units influence how human rights are incorporated into AI risk assessments and what advice do you have for companies as well? Thank you.


Elina Thorstrom: Thank you Min and hello everyone. It’s great to be here and talk about human rights and AI, an extremely topical topic currently. And if we look at what Telenor does, I think very similarly what Alex pointed out, that it all comes down to having that top management commitment to responsible AI as well as human rights. And that is at the core of what Telenor does. And for example how we have started building our AI strategy, the foundation of it is really responsible AI. And that’s embedded to everything we do when we develop, deploy AI applications etc. So I think the top management commitment to that is very important in order for us to actually achieve those goals and promote responsible AI throughout the organisation and actually bring that to the structures and procedures as also Alex pointed out. What we have done at DNA for example is that we have driven our AI risk assessment from our Telenor’s responsible AI principles. So actually those principles guide our risk assessment work. However of course AI governance is a much more broader topic than that. So of course we need a lot of more different elements to that besides risk assessment. So for example awareness and training building, we need to have proper tools in use, we need to work together for example with our vendors and with our stakeholders. And we need to have those policies, guidelines as well as principles in place. So it’s a very big topic, comprehensive topic where we need to take into account these different elements. And then if we look at the risk assessments that we do in DNA, we go through very practical level our AI applications and think about the risks very holistically. So we have a very comprehensive view on this. And it has been very rewarding I must say. I lead that working group myself. And we look at human rights perspectives, we look at impacts on security, privacy, we look at data governance. So we do a very holistic view on AI. And we see that very beneficial also because AI is such a large topic. So this is the way we have built our own program. And if we look at our responsible AI principles, the first one actually is promoting incorporating human rights to our procedures. So that’s why it’s core at the risk assessment procedures that we also do. Coming into then to the question of EU’s AI Act and how to apply legislation when AI Act as we know is in EU. But Telenor as mentioned, we operate also in Asia. However our values, our policies, our principles, they guide our work. Irrespective of which country we are at. So that is also at the core of how we do things. Although a part of AI governance and good AI governance is of course to make sure that we are compliant with the regulation. But that’s only a part of it. So the company’s culture, policy, guidelines, those are the ones that actually guide us and the work that we do. Perhaps also to the last question of what would be my suggestions to go forward. I would come back to the commitment part. So I think that’s the important and key element. So that we are actually committed to human rights approach as well as responsible AI. But also the collaboration. So we utilize expertise throughout our organization. And that has helped a lot and also supports our work. And also means that we learn from each other a lot. So this I think would be my… And it’s a journey. So this is not a sprint. So it’s a journey that we are all in this together. And we learn as we go. Thank you.


Min thu Aung: Great, thank you very much. I see a lot of commonalities in the answers. So that’s really great to hear. So perhaps we can move to a different, I guess, different actor within the AI landscape to Natalie from OHCHR, part of the BTEC project. So Natalie, thank you so much for joining remotely. Good to see you. I hope you can hear and see us just fine. So we would love to explore the role of the UN more generally and of BTEC more specifically in this context. So BTEC obviously has produced various outputs on the intersection of human rights and AI. So most notably the taxonomy of human rights risk connected to Gen AI, which was produced in November, which GNI was also involved in. So we would love to hear how do you see the role of multilateral organizations like the UN, like the OECD and others in ensuring the widespread adoption of human rights in AI risk assessments? What role indeed do you see for WSIS and GDC in promoting this adoption? And last but not least, how do you see the kind of global geopolitical divides on human rights in AI impacting this drive and what suggestions do you have for companies that are navigating these divides and changes? Over to you.


Nathalie Stadelmann: Thank you. Thank you, Jean. I hope everybody can hear me. And thank you very much for the invitation and for bringing together this panel. And greetings from Geneva. So just a few words about the BTEC project. It’s a project that was launched by the High Commissioner Office for Human Rights already almost six years ago. And really very much with the goal to translate the guiding principle for the technology sector. So we have produced a lot of guidance in very much a multi-stakeholder fashion. So we are working with GNI and with the OECD and as well with some of the companies through our committee of practice. And I obviously recognize Alex from Google in that space. And just that we are really using the guiding principle as the global standard for business conduct. And in that respect, when it comes to the governments of AI, I couldn’t miss the opportunity because we have the Human Rights Council now ongoing in Geneva. And we just produced BTEC report that was mandated by the Human Rights Council. We just showed the interest on this topic about the shared responsibility of companies that are developing and deploying AI to respect human rights as well as the state’s duty to protect those rights through requiring those companies to respect human rights as well as to access remedy. So maybe I was thinking to just run you through quickly the latest findings that we have in this report. So obviously we know that the key message is that innovation can bring a lot of promises but as well peril for human rights. Especially in terms of the complex human rights challenges that they bring and some of them because of their unforeseen nature. So the report very much acknowledged the speed and scale of which AI technology are evolving and that outspaced the current regulatory framework. Jhalak Mrignayani Kakkar, Elina Thorström, Min Thu Aung, Alexandria Walden Jhalak Mrignayani Kakkar, Elina Thorström, Min Thu Aung, Alexandria Walden Jhalak Mrignayani Kakkar, Elina Thorström, Min Thu Aung, Alexandria Walden Jhalak Mrignayani Kakkar, Elina Thorström, Min Thu Aung, Alexandria Walden Jhalak Mrignayani Kakkar, Elina Thorström, Min Thu Aung, Alexandria Walden Elina Thorström, Min Thu Aung, Alexandria Walden Elina Thorström, Min Thu Aung, Alexandria Walden Elina Thorström, Min Thu Aung, Alexandria Walden Elina Thorström, Min Thu Aung, Alexandria Walden Elina Thorström, Min Thu Aung, Alexandria Walden Elina Thorström, Min Thu Aung, Alexandria Walden Elina Thorström, Min Thu Aung, Alexandria Walden Elina Thorström, Min Thu Aung, Alexandria Walden Elina Thorström, Min Thu Aung, Alexandria Walden Elina Thorstrom, Min Thu Aung, Alexandria Walden Elina Thorström, Min Thu Aung, Alexandria Walden Elina Thorström, Min Thu Aung, Alexandria Walden Elina Thorström, Min Thu Aung, Alexandria Walden Elina Thorström, Min Thu Aung, Alexandria Walden Elina Thorström, Min Thu Aung, Alexandria Walden


Min thu Aung: Elina Thorström, Min Thu Aung, Alexandria Walden Elina Thorström, Min Thu Aung, Alexandria Walden


Caitlin Kraft-Buchman: Thank you so much, Min, and thank you very, very much for including us in this conversation. We began this journey, actually, in 2017 with OHCHR and the Women’s Rights Division, where we convened—we thought we were just convening several professors from FAFL and ETH and one or two people from OHCHR, but 21 lawyers showed up in this room because everybody was so fascinated with this idea that gender and AI together, what did they have to do with one another? So that alliance, in a way, has led us to working with FAFL, somebody doing a master’s thesis on what these human rights impacts are, and to our creating a methodology that’s now been taught at Cambridge, Technical University Munich. We work with African Center for Technology Studies in Kenya, which is a Pan-African university with Chile’s National Center for Artificial Intelligence, and the course actually sits on the CERBON Center for AI website, which looks at a human rights-based approach to the AI lifecycle. We’ve also taught it at CERN, because we found that technologists, of course, want to do the right thing, but they don’t really know—human rights is really quite abstract idea. It’s particularly abstract to people from North America, who tend to think of civil and political rights as being the only—as the primary human rights, and economic and social rights really not really being part of the larger conversation. People are like, really? Are you sure? Right to health? Is that really a thing? So we have this design course, which is really, really focused on—a design course for developers, for data science majors, but now we’ve found that policymakers like it as well, because it’s a conversation. It’s a critical analysis. And what we’ve also found is that a common vocabulary needs to sort of be implemented, because policymakers are even afraid of talking to technologists, and technologists don’t really understand what the policymakers want. So we’re also trying to create a space where people can have conversations, because I think good people want to make this all work for this technology that’s all surrounding us. And this starts with a design-based approach, with intentionality, with understanding what an objective is, that actually why are you making a product, actually what is the impact that you would like to have, through to who should the team be sitting around the table, and when we talk about diversity. In this case, we’re not only talking about diversity of gender. We’re talking geographic diversity, but also multidisciplinary diversity. Where are the social scientists? Where are the human rights experts? Where are the anthropologists? If it’s a medical application, where are the doctors, the nurses, who are the people taking the blood? Really, everybody in all the stakeholders in the lifecycle of a product, because we’re seeing that there’s a lot of siloed-off invention. And then going through there to data discovery, to understanding whether you actually have the data. We know that for health applications, for example, we’ve never really discovered the data on women’s bodies, no less women of the global south, or people of the global south. So what does that mean, then, when you’re deploying, at scale, a health application that actually only has the data for a very, very small demographic, and what are some of the fixes to that? So it’s a question of creating awareness, and creating a conversation. So that’s really what we’re focused on. In terms of, and being intentional, and I think the intentionality is really key, in terms of our work, we would like everyone, of course, to expand beyond compliance. So this notion of principles, we tend to think of ethical principles, wonderful, and we love them, but they’re very a la carte, whereas human rights frameworks and international human rights law has been agreed by everybody, and as a point of departure, it really is a very good place to start, as opposed to one company’s or one academic institution’s idea of what really should be foregrounded or not. So we think that that would actually help everybody work towards really, kind of, systemic rebalance, and maybe even using some of these products to look at the way that these can be positively help, instead of just be deployed. We would say that one thing I just want to say is an opportunity, where we’re also working very deeply, and have for some time, is on procurement, because we know that public procurement, in particular, because we know that it’s a very large part, it’s 13% of the EU GDP. In developing nations, it can be up to 30 to 40% of the GDP. These things are being, really, these products are being deployed at scale, and we think that using really interesting AI deployment levers, we would maybe be able to take products that connect people to services, as opposed to only detect fraud. So right now, we’re sort of in a negative part of how, and all of us want to save money for our governments and ourselves, but it’s also how do we also connect to better quality of life and to services. So we think that that could be a lever, a really interesting deployment, and indeed, we’re working on technical guidelines that, in other words, questions that procurers can ask of vendors from the public sector about, like, oh, are you doing a fairness metric? Right now, if people are asking that, it’s just like, check, yes, we did. You know that didn’t work with Compass, all of you know, but so which fairness metrics? Why did you use that? Did you experiment? Why did you think that was a good idea? And how these conversations really sort of more deeply before things are deployed. And finally, I’ll say we’re also working on human rights and AI benchmark, which is going to be sort of the first machine learning benchmark that’s dealing with an international human rights law framework, which we’re hoping, once it’s put on, you know, it’ll sit on Hugging Face and other open source platforms, developers, machine learning experts can use to understand whether what they’ve created really does match with human rights criteria, international human rights law.


Min thu Aung: Thanks. That’s quite an innovation. Very impressive. Thank you so much for that, Caitlin. Okay, so we’ll now shortly move on to an intervention from Jalak, but before that, we’re just one intervention away from the Q&A session, so for those that are online, I would encourage you to pose your questions already in the chat, if you haven’t done so already, and we will take the online questions first in the spirit of, you know, ensuring everybody online also feels a part of this, you know, a part of this room. So yeah, then we’ll move on. Jalak, if I can, so yeah, Jalak CCG is a founding member of GNI, and within your work on tech law and policy in the Global South, you’ve done extensive research exploring different sort of modalities of AI laws and regulations, including a recent workshop that you did on AI and rule of law with South Asian judiciary members back in November. You’ve also been an active participant as an expert in GPI, the Global Partnership for AI, and of course the recent benefits of AI impact the Global South in ways that are maybe sometimes different to the impacts on the Global North, right? So the relative absence of the dedicated laws and regulations, maybe there’s varying capacity to enforce laws that may or may not exist, different consumer patterns, and perhaps the potential to impact very large populations that may have different levels of AI literacy or indeed digital and media literacy. So the role of Global South governments in protecting or indeed not protecting user rights will be covered in GNI’s policy brief on government interventions in AI that I alluded to a bit earlier on. So yeah, so in your view, when creating local AI laws and regulations, or indeed adapting existing laws and regulations in the context of AI, what can or what should Global South policymakers, lawmakers, and regulators learn from the human rights impacts of companies’ emerging AI activities? Secondly, where do you see opportunities to influence the inclusion of human rights in companies’ risk management processes and policies in the Global South? And last but not least, taking a very particular Global South angle here, why is it so important for the Global South in general and India in particular? Over to you, Jalak.


Jhalak Mrignayani Kakkar: Thank you. Thanks, Min. I think there’s a lot of work happening globally on human rights due diligence, risk assessment of AI systems. A lot of it is concentrated currently in what we call the Global North. And I think there may be, there’s not enough work that’s currently happening in the Global South. and the Global South. Why it’s important that work happens in the Global South is there are different socioeconomic realities, different societal contexts, but also if a lot of these technologies are being developed in the North, we don’t know. They could have very different implications in the South. They’re not being developed and designed keeping in mind those contexts. I think which really underlines the need for human rights due diligence by companies. It underlines the need for human rights risk assessment to be built into governance frameworks that are being designed in the Global South because that will really allow us to operationally, proactively identify risks in a methodical way instead of post-facto reacting to harms. It’ll also help regulators understand what the harms are and really design governance regulatory risk mechanisms accordingly. I think one of the things that there’s now increasing focus amongst academics and civil society in the Global South is while at the global level there’s been a lot of benchmark development, taxonomy development that has happened, I think increasingly many of us in our context are looking at how we can build out more specific benchmarks and taxonomies that more accurately cover the range of risks and harms that arise in our specific context. I think that really is maybe the first step towards enabling an effective human rights due diligence exercises by companies. Many of our AI companies are global companies and very often a lot of their staff is global staff that is not very familiar with local contexts. I think that’s a very key part for academy and civil society in various country contexts to come in and start playing this role of developing benchmarks, taxonomies, but also points to the need for a sustained multi-stakeholder engagement between companies and civil society and academia on one hand so that there can be cross-learning, cross-pollination of ideas, but also with governments as they figure out how to identify harms and conduct human rights risk assessment. I think it’s sometimes hard to articulate what risks should be assessed for and I think we’ve been talking about how human rights frameworks provide a great underpinning and starting point for identification of risks. But what I do want to point to is there is, for instance, a lot of heterogeneity within the global south in terms of what rights are embedded in their constitutions, the extent of embedding of human rights. And while it’s important to pay attention to human rights frameworks, I think we also have to be strategic about perhaps language we are using and how we are encouraging governments in certain contexts to adopt certain ethical principles or frameworks. And we have to think about how we approach some of these conversations and we frame these conversations so that we can reach the intended outcome and impact that we want. So I think we really have to think about how the other sort of question that has repeatedly come up in the Indian context within which I work is how broadly do you define the risk? If there’s too much breadth and too much variety of risks being identified, it can hinder the development of more specific assessment tools and methods, for instance for algorithmic bias, but if it’s too specific, you lose out on the ability to allow a more broad capture of harms that may be arising as these due diligence or risk assessments are conducted. How do you prioritize certain human rights over others, given their mutually affirming character, and I think the way a particular service interacts, even within the Indian context, within an urban part of India versus a semi-urban part of India versus a rural part of India will differ significantly. So I think even within a particular country context, there will have to be various scenarios that are built out in terms of context that the same technology is being deployed in. So I think there are many, many things to think about as these risk assessments are being designed, and I think it’s important to keep that in mind. And that goes back to a point that I raised earlier, that you need culturally and linguistically sensitive stuff, which I mean even within the Indian context, that means in different parts of the country there’s people speaking different languages, so you may need staff involved in this that has a multiplicity of perspectives or to engage with the different challenges that emerge in those contexts. I’ll just close up by pointing to two points. Risk assessment, human rights due diligence. One of the criticisms is the lack of enforceability. But perhaps that’s also where the value is, because perhaps companies are more incentivized to conduct it when they know that there isn’t a negative consequence. But I think one of the challenges that we’ve seen is, at what threshold do you ask for risk mitigation to be undertaken? How do you articulate that? That is a challenge at this moment in time, to specify that threshold. And here I want to point back to the fact that this is where multi-stakeholder conversations and openness become important. But also, what is the level of transparency that we expect from companies? What is the level of transparency governments should require in legislation, regulation they’re designing? Because until there is a level of disclosure, it’s hard to really identify what challenges are emerging and what needs to be articulated more clearly to make this a more meaningful exercise so that we can really ensure that AI is developing in a way that supports human rights rather than starts to impede it.


Min thu Aung: Thank you very much for the talk. Lots of things to consider to make sure that AI development is context-specific and also considers different rights impacts, not only within the global South in general, but also even going in a more granular level between urban and rural impacts, perhaps even. Thank you so much. All right, then we move to the Q&A part of our panel. I don’t actually see any questions in the chat yet, apart from questions about the links that Natalie kindly re-shared, so thank you so much for that, Natalie. Then perhaps moving to the room, are there any questions from the room to our panelists? Yes, I see one from Ben. Any others? Yes. Okay. Richard? Did you have a question too? Yes. People have to line up. Okay, great. All right. Thank you. Go ahead, please. If you could introduce yourself first, and then a question, please. Thank you. Thank you.


Audience: My name is Byungil Oh. I’m from Jinbonet, a digital rights organization based in South Korea. Last year, Korea’s National Human Rights Commission released a human rights impact assessment tool for AI, and I was involved in its development. We conducted a human rights impact assessment on the Human Rights Commission’s pilot system earlier this year. The process wasn’t just about going through a checklist. It involved an exchange of diverse perspectives, so we found it very useful. However, the tool has not yet been widely used, mainly because there is no legal obligation to conduct such assessment. Of course, some companies may conduct internal risk assessment, but independent assessment that include participation from affected parties are rarely carried out. All the Korean past The panel discussed basic AI law last year, which includes a provision related to human rights impact assessment. It only states that efforts should be made to conduct them. It does not mandate them. Korean civil society groups are now calling for mandatory human rights impact assessment for high-risk AI systems. I would like to hear the panelist’s view on legality, legally mandating such assessment.


Min thu Aung: Thank you. That’s a great question. Thank you so much for that. I would propose having one company intervention at least on the question that was posed, mandating human rights due diligence for AI, and I guess in the context of high-risk AI perhaps. And perhaps, yeah, one civil society intervention at least if possible. And Natalie, if you want to jump in, please feel free. Who would like to go first?


Alexandria Walden: I’m happy to just jump in quickly. I think if you have human rights governance inside of a company, then you should be doing ongoing human rights due diligence across all of the activity in your company, and that should also apply to your AI work, one. And then two, with respect to sort of regulations to require human rights due diligence, and then specifically for high-risk application areas, we see that with the EU AI Act, and many companies, including mine, have supported a risk-based approach, which does mandate fundamental rights or human rights impact assessment for high-risk application. So I think that’s something that you will see a lot of support for from industry.


Caitlin Kraft-Buchman: Wonderful. Thank you so much. Yeah, and I’m really happy for the question, because I think that we need to focus basically more on impact than necessarily even risk or harm, but just really say how, from the very get-go, what is the impact on humans? Integrate it, as we’ve just heard it, you know, all the way through the objective and the design, all the way through. As we know, in the HUD area, which is the sort of fundamental rights impact suggestion from the Council of Europe, there’s going to now be a more formal – I’m sure Natalie will speak to that – more formal, because there is no standardized human rights impact assessment anywhere from any body, sort of international body. But that HUD area really brings – I mean, what we’ve done, we’ve worked with the Turing, who did it, and we’ve brought the stakeholder part of it really way up front, and I think that that’s going to really make a huge difference if you do have this sort of multi-stakeholder consultation, this idea of co-creation, really at the get-go. I just want to say two things. I think that we’re going to also go to a right to know, ultimately, in terms of the legislation, and that right to know will be the transparency of what the training data is writ large, right, once we get all the IP issues settled. And then the second thing will be the explainability, that really, at all levels of society, we can kind of understand what’s happening with the algorithm and then also potential redress. Yeah, that’s it.


Min thu Aung: Thank you, Caitlin. Would anyone else like to intervene? Natalie should talk about their impact assessment. I mean, Natalie, would you like to intervene as well, perhaps talking about your impact assessment tools?


Nathalie Stadelmann: Sure. Maybe just to draw, because I didn’t really go into the recommendation of the report, but indeed, to the colleague representing civil society in South Korea, I would invite him to look at the recommendation we have when it comes to states, when indeed there are regulatory requirements requiring, basically, a company to conduct human rights due diligence, and there should be, as well, encouragement that they publish the human rights due diligence and impact assessment that they have implemented. And those regulations should, as well, as much as possible, request companies developing and deploying AI that they verify the data input and the resulting output to ensure that there is proper representation in terms of gender, race, cultural diversity, and basically safeguards against any negative impact linked to possible discriminatory AI outputs and their consequences. So this is a recommendation in the report. And in terms of human rights impact assessment, we have produced, together with the great support of GNI, as well, and that’s part of the resources listed in the session panel, guidance specifically on generative AI, and there is detailed guidance on human rights impact assessment of generative AI. So I would, as well, invite colleagues to have a look at this specific guidance that we produced now a bit more than a year ago. And just a comment, as well, to the colleague on the panel who mentioned India, I wanted just to draw attention that there will be an AI summit in India in February. And that’s very interesting, precisely in terms of bringing global majority perspective into the discussion. And it seems from the documents published so far that the focus will be on open, transparent, and rights-respecting AI development during the summit. And I think it’s very welcome that after countries like the UK and South Korea and France having hosted those past AI summits, that this summit next February in India, I believe, will be really a good opportunity to possibly, because there was this question asked to me about the geopolitical context, as well. I think we have seen, as well, Brazil developing AI regulation, and so as counterbalance in brackets to the developers in the global north.


Min thu Aung: Thank you very much, Nathalie. I appreciate it. We have two and a half minutes left, and I think two questions. So if we could have the questions together, if possible. Okay, great. Thanks, Ben. Richard, please go ahead.


Audience: Richard Ringfield from BSR. I think while a human rights-based approach is really a necessary prerequisite, many of the really big impacts that AI will have will be more at the societal rather than the individual level. So we’re seeing already, for example, shifts in the way education needs to be carried out as a result of generative AI and people’s ability to research and learn. It’s likely that AI will lead to job displacement or shifts in different jobs. So I’m just wondering whether the panel thinks that those sort of bigger societal impacts or risks can be captured by a human rights-based approach, or whether we need to go a bit beyond sort of the individual human rights-based approach to make sure we fully acknowledge all of the risks that come with AI.


Min thu Aung: Wonderful question. Thank you, Richard. This is really quite open to anyone, so would anyone like to intervene there?


Elina Thorstrom: I can, for example, tell about our approach. A very good question, I think. Sorry. Excellent. Very good question. And how we at least see it is that we need to have a very comprehensive approach at Telenor. And it requires, of course, taking into account the human rights, but it is correct, as you say, that that is definitely not enough. So we need to look at AI much more broadly and look at the impacts, what it has also at the company level, and also educate, train, and build awareness to our employees. So all of these are essential part of, in my opinion, on AI governance.


Jhalak Mrignayani Kakkar: Yeah, I agree. Just two points. I think social media platforms have pointed to the need of societal impact assessment. Secondly, existing human rights are being reinterpreted in group and community settings like privacy, group privacy, community rights over data. So I think there’s also a reinterpretation and broadening of perspective required around human rights in the technology context.


Min thu Aung: Thank you very much. We have 30 seconds remaining. So I would like to, unless anyone has any last-minute must-have interventions. No? I would like to close the panel here. Thank you so much to our panelists for taking part and sharing their views. Thank you so much to those participating online and also for the questions that we received. So yeah, as per the IGF requirements, we will be posting up a summary of this related to the session. So please feel free to read there. And with that, I thank everyone again. Appreciate it. Thank you. Thank you. Thank you.


A

Alexandria Walden

Speech speed

182 words per minute

Speech length

691 words

Speech time

227 seconds

Companies need baseline human rights governance before integrating AI principles, with corporate policies committed to UN Guiding Principles and board-level oversight

Explanation

Walden argues that companies must establish fundamental human rights governance structures as a foundation before layering on AI-specific principles. This includes having corporate policies that commit to international standards like the UN Guiding Principles, with risks reviewed at the board level and human rights programs that implement commitments across products and services.


Evidence

Google’s commitment to GNI principles and UN Guiding Principles, with AI principles building on top of these commitments, corporate policy setting values from highest company level, and board-level risk review


Major discussion point

Corporate Human Rights Governance and AI Risk Management


Topics

Human rights | Legal and regulatory


Agreed with

– Elina Thorstrom

Agreed on

Top management commitment is essential for effective human rights integration in AI


Technical teams need processes and guidelines to operationalize human rights principles through red teaming, secure AI frameworks, and coalition work

Explanation

Walden emphasizes that having principles is insufficient without operational processes that enable technical teams to implement human rights considerations. This requires specific frameworks, testing methodologies, and collaborative approaches to identify and address rights-related risks in AI development.


Evidence

Red teaming processes, secure AI framework (SAIF), coalition work with other companies, and policies ensuring consideration of human rights in AI content, privacy, bias and discrimination


Major discussion point

Operational Implementation of Human Rights in AI Development


Topics

Human rights | Infrastructure


Agreed with

– Elina Thorstrom

Agreed on

Operational processes and frameworks are necessary to implement human rights principles in AI development


Companies should conduct ongoing human rights due diligence across all activities, including AI work, with support for risk-based regulatory approaches

Explanation

Walden advocates for comprehensive human rights due diligence that encompasses all company activities, including AI development and deployment. She supports regulatory frameworks that mandate human rights impact assessments for high-risk AI applications, viewing this as a reasonable and necessary approach.


Evidence

Support for EU AI Act’s risk-based approach that mandates fundamental rights impact assessment for high-risk applications, industry support for such regulatory requirements


Major discussion point

Corporate Human Rights Governance and AI Risk Management


Topics

Human rights | Legal and regulatory


Agreed with

– Audience

Agreed on

Support for risk-based regulatory approaches with mandatory human rights assessments for high-risk AI


Disagreed with

– Audience

Disagreed on

Regulatory enforcement vs voluntary approaches


E

Elina Thorstrom

Speech speed

132 words per minute

Speech length

745 words

Speech time

338 seconds

Top management commitment to responsible AI and human rights is essential, with AI strategy foundations built on responsible AI principles

Explanation

Thorstrom argues that successful integration of human rights into AI development requires commitment from the highest levels of company leadership. She emphasizes that responsible AI must be the foundation of AI strategy and embedded throughout organizational structures and procedures.


Evidence

Telenor’s AI strategy foundation built on responsible AI, responsible AI principles guiding risk assessment work at DNA, top management commitment driving responsible AI throughout organization


Major discussion point

Corporate Human Rights Governance and AI Risk Management


Topics

Human rights | Legal and regulatory


Agreed with

– Alexandria Walden

Agreed on

Top management commitment is essential for effective human rights integration in AI


Comprehensive risk assessment should examine human rights, security, privacy, and data governance holistically through cross-organizational collaboration

Explanation

Thorstrom advocates for a holistic approach to AI risk assessment that goes beyond human rights to include security, privacy, and data governance considerations. She emphasizes the importance of utilizing expertise throughout the organization and collaborative learning processes.


Evidence

DNA’s comprehensive AI application review process examining human rights, security, privacy, and data governance; cross-organizational collaboration and expertise utilization


Major discussion point

Operational Implementation of Human Rights in AI Development


Topics

Human rights | Cybersecurity | Legal and regulatory


Agreed with

– Alexandria Walden

Agreed on

Operational processes and frameworks are necessary to implement human rights principles in AI development


AI impacts extend beyond individual rights to societal level changes in education, employment, and social structures requiring comprehensive approaches

Explanation

Thorstrom acknowledges that while human rights approaches are necessary, AI’s impacts on society require broader consideration including company-level impacts and employee education. She advocates for comprehensive AI governance that addresses these wider societal implications.


Evidence

Need for employee education, training, and awareness building; comprehensive approach beyond human rights at company level


Major discussion point

Broader Societal Impact Assessment


Topics

Human rights | Economic | Sociocultural


Agreed with

– Jhalak Mrignayani Kakkar
– Audience

Agreed on

AI impacts extend beyond individual rights to broader societal implications


Disagreed with

– Jhalak Mrignayani Kakkar
– Audience

Disagreed on

Scope of impact assessment – individual vs societal level


C

Caitlin Kraft-Buchman

Speech speed

164 words per minute

Speech length

1316 words

Speech time

480 seconds

Human rights frameworks provide better foundation than a la carte ethical principles since they represent globally agreed standards

Explanation

Kraft-Buchman argues that international human rights law offers a superior foundation for AI governance compared to individual companies’ or institutions’ ethical principles. She emphasizes that human rights frameworks have been agreed upon by all nations and provide a comprehensive starting point rather than selective ethical approaches.


Evidence

International human rights law agreed by everybody as point of departure, contrast with a la carte ethical principles that vary by company or institution


Major discussion point

Corporate Human Rights Governance and AI Risk Management


Topics

Human rights | Legal and regulatory


Design-based approach with intentionality is needed, including multidisciplinary teams with social scientists, human rights experts, and anthropologists

Explanation

Kraft-Buchman advocates for intentional design processes that bring together diverse expertise from the beginning of AI development. She emphasizes the need for multidisciplinary teams that include not just technologists but also social scientists, human rights experts, and other relevant specialists depending on the application area.


Evidence

Design course methodology taught at Cambridge, Technical University Munich, African Center for Technology Studies, Chile’s National Center for AI; emphasis on geographic and multidisciplinary diversity including doctors, nurses for health applications


Major discussion point

Operational Implementation of Human Rights in AI Development


Topics

Human rights | Sociocultural | Development


Public procurement represents significant deployment lever (13% EU GDP, 30-40% developing nations GDP) for connecting people to services rather than just detecting fraud

Explanation

Kraft-Buchman highlights public procurement as a major opportunity for positive AI deployment, noting its substantial economic impact. She advocates for using procurement processes to deploy AI systems that connect people to services and improve quality of life, rather than focusing primarily on cost-saving measures like fraud detection.


Evidence

Public procurement statistics: 13% of EU GDP, 30-40% of GDP in developing nations; current focus on fraud detection versus potential for service connection


Major discussion point

Multilateral Organizations and Global Cooperation


Topics

Economic | Development | Legal and regulatory


J

Jhalak Mrignayani Kakkar

Speech speed

124 words per minute

Speech length

1058 words

Speech time

511 seconds

Global South faces different socioeconomic realities where technologies developed in the North may have different implications, requiring specific benchmarks and taxonomies

Explanation

Kakkar argues that AI technologies developed in the Global North may have vastly different impacts when deployed in Global South contexts due to different socioeconomic conditions and societal structures. This necessitates the development of context-specific risk assessment tools and frameworks rather than relying solely on Global North-developed standards.


Evidence

Different socioeconomic realities and societal contexts in Global South, technologies not designed keeping those contexts in mind, increasing focus on building specific benchmarks and taxonomies for local contexts


Major discussion point

Global South Perspectives and Context-Specific Challenges


Topics

Human rights | Development | Sociocultural


Multi-stakeholder engagement between companies, civil society, academia, and governments is essential for cross-learning and context-appropriate solutions

Explanation

Kakkar emphasizes the critical need for sustained collaboration between different stakeholders to ensure effective human rights due diligence in AI. She argues that global companies often lack familiarity with local contexts, making engagement with local civil society and academia essential for developing appropriate solutions.


Evidence

Many AI companies are global with staff unfamiliar with local contexts, need for sustained multi-stakeholder engagement for cross-learning and cross-pollination of ideas


Major discussion point

Global South Perspectives and Context-Specific Challenges


Topics

Human rights | Development | Legal and regulatory


Agreed with

– Nathalie Stadelmann
– Min thu Aung

Agreed on

Multi-stakeholder collaboration is essential for effective AI governance


Strategic framing of conversations is needed given heterogeneity in constitutional rights embedding across Global South countries

Explanation

Kakkar points out that Global South countries have varying degrees of human rights embedding in their constitutions and legal frameworks. This requires strategic approaches to how human rights and AI conversations are framed to achieve intended outcomes while respecting different national contexts and legal traditions.


Evidence

Heterogeneity within Global South in constitutional embedding of human rights, need for strategic language and framing of conversations to reach intended impact


Major discussion point

Global South Perspectives and Context-Specific Challenges


Topics

Human rights | Legal and regulatory | Sociocultural


Human rights due diligence allows proactive risk identification rather than post-facto harm reaction, requiring culturally and linguistically sensitive staff

Explanation

Kakkar argues that proper human rights due diligence enables organizations to identify and address potential harms before they occur, rather than reacting after damage is done. She emphasizes that this requires staff who understand local cultural and linguistic contexts, which is particularly important in diverse countries like India.


Evidence

Proactive identification of risks versus post-facto reaction to harms, need for culturally and linguistically sensitive staff, example of different languages spoken in different parts of India requiring multiplicity of perspectives


Major discussion point

Operational Implementation of Human Rights in AI Development


Topics

Human rights | Sociocultural | Development


Legal mandates for human rights impact assessments face implementation challenges without enforcement mechanisms, requiring transparency thresholds and disclosure requirements

Explanation

Kakkar acknowledges the tension between mandatory human rights assessments and their practical implementation. She notes that while lack of enforceability may actually encourage company participation, there remain challenges in defining thresholds for risk mitigation and determining appropriate levels of transparency and disclosure.


Evidence

Criticism of lack of enforceability but potential value in encouraging company participation, challenges in articulating thresholds for risk mitigation, need for transparency and disclosure requirements


Major discussion point

Regulatory Frameworks and Legal Requirements


Topics

Human rights | Legal and regulatory


Human rights concepts are being reinterpreted for group and community settings, including group privacy and community data rights

Explanation

Kakkar argues that traditional individual-focused human rights frameworks are being expanded and reinterpreted to address collective and community impacts of AI technologies. This includes concepts like group privacy and community rights over data that go beyond individual rights protections.


Evidence

Social media platforms pointing to need for societal impact assessment, reinterpretation of privacy as group privacy, community rights over data


Major discussion point

Broader Societal Impact Assessment


Topics

Human rights | Legal and regulatory | Sociocultural


Agreed with

– Elina Thorstrom
– Audience

Agreed on

AI impacts extend beyond individual rights to broader societal implications


Disagreed with

– Elina Thorstrom
– Audience

Disagreed on

Scope of impact assessment – individual vs societal level


N

Nathalie Stadelmann

Speech speed

138 words per minute

Speech length

839 words

Speech time

363 seconds

AI technology evolution outpaces current regulatory frameworks, requiring shared responsibility between companies and states for human rights protection

Explanation

Stadelmann argues that the rapid pace of AI development has created a gap where existing regulatory frameworks cannot keep up with technological advancement. This necessitates a shared responsibility model where both companies and states have obligations to protect human rights in AI development and deployment.


Evidence

BTEC report mandated by Human Rights Council showing speed and scale of AI evolution outpacing regulatory frameworks, shared responsibility between companies developing/deploying AI and states’ duty to protect rights


Major discussion point

Regulatory Frameworks and Legal Requirements


Topics

Human rights | Legal and regulatory


Regulations should require publication of human rights due diligence and verification of data input/output for proper representation and non-discrimination

Explanation

Stadelmann advocates for regulatory requirements that mandate transparency in human rights due diligence processes and verification of AI systems’ data inputs and outputs. This includes ensuring proper representation across gender, race, and cultural diversity while implementing safeguards against discriminatory outcomes.


Evidence

BTEC report recommendations for regulatory requirements on publishing human rights due diligence, verification of data input and output for gender, race, cultural diversity representation, safeguards against discriminatory AI outputs


Major discussion point

Regulatory Frameworks and Legal Requirements


Topics

Human rights | Legal and regulatory


UN and multilateral bodies play crucial roles in translating guiding principles for technology sector through multi-stakeholder guidance development

Explanation

Stadelmann describes the UN’s role, particularly through BTEC, in translating broad human rights principles into practical guidance for the technology sector. This involves multi-stakeholder collaboration with organizations like GNI and OECD to develop actionable frameworks for companies.


Evidence

BTEC project launched 6 years ago to translate guiding principles for technology sector, multi-stakeholder work with GNI, OECD, and companies through committee of practice


Major discussion point

Multilateral Organizations and Global Cooperation


Topics

Human rights | Legal and regulatory


Agreed with

– Jhalak Mrignayani Kakkar
– Min thu Aung

Agreed on

Multi-stakeholder collaboration is essential for effective AI governance


Global AI summits, particularly upcoming India summit, provide opportunities to bring global majority perspectives into rights-respecting AI development discussions

Explanation

Stadelmann highlights the importance of global AI summits in fostering international cooperation on AI governance, with particular emphasis on the upcoming India summit as an opportunity to center Global South perspectives. She sees this as a counterbalance to previous summits hosted by Global North countries.


Evidence

AI summit in India in February focusing on open, transparent, and rights-respecting AI development, contrast with previous summits in UK, South Korea, France, Brazil developing AI regulation as counterbalance to Global North developers


Major discussion point

Multilateral Organizations and Global Cooperation


Topics

Human rights | Development | Legal and regulatory


M

Min thu Aung

Speech speed

133 words per minute

Speech length

1969 words

Speech time

881 seconds

Multi-stakeholder initiatives like GNI are essential for accountability and collective advocacy at the intersection of technology and human rights

Explanation

Min thu Aung argues that organizations like GNI, which bring together academics, civil society, companies, and investors, play a crucial role in ensuring accountability and shared learning. These initiatives are particularly important when companies face government requests that impact freedom of expression or privacy rights.


Evidence

GNI brings together four constituencies (academics, civil society, companies, investors) for accountability, shared learning, collective advocacy; GNI principles guide companies on government requests impacting freedom of expression and privacy


Major discussion point

Multilateral Organizations and Global Cooperation


Topics

Human rights | Legal and regulatory


Agreed with

– Jhalak Mrignayani Kakkar
– Nathalie Stadelmann

Agreed on

Multi-stakeholder collaboration is essential for effective AI governance


AI governance requires active engagement across multiple international forums and policy development processes

Explanation

Min thu Aung emphasizes that effective AI governance necessitates participation in various international bodies and collaborative projects. This includes membership in expert networks, involvement in due diligence projects, and development of policy briefs on government interventions in AI.


Evidence

GNI membership in OECD AI Network of Experts, involvement in BTEC’s GenAI Human Rights Due Diligence Project, AI working group developing policy brief on government interventions in AI


Major discussion point

Multilateral Organizations and Global Cooperation


Topics

Human rights | Legal and regulatory


Companies need practical guidance on incorporating human rights into AI risk assessments across different regulatory environments

Explanation

Min thu Aung highlights the need for companies to understand how to conduct human rights-informed risk assessments while navigating different regulatory frameworks like the EU AI Act. This requires practical advice on normalizing human rights considerations within AI operations across various jurisdictions.


Evidence

Questions posed to panelists about conducting risk assessments, incorporating human rights, navigating EU AI Act and other regulations, advice for normalizing AI risk assessments


Major discussion point

Corporate Human Rights Governance and AI Risk Management


Topics

Human rights | Legal and regulatory


A

Audience

Speech speed

145 words per minute

Speech length

324 words

Speech time

133 seconds

Legal mandates for human rights impact assessments are necessary because voluntary approaches lack widespread adoption

Explanation

The audience member from South Korea argues that while human rights impact assessment tools exist and can be valuable, they are not widely used without legal obligations. Even when companies conduct internal risk assessments, independent assessments with affected party participation are rarely carried out.


Evidence

Korea’s National Human Rights Commission human rights impact assessment tool not widely used due to lack of legal obligation, Korean AI Basic Act only states efforts should be made rather than mandating assessments


Major discussion point

Regulatory Frameworks and Legal Requirements


Topics

Human rights | Legal and regulatory


Agreed with

– Alexandria Walden

Agreed on

Support for risk-based regulatory approaches with mandatory human rights assessments for high-risk AI


Disagreed with

– Alexandria Walden

Disagreed on

Regulatory enforcement vs voluntary approaches


AI impacts extend beyond individual human rights to broader societal transformations requiring comprehensive assessment approaches

Explanation

The audience member from BSR argues that while human rights approaches are necessary, many significant AI impacts occur at societal rather than individual levels. These include changes in education systems, job displacement, and shifts in social structures that may not be fully captured by traditional individual-focused human rights frameworks.


Evidence

Examples of societal impacts: shifts in education due to generative AI affecting research and learning capabilities, job displacement and shifts in employment patterns


Major discussion point

Broader Societal Impact Assessment


Topics

Human rights | Economic | Sociocultural


Agreed with

– Elina Thorstrom
– Jhalak Mrignayani Kakkar

Agreed on

AI impacts extend beyond individual rights to broader societal implications


Disagreed with

– Elina Thorstrom
– Jhalak Mrignayani Kakkar

Disagreed on

Scope of impact assessment – individual vs societal level


Agreements

Agreement points

Top management commitment is essential for effective human rights integration in AI

Speakers

– Alexandria Walden
– Elina Thorstrom

Arguments

Companies need baseline human rights governance before integrating AI principles, with corporate policies committed to UN Guiding Principles and board-level oversight


Top management commitment to responsible AI and human rights is essential, with AI strategy foundations built on responsible AI principles


Summary

Both speakers emphasize that successful integration of human rights into AI requires commitment from the highest levels of company leadership, with corporate policies and governance structures established at the board level


Topics

Human rights | Legal and regulatory


Operational processes and frameworks are necessary to implement human rights principles in AI development

Speakers

– Alexandria Walden
– Elina Thorstrom

Arguments

Technical teams need processes and guidelines to operationalize human rights principles through red teaming, secure AI frameworks, and coalition work


Comprehensive risk assessment should examine human rights, security, privacy, and data governance holistically through cross-organizational collaboration


Summary

Both speakers agree that having principles alone is insufficient and that companies need specific operational processes, frameworks, and collaborative approaches to effectively implement human rights considerations in AI development


Topics

Human rights | Infrastructure | Legal and regulatory


Support for risk-based regulatory approaches with mandatory human rights assessments for high-risk AI

Speakers

– Alexandria Walden
– Audience

Arguments

Companies should conduct ongoing human rights due diligence across all activities, including AI work, with support for risk-based regulatory approaches


Legal mandates for human rights impact assessments are necessary because voluntary approaches lack widespread adoption


Summary

There is agreement that regulatory frameworks requiring human rights impact assessments for high-risk AI applications are necessary and supported by industry, as voluntary approaches have proven insufficient


Topics

Human rights | Legal and regulatory


Multi-stakeholder collaboration is essential for effective AI governance

Speakers

– Jhalak Mrignayani Kakkar
– Nathalie Stadelmann
– Min thu Aung

Arguments

Multi-stakeholder engagement between companies, civil society, academia, and governments is essential for cross-learning and context-appropriate solutions


UN and multilateral bodies play crucial roles in translating guiding principles for technology sector through multi-stakeholder guidance development


Multi-stakeholder initiatives like GNI are essential for accountability and collective advocacy at the intersection of technology and human rights


Summary

All three speakers emphasize the critical importance of bringing together diverse stakeholders including companies, civil society, academia, and governments to develop effective AI governance frameworks


Topics

Human rights | Legal and regulatory


AI impacts extend beyond individual rights to broader societal implications

Speakers

– Elina Thorstrom
– Jhalak Mrignayani Kakkar
– Audience

Arguments

AI impacts extend beyond individual rights to societal level changes in education, employment, and social structures requiring comprehensive approaches


Human rights concepts are being reinterpreted for group and community settings, including group privacy and community data rights


AI impacts extend beyond individual human rights to broader societal transformations requiring comprehensive assessment approaches


Summary

There is consensus that AI’s impacts go beyond individual human rights to encompass broader societal changes including education, employment, and community structures, requiring expanded assessment frameworks


Topics

Human rights | Economic | Sociocultural


Similar viewpoints

Both speakers advocate for using established international human rights frameworks as the foundation for AI governance rather than relying on individual companies’ ethical principles, emphasizing the universal agreement and comprehensive nature of human rights law

Speakers

– Caitlin Kraft-Buchman
– Nathalie Stadelmann

Arguments

Human rights frameworks provide better foundation than a la carte ethical principles since they represent globally agreed standards


UN and multilateral bodies play crucial roles in translating guiding principles for technology sector through multi-stakeholder guidance development


Topics

Human rights | Legal and regulatory


Both speakers emphasize the need for context-specific approaches to AI development that consider diverse perspectives and local realities, requiring multidisciplinary expertise and culturally sensitive frameworks

Speakers

– Jhalak Mrignayani Kakkar
– Caitlin Kraft-Buchman

Arguments

Global South faces different socioeconomic realities where technologies developed in the North may have different implications, requiring specific benchmarks and taxonomies


Design-based approach with intentionality is needed, including multidisciplinary teams with social scientists, human rights experts, and anthropologists


Topics

Human rights | Development | Sociocultural


Both speakers support comprehensive human rights due diligence requirements for AI systems, with regulatory frameworks that mandate transparency and verification processes to ensure non-discriminatory outcomes

Speakers

– Alexandria Walden
– Nathalie Stadelmann

Arguments

Companies should conduct ongoing human rights due diligence across all activities, including AI work, with support for risk-based regulatory approaches


Regulations should require publication of human rights due diligence and verification of data input/output for proper representation and non-discrimination


Topics

Human rights | Legal and regulatory


Unexpected consensus

Industry support for mandatory human rights regulations

Speakers

– Alexandria Walden
– Audience

Arguments

Companies should conduct ongoing human rights due diligence across all activities, including AI work, with support for risk-based regulatory approaches


Legal mandates for human rights impact assessments are necessary because voluntary approaches lack widespread adoption


Explanation

It is somewhat unexpected to see strong alignment between a major tech company representative and civil society on the need for mandatory regulatory requirements, as industry typically resists additional regulatory burdens. This suggests a maturing recognition that voluntary approaches are insufficient


Topics

Human rights | Legal and regulatory


Acknowledgment of limitations in current human rights frameworks for AI

Speakers

– Jhalak Mrignayani Kakkar
– Elina Thorstrom
– Audience

Arguments

Human rights concepts are being reinterpreted for group and community settings, including group privacy and community data rights


AI impacts extend beyond individual rights to societal level changes in education, employment, and social structures requiring comprehensive approaches


AI impacts extend beyond individual human rights to broader societal transformations requiring comprehensive assessment approaches


Explanation

There is unexpected consensus across different stakeholder types that traditional individual-focused human rights frameworks may be insufficient for addressing AI’s broader societal impacts, suggesting a need for new approaches beyond established human rights paradigms


Topics

Human rights | Economic | Sociocultural


Overall assessment

Summary

There is strong consensus on the need for top management commitment, operational frameworks for implementation, multi-stakeholder collaboration, and regulatory requirements for human rights in AI. Speakers also agree that AI impacts extend beyond individual rights to broader societal implications requiring expanded assessment approaches.


Consensus level

High level of consensus across diverse stakeholders (industry, civil society, multilateral organizations, Global South perspectives) on fundamental principles and approaches, with surprising alignment on the need for mandatory regulatory frameworks. This suggests the field is maturing toward shared understanding of necessary governance structures, though implementation challenges remain regarding context-specific applications and enforcement mechanisms.


Differences

Different viewpoints

Scope of impact assessment – individual vs societal level

Speakers

– Elina Thorstrom
– Jhalak Mrignayani Kakkar
– Audience

Arguments

AI impacts extend beyond individual rights to societal level changes in education, employment, and social structures requiring comprehensive approaches


Human rights concepts are being reinterpreted for group and community settings, including group privacy and community data rights


AI impacts extend beyond individual human rights to broader societal transformations requiring comprehensive assessment approaches


Summary

While there’s agreement that AI impacts go beyond individual rights, there are different views on how to address this. Thorstrom emphasizes comprehensive company-level approaches, Kakkar focuses on reinterpreting existing human rights frameworks for group settings, and the audience member suggests that traditional human rights frameworks may be insufficient for capturing societal-level changes.


Topics

Human rights | Economic | Sociocultural


Regulatory enforcement vs voluntary approaches

Speakers

– Alexandria Walden
– Audience

Arguments

Companies should conduct ongoing human rights due diligence across all activities, including AI work, with support for risk-based regulatory approaches


Legal mandates for human rights impact assessments are necessary because voluntary approaches lack widespread adoption


Summary

While Walden supports risk-based regulatory approaches for high-risk AI applications, the audience member from South Korea argues more broadly that legal mandates are necessary because voluntary approaches are not widely adopted, suggesting a difference in views on the extent of regulatory requirements needed.


Topics

Human rights | Legal and regulatory


Unexpected differences

Adequacy of human rights frameworks for AI governance

Speakers

– Caitlin Kraft-Buchman
– Audience

Arguments

Human rights frameworks provide better foundation than a la carte ethical principles since they represent globally agreed standards


AI impacts extend beyond individual human rights to broader societal transformations requiring comprehensive assessment approaches


Explanation

This disagreement is unexpected because both parties are advocating for comprehensive approaches to AI governance, yet they differ on whether existing human rights frameworks are sufficient. Kraft-Buchman strongly advocates for human rights frameworks as superior to other approaches, while the audience member suggests these frameworks may be inadequate for capturing broader societal impacts, potentially requiring approaches that go beyond traditional human rights paradigms.


Topics

Human rights | Economic | Sociocultural


Overall assessment

Summary

The discussion shows remarkable consensus on fundamental principles – all speakers agree that human rights should be central to AI governance, that multi-stakeholder collaboration is essential, and that proactive risk assessment is preferable to reactive harm mitigation. The main disagreements center on implementation approaches rather than core values.


Disagreement level

Low to moderate disagreement level with high strategic significance. While speakers largely agree on goals, their different perspectives on implementation methods (voluntary vs mandatory approaches, individual vs societal impact focus, adequacy of existing frameworks) reflect important tensions in the field that could significantly impact how AI governance develops globally. These disagreements are constructive and reflect the complexity of translating human rights principles into practical AI governance mechanisms across different contexts and stakeholder perspectives.


Partial agreements

Partial agreements

Similar viewpoints

Both speakers advocate for using established international human rights frameworks as the foundation for AI governance rather than relying on individual companies’ ethical principles, emphasizing the universal agreement and comprehensive nature of human rights law

Speakers

– Caitlin Kraft-Buchman
– Nathalie Stadelmann

Arguments

Human rights frameworks provide better foundation than a la carte ethical principles since they represent globally agreed standards


UN and multilateral bodies play crucial roles in translating guiding principles for technology sector through multi-stakeholder guidance development


Topics

Human rights | Legal and regulatory


Both speakers emphasize the need for context-specific approaches to AI development that consider diverse perspectives and local realities, requiring multidisciplinary expertise and culturally sensitive frameworks

Speakers

– Jhalak Mrignayani Kakkar
– Caitlin Kraft-Buchman

Arguments

Global South faces different socioeconomic realities where technologies developed in the North may have different implications, requiring specific benchmarks and taxonomies


Design-based approach with intentionality is needed, including multidisciplinary teams with social scientists, human rights experts, and anthropologists


Topics

Human rights | Development | Sociocultural


Both speakers support comprehensive human rights due diligence requirements for AI systems, with regulatory frameworks that mandate transparency and verification processes to ensure non-discriminatory outcomes

Speakers

– Alexandria Walden
– Nathalie Stadelmann

Arguments

Companies should conduct ongoing human rights due diligence across all activities, including AI work, with support for risk-based regulatory approaches


Regulations should require publication of human rights due diligence and verification of data input/output for proper representation and non-discrimination


Topics

Human rights | Legal and regulatory


Takeaways

Key takeaways

Companies must establish baseline human rights governance with top management commitment before integrating AI-specific principles, using UN Guiding Principles as foundation rather than ad hoc ethical frameworks


Effective AI risk management requires comprehensive, holistic approaches that integrate human rights considerations into technical processes through red teaming, secure AI frameworks, and cross-organizational collaboration


Global South contexts require specialized attention due to different socioeconomic realities, with need for context-specific benchmarks, taxonomies, and culturally sensitive implementation approaches


Multi-stakeholder engagement between companies, civil society, academia, and governments is essential for effective human rights integration in AI development and deployment


Risk-based regulatory approaches with mandatory human rights impact assessments for high-risk AI systems are gaining industry support, particularly following EU AI Act model


AI impacts extend beyond individual rights to broader societal changes requiring both human rights assessments and societal impact evaluations


Public procurement represents a significant leverage point for deploying rights-respecting AI at scale, particularly in developing nations where it comprises 30-40% of GDP


Resolutions and action items

GNI AI working group to complete policy brief on government interventions in AI and rights-respecting responses


BTEC to continue developing standardized human rights impact assessment tools and guidance, building on existing generative AI guidance


Participants encouraged to review BTEC’s latest Human Rights Council report on shared responsibility for AI governance


Civil society and academia in Global South to develop context-specific benchmarks and taxonomies for AI risk assessment


Companies to implement ongoing human rights due diligence across all AI activities with appropriate transparency and disclosure mechanisms


Unresolved issues

Lack of standardized human rights impact assessment methodology across international bodies


Challenges in defining appropriate risk thresholds for triggering mitigation measures in AI systems


Difficulty balancing breadth versus specificity in risk identification frameworks


Enforcement mechanisms for human rights due diligence requirements remain unclear


Geopolitical divides on human rights approaches to AI governance continue to impact global coordination


Questions about prioritizing different human rights when they conflict in AI deployment contexts


Uncertainty about optimal levels of transparency and disclosure requirements for companies


Gap between individual human rights frameworks and broader societal impact assessment needs


Suggested compromises

Risk-based regulatory approach that mandates human rights assessments only for high-risk AI applications rather than all AI systems


Strategic framing of human rights conversations in different jurisdictions based on local constitutional and cultural contexts rather than universal approach


Voluntary human rights due diligence with incentives rather than punitive enforcement mechanisms to encourage company participation


Combination of individual human rights assessments with broader societal impact evaluations to capture full scope of AI implications


Multi-stakeholder consultation processes that include affected communities in co-creation rather than top-down assessment approaches


Thought provoking comments

Human rights has to be at the baseline and then integrated into those processes and frameworks… having that in place is what allows us to then get to a point where, okay, we also have AI principles on top of that. Our AI principles build on top of our commitments to the UNGPs, our GNI principles.

Speaker

Alexandria Walden (Google)


Reason

This comment established a foundational framework that human rights governance must precede AI-specific principles, challenging the common approach of treating AI ethics as separate from broader human rights commitments. It provided a clear hierarchy and integration model.


Impact

This comment set the tone for the entire discussion by establishing that human rights should be the foundational layer, not an add-on. It influenced subsequent speakers like Elina Thorström to emphasize similar points about top management commitment and comprehensive approaches, creating a consistent thread throughout the panel.


Our values, our policies, our principles, they guide our work. Irrespective of which country we are at… Although a part of AI governance and good AI governance is of course to make sure that we are compliant with the regulation. But that’s only a part of it. So the company’s culture, policy, guidelines, those are the ones that actually guide us.

Speaker

Elina Thorström (Telenor)


Reason

This insight challenged the compliance-focused approach to AI governance, arguing that internal values should drive behavior regardless of local regulatory requirements. It highlighted the tension between regulatory compliance and ethical leadership.


Impact

This comment shifted the discussion from regulatory compliance to values-driven governance, influencing later speakers to discuss the limitations of purely compliance-based approaches and the need for companies to go beyond legal requirements.


If a lot of these technologies are being developed in the North, we don’t know. They could have very different implications in the South. They’re not being developed and designed keeping in mind those contexts… even within the Indian context, within an urban part of India versus a semi-urban part of India versus a rural part of India will differ significantly.

Speaker

Jhalak Mrignayani Kakkar


Reason

This comment introduced crucial complexity about contextual differences in AI impacts, challenging the assumption that risk assessments can be universally applied. It highlighted the need for granular, context-specific approaches even within single countries.


Impact

This intervention significantly deepened the discussion by introducing the concept of contextual variation in AI impacts. It led to more nuanced conversations about the need for culturally and linguistically sensitive staff, different benchmarks for different contexts, and the limitations of one-size-fits-all approaches to human rights due diligence.


We tend to think of ethical principles, wonderful, and we love them, but they’re very a la carte, whereas human rights frameworks and international human rights law has been agreed by everybody, and as a point of departure, it really is a very good place to start, as opposed to one company’s or one academic institution’s idea of what really should be foregrounded or not.

Speaker

Caitlin Kraft-Buchman


Reason

This comment provided a sharp critique of the current approach to AI ethics, distinguishing between voluntary ethical principles and binding human rights frameworks. It challenged the legitimacy and effectiveness of company-specific ethical approaches.


Impact

This comment reframed the discussion from AI ethics to human rights law, emphasizing the importance of universally agreed standards. It reinforced the earlier points about human rights as foundational and influenced the conversation toward more concrete, legally grounded approaches rather than voluntary principles.


Many of the really big impacts that AI will have will be more at the societal rather than the individual level… I’m just wondering whether the panel thinks that those sort of bigger societal impacts or risks can be captured by a human rights-based approach, or whether we need to go a bit beyond sort of the individual human rights-based approach.

Speaker

Richard Ringfield (BSR)


Reason

This question challenged the fundamental premise of the entire panel by questioning whether human rights frameworks, traditionally focused on individual rights, are adequate for addressing systemic societal changes brought by AI. It introduced a critical limitation of the human rights approach.


Impact

This question prompted the panel to acknowledge the limitations of individual-focused human rights approaches and led to discussions about societal impact assessments, group privacy rights, and the need for broader governance frameworks. It added important nuance to the conversation by highlighting what might be missing from a purely human rights-based approach.


Risk assessment, human rights due diligence. One of the criticisms is the lack of enforceability. But perhaps that’s also where the value is, because perhaps companies are more incentivized to conduct it when they know that there isn’t a negative consequence.

Speaker

Jhalak Mrignayani Kakkar


Reason

This paradoxical insight challenged conventional thinking about enforcement, suggesting that the voluntary nature of human rights due diligence might actually be its strength rather than weakness. It introduced a counterintuitive perspective on regulatory design.


Impact

This comment introduced complexity to the discussion about mandatory versus voluntary approaches to human rights assessments. It influenced the conversation about the Korean AI law and the debate over whether to mandate human rights impact assessments, showing that enforcement mechanisms need careful consideration of incentive structures.


Overall assessment

These key comments fundamentally shaped the discussion by establishing a clear hierarchy (human rights as foundation, not add-on), introducing critical complexity about contextual variation and Global South perspectives, challenging the adequacy of voluntary ethical approaches, and questioning whether individual rights frameworks can address systemic societal impacts. The comments created a progression from basic principles to implementation challenges to fundamental limitations, resulting in a nuanced conversation that moved beyond simple advocacy for human rights in AI to grapple with practical and theoretical challenges. The discussion evolved from ‘why’ human rights matter in AI to ‘how’ to implement them effectively across different contexts, and finally to ‘whether’ current frameworks are sufficient for AI’s societal impacts.


Follow-up questions

How to determine the threshold for requiring risk mitigation in human rights due diligence for AI systems

Speaker

Jhalak Mrignayani Kakkar


Explanation

This is a critical operational challenge for implementing effective human rights risk management, as it determines when companies must take action to address identified risks


What level of transparency should be expected from companies and required by governments in AI human rights assessments

Speaker

Jhalak Mrignayani Kakkar


Explanation

Transparency is essential for identifying emerging challenges and making human rights due diligence a meaningful exercise, but the appropriate level needs to be defined


How to balance breadth versus specificity in defining AI risks for assessment purposes

Speaker

Jhalak Mrignayani Kakkar


Explanation

Too broad a definition hinders development of specific assessment tools, while too narrow a definition may miss important harms that could arise


How to prioritize certain human rights over others in AI risk assessments, given their mutually affirming character

Speaker

Jhalak Mrignayani Kakkar


Explanation

This addresses the practical challenge of resource allocation and focus when multiple human rights may be impacted by AI systems


Development of context-specific benchmarks and taxonomies for Global South countries to identify AI risks and harms

Speaker

Jhalak Mrignayani Kakkar


Explanation

Global benchmarks may not capture the specific socioeconomic realities and societal contexts of Global South countries, requiring localized frameworks


How to strategically frame human rights conversations with governments that have varying levels of human rights embedding in their constitutions

Speaker

Jhalak Mrignayani Kakkar


Explanation

Different countries have different constitutional frameworks for human rights, requiring tailored approaches to achieve intended outcomes


Whether societal-level AI impacts can be fully captured by individual human rights-based approaches

Speaker

Richard Ringfield (BSR)


Explanation

AI’s biggest impacts may be societal rather than individual (education shifts, job displacement), raising questions about whether current human rights frameworks are sufficient


Development of the first machine learning benchmark dealing with international human rights law framework

Speaker

Caitlin Kraft-Buchman


Explanation

This would provide developers with tools to test whether their AI systems align with human rights criteria, filling a current gap in available assessment tools


How to ensure effective multi-stakeholder engagement between companies, civil society, academia, and governments for cross-learning on AI human rights

Speaker

Jhalak Mrignayani Kakkar


Explanation

Sustained collaboration is needed for knowledge sharing and developing effective human rights risk assessment frameworks


Development of standardized human rights impact assessment methodology for AI systems

Speaker

Byungil Oh (Jinbonet)


Explanation

While tools exist, there’s no standardized international methodology, and current tools often lack legal mandates for implementation


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Lightning Talk #246 AI for Sustainable Development Public Private Sector Roles

Lightning Talk #246 AI for Sustainable Development Public Private Sector Roles

Session at a glance

Summary

This discussion at the IGF 2025 focused on how artificial intelligence can advance sustainable development and the respective roles of public and private sectors in this process. The session was hosted by Tsinghua University and featured speakers from academia, government, and youth perspectives. Professor Yong Guo opened by emphasizing AI as a strategic technology with strong spillover effects, highlighting Tsinghua University’s commitment to intelligent society governance through their Institute for Intelligent Society Governance established in 2019. He stressed three key areas: talent development with interdisciplinary education, strengthening collaborative mechanisms across sectors, and deepening international cooperation.


Ms. Xuanyun You from China’s Cyberspace Administration outlined the Chinese government’s approach to AI governance, including President Xi Jinping’s Global AI Governance Initiative and China’s proposal for AI capacity building action plans. She detailed China’s comprehensive regulatory framework encompassing AI policy rules, data security laws, and online information governance, emphasizing principles of balancing development with security and implementing inclusive supervision. Youth Ambassador Xin Yi Ding presented the younger generation’s perspective, acknowledging AI’s benefits in areas like fire prediction, autonomous vehicles, and rural economic development, while warning about risks including data monopolies, algorithmic biases, and the dangerous proliferation of deepfakes and misinformation.


Professor Rony Medaglia from Copenhagen Business School provided research-based insights on AI’s dual impact on sustainability, presenting positive examples in efficiency improvements, data governance, and sustainable business models, while cautioning about negative effects including high energy consumption, water usage for server cooling, rebound effects, and embedded biases in AI systems. The discussion concluded that realizing AI’s potential for sustainable development requires careful mitigation of risks while maximizing opportunities through coordinated international efforts.


Keypoints

## Major Discussion Points:


– **AI’s Role in Advancing Sustainable Development Goals**: The discussion emphasized how AI can contribute to achieving the UN’s 2030 Sustainable Development Goals through improved efficiency, data governance, and sustainable business models, with concrete examples like water management systems and pollution monitoring.


– **International Cooperation and Governance Frameworks**: Speakers highlighted the importance of global collaboration in AI governance, including China’s Global AI Governance Initiative and UN resolutions on AI capacity building, particularly focusing on helping developing countries benefit from AI advancements.


– **Balancing AI Opportunities with Risks**: The conversation addressed both the positive potential of AI (enhanced public services, educational access, economic opportunities) and significant risks (misinformation, deepfakes, algorithmic bias, energy consumption, and threats to public trust).


– **Youth Perspective and Generational Responsibility**: A key focus was placed on the younger generation’s unique position as “digital natives” who must navigate AI’s challenges while taking responsibility for ethical AI development and promoting digital literacy across society.


– **Public-Private Sector Collaboration**: The discussion emphasized the need for coordinated efforts between government, industry, academia, and research institutions to develop appropriate regulations, standards, and ethical frameworks for AI development.


## Overall Purpose:


The discussion aimed to explore how artificial intelligence can be leveraged to advance sustainable development while addressing the respective roles of public and private sectors in this process. The session sought to bring together diverse perspectives from academia, government, youth, and international research to identify both opportunities and challenges in using AI for global sustainability goals.


## Overall Tone:


The discussion maintained a consistently optimistic yet cautious tone throughout. Speakers were enthusiastic about AI’s potential to solve global challenges and advance sustainability, but they balanced this optimism with realistic acknowledgment of significant risks and challenges. The tone was collaborative and forward-looking, emphasizing the need for international cooperation and responsible development. There was no notable shift in tone during the conversation – it remained professionally optimistic while being appropriately mindful of the complexities involved in AI governance and sustainable development.


Speakers

– **Cynthia Su** – Deputy Dean of Tsinghua Range Joint Research Institute for Intelligent Society at Tsinghua University, China; Session moderator


– **Yong Guo** – Vice Chair of University Council of Tsinghua University; Professor


– **Xuanyun You** – Associate Deputy Director-General of the Bureau of Law-Based Cyberspace Governance at the Cyberspace Administration of China


– **Xin Yi Ding** – Youth Ambassador of the Institute of Intelligent Society, Governments of Tsinghua University


– **Rony Medaglia** – Professor at Copenhagen Business School (also referred to as “Ron and Anthony Meddalia” in introduction, but appears to be the same person based on context)


Additional speakers:


None identified beyond the provided speakers names list.


Full session report

# Discussion Report: AI for Sustainable Development – Public and Private Sector Roles


## Executive Summary


This session at the 20th United Nations Internet Governance Forum was hosted by the Tsinghua Range Joint Research Institute for Intelligent Society. The listening talk brought together speakers from Chinese academia and government, international research, and youth advocacy to examine how artificial intelligence can advance sustainable development goals and the roles of public and private sectors in this process.


The discussion featured four main speakers: Professor Yong Guo, Vice Chair of University Council at Tsinghua University; Ms. Xuanyun You, Associate Deputy Director-General of the Bureau of Law-Based Cyberspace Governance at China’s Cyberspace Administration; Youth Ambassador Xin Yi Ding from the Institute of Intelligent Society Governance; and Professor Rony Medaglia from Copenhagen Business School. The conversation addressed both AI’s potential benefits for sustainability and the significant challenges that must be addressed.


## Detailed Discussion Analysis


### Academic and Institutional Framework


Professor Yong Guo opened by referencing President Xi Jinping’s statement that AI “has strong spillover effects that can drive broader progress” as a strategic technology driving scientific revolution and industrial transformation. He highlighted Tsinghua University’s Institute for Intelligent Society Governance, established in 2019 with Professor Su Jun as founding dean, which focuses on fundamental theories and core policy issues of AI integration into society.


Guo outlined three critical areas requiring attention: developing talent through interdisciplinary education that combines technical skills with social responsibility; strengthening collaborative mechanisms across government, industry, academia, and research institutions; and deepening international cooperation through technology exchanges, standard setting, and financial integration to ensure equitable sharing of digital dividends. He emphasized that universities must lead AI development to serve communities and enhance accessibility through interdisciplinary research approaches.


### Government Policy and Regulatory Approach


Ms. Xuanyun You presented China’s comprehensive approach to AI governance, detailing President Xi Jinping’s Global AI Governance Initiative emphasizing people-centered, AI-for-good principles. She referenced specific UN resolutions, including the 17th session resolution on “Seize the Opportunities of Safe, Secure, and Trustworthy AI Systems for Sustainable Development” and the 1978 session resolution on “Enhancing International Cooperation on Capacity Building of AI.”


You outlined China’s regulatory philosophy of “giving equal importance to development and security, innovation and governance,” implementing “inclusive, prudent, and classified and graded supervision of AI.” She detailed China’s legal framework including the “Next Generation AI Development Plan,” “interim measures for the management of generative AI services,” and comprehensive policies covering AI governance, data security, and online information management.


The government’s approach focuses on reducing biases, misinformation, and security threats while building capacity in computing power, data management, and governance structures. You emphasized China’s commitment to international cooperation through AI Capacity Building Action Plans designed to help developing nations strengthen their AI capabilities and participate in the global digital economy.


### Youth Perspective and Digital Literacy


Youth Ambassador Xin Yi Ding opened with direct questions: “Have you ever been tricked by the information online or paused before sharing a viral news thinking it might be AI-generated? These are not just hypothetical questions, they are profound signs of a huge shift.” She demonstrated how AI can create convincing deepfakes that blur lines between reality and fabrication, potentially harming public trust and democratic governance.


Ding highlighted positive AI applications including fire pattern prediction systems, autonomous vehicles with collision avoidance technology, and AI-powered educational platforms. She provided a detailed example of rural economic development: a golden retriever in Zigui County helping sell oranges (Mao Mao fruit) through AI-powered marketing, generating 4 billion yuan in sales by March 2024.


She emphasized that young people bear responsibility in both public and private spheres for AI governance, rejecting “symbolic participations” and demanding substantive involvement in shaping AI development. Ding stressed the need for improved AI literacy to help everyone understand data usage and algorithmic logic while developing critical thinking skills to guard against AI-generated misinformation.


### International Research and Environmental Considerations


Professor Rony Medaglia provided research-based insights on AI’s dual impact on sustainability, identifying three key areas: efficiency improvements, enhanced data governance, and sustainable business model development. He presented concrete examples including Aarhus city using AI sensors in water grids to anticipate usage and reduce waste, Danish drone companies monitoring emissions, and blockchain-AI combinations enabling product provenance tracking for environmental verification.


Medaglia introduced critical data about AI’s environmental costs: “generating one single image with a large language model uses the same amount of CO2 as charging your mobile phone up to 50%.” He detailed how generative AI systems require significant energy resources and create substantial water withdrawal demands for server cooling – “six times the whole country of Denmark for a whole year of water.”


He discussed rebound effects, where efficiency gains can lead to increased resource consumption, using car-sharing applications potentially reducing public transportation use as an example. This highlighted how AI’s benefits can have unintended consequences requiring systems-level thinking rather than assuming efficiency automatically equals sustainability.


## Key Areas of Agreement


All speakers acknowledged AI’s transformative potential for sustainable development across sectors including disaster prediction, resource management, urban governance, and business model innovation. There was consensus on the necessity of multi-stakeholder collaboration involving government, industry, academia, and international institutions, with particular attention to ensuring developing countries can participate in AI advancements.


Speakers agreed that AI poses serious risks requiring attention alongside its benefits, including bias and discrimination, misinformation and deepfakes, energy consumption and environmental impact, and potential threats to public trust and governance systems.


## Different Emphases and Approaches


While agreeing on fundamental principles, speakers emphasized different implementation approaches. The government perspective focused on comprehensive regulatory frameworks and international cooperation initiatives. The academic perspective emphasized institutional leadership and interdisciplinary research. The youth perspective prioritized individual empowerment through AI literacy and meaningful participation in governance processes. The international research perspective stressed evidence-based understanding of both benefits and risks, particularly environmental impacts and systemic effects.


## Ongoing Challenges


Several challenges were identified without complete resolution. The environmental paradox of AI – where systems designed to improve sustainability may contribute to environmental degradation through energy consumption – requires frameworks to measure net environmental impact. The crisis of AI-generated misinformation and deepfakes needs technical and policy solutions for detection and prevention at scale.


International cooperation frameworks, while widely endorsed, require detailed implementation across diverse political and economic systems. Ensuring AI literacy reaches all populations, especially in developing countries, remains a practical challenge despite being identified as essential.


## Conclusion


The discussion demonstrated understanding of AI’s complex relationship with sustainable development among key stakeholders. Rather than presenting AI simply as a sustainability tool, the conversation examined AI as a transformative force that challenges our capacity for collective action on sustainability issues.


The combination of policy, academic, youth, and international research perspectives created dialogue that addressed governance, equity, and social trust in the digital age. While significant challenges remain, the consensus on principles and complementary stakeholder perspectives suggest a foundation for continued collaboration in developing AI governance frameworks that can advance sustainable development goals while addressing risks and ensuring equitable participation in shaping our technological future.


Session transcript

Cynthia Su: Ladies and gentlemen, good afternoon. Welcome to this listening talk at IGF 2025, AI for Sustainable Development, the Roles of the Public and Private Sectors. I’m Cynthia Su, the Deputy Dean of Tsinghua Range Joint Research Institute for Intelligent Society at Tsinghua University, China. And I’m honored to serve as your moderator for today’s session. First of all, on behalf of the organizers, Tsinghua University, I would like to extend our warmest welcome to our distinguished guests and audience. Today’s listening talk will focus on how AI can help advance sustainable development. And today we are honored to have four distinguished guests to join us. First of all, I’d like to welcome Professor Yong Guo, the Vice Chair of University Council of Tsinghua University to deliver the opening remarks. Welcome, Professor Guo.


Yong Guo: Thank you, Xin, for your brief introduction. Distinguished guests, colleagues and friends, ladies and gentlemen, it’s a true pleasure to join you at the 20th United Nations Internet Governance Forum to explore some of the most pressing questions of our time. How can AI contribute to sustainable development? And how are the respective roles of the public and private sectors evolving in this process? On behalf of Tsinghua University, the host of this listening talk, I would like to extend my warmest welcome and heartfelt thanks to all the distinguished experts and guests for being here today. Sustainable development in the AI era has become a central theme in global governance. Chinese President Mr. Xi Jinping has emphasized that AI is a strategic technology and the forefront of today’s scientific revolution and industrial transformation, with strong spillover effects that can drive broader progress. Universities are engines for knowledge and innovation, so we must take the active role in leading AI to serve communities and accessibility. As AI is becoming more and more integrated into society, Tsinghua is committed to leading in AI research and applications. We are also pursuing interdisciplinary studies that investigate how to govern a future intelligent society. In China, we refer to this concept as intelligent society governance. We are deeply mindful of the societal challenges that AI brings. In response, Tsinghua University established the Institute for Intelligent Society Governance in 2019, with Professor Su Jun serve as its founding dean. This institute brings together the university’s multidisciplinary strategies to conduct in-depth research on the fundamental theories and core policy issues of the intelligent society. Over the years, the RISG has made significant contributions in key areas, including AI and information cocoons, social polarization, employment transformation, and energy transition. Our work represents an important exploration of Chinese solutions for intelligent society governance. These efforts have received strong support from the Chinese governments and broad recognition from across the society. The RISG has delivered exceptional outcomes in scientific research, talent cultivation, international collaboration, and standard setting, offering valuable Chinese insights to the global discourse on intelligent society governance. We are honored to host this lightning talk today, and in this context, I would like to share three reflections. First, we must focus on talent development. This means advancing interdisciplinary education in AI and intelligent society governance to cultivate a new generation of talent, individuals with both technical proficiency and a strong sense of social responsibility. Secondly, we must strengthen collaborative mechanisms. Deeper collaboration across government, industry, academia, and research institutions is needed to tackle both theoretical and practical challenges in using AI for sustainable development and to elevate our innovation and applications to new heights. Thirdly, we must deepen international cooperation. By advancing global exchanges in technology, standard setting, and financial integration, we can ensure digital dividends are shared more widely and equitably. In closing, I wish this session great success. I look forward to hearing the diverse insights from all the speakers today and to joining forces in shaping AI’s development towards a more sustainable and equitable future for all. Thank you. Thank you.


Cynthia Su: Thank you. Thank you, Professor Guo, for your inspiring remarks. Now I’d like to invite Ms. Xuanyun You, the Associate Deputy Director-General of the Bureau of Law-Based Cyberspace Governance at the Cyberspace Administration of China. She will speak about how the Chinese government is actively promoting the development of international frameworks and implementing national policies, regulations, and standards of AI towards sustainable development. Welcome.


Xuanyun You: Thank you. As you know, AI has exerted profound influence on socioeconomic development and the progress of human civilizations, bringing unprecedented development opportunities to the world. While present unprecedented risks and challenges, AI can turbocharge stable development, but transforming its potential into reality requires AI that reduces biases, misinformation, and security threats, building capacity on computing power, data, and governance, global coordinating to build a safe, secure, and inclusive AI that is accessible to all. Last year, the 17th session of the UN General Assembly adopted a resolution entitled, Seize the Opportunities of Safe, Secure, and Trustworthy AI Systems for Sustainable Development. It’s the first ever resolution negotiated at the UN General Assembly to establish a global consensus approach to AI conference. The resolution encouraged member states to promote safe, secure, and trustworthy AI systems to advance sustainable development by developing regulatory and governance approaches and frameworks related to AI systems. China is an active advocate and practitioner of global AI governance. In 2023, President Xi Jinping proposed the Global AI Governance Initiative, systematically expanding on the Chinese plan in three aspects, AI development, security, and governance. The initiative suggested that development AI should adhere to the principle of people-centered, AI for good, to ensure that AI always develops in a way that is beneficial to human civilization. We support the rule of AI in promoting sustainable development. and Tackling Global Challenges. To implement the initiative, National Technical Committee on Cyber Security and Standardization Administration of China released the AI Safety Governance Framework. In 2024, the 1978 session of the UN General Assembly unanimously adopted the resolution entitled Enhancing International Cooperation on Capacity Building of AI, proposed by China. The resolution aims to achieve inclusive, beneficial and sustainable development of AI, encourages international cooperation and practical actions to help countries, especially developing countries, strengthen their AI capacity building and support the United Nations in playing a central role in international cooperation. To implement the resolution, China proposed the AI Capacity Building Action Plan for Good and for AI, and put forward five visions and goals and attend China’s actions, which aim to bridge the AI digital device, especially to help the global South benefit actively from AI developments, and promote the implementation of the UN 2030 Agenda for Sustainable Development. China has formulated and implemented laws and regulations on the development and governance of AI. First, AI policy and rules and standards, including Next Generation AI Development Plan formulated by the State Council, six regulations and rules formulated by the Cyberspace Administration of China, such as interim measures for the management of generative AI services, measures for the identification of synthetic contacts generated by AI, and several standards, including basic security requirements for generative AI services. Second, data security governance laws and regulations related to AI, such as data security law, personal information protection law, and regulations on the management of network data security. Third, online information governance regulations and rules related to AI, such as regulation on the protection of minors in cyberspace, and the rules on the governance of online violence information. In legislation and law enforcement, China adheres to the principles of giving equal importance to development and security, innovation, and governance, takes effective measures to encourage the innovative development of AI, and implements inclusive, prudent, and classified and graded supervision of AI. In the development and application of AI, China protects personal privacy and data security, prohibits theft, alteration, leakage, and other illegal collection and use of personal information, protects international property rights, and prevents and manages risks such as false information, adverse biases, and system attacks. In the future, China will speed up the formulation and improvement of relevant policies, regulations, application specifications, and ethical standards, build a technical monitoring, risk warning, and emergency response system, improve the development and management mechanism of generative AI, establish an AI security supervision system, ensure that every stage of the AI life cycle is more safe, reliable, controllable, and equitable, and will continuously counter international exchanges and cooperation in AI development, security, and governance to promote sustainable development. Thank you so much.


Cynthia Su: Okay. Thank you, Ms. Yu. Now, I’d like to welcome Ms. Xin Yi Ding, the Youth Ambassador of the Institute of Intelligent Society, Governments of Tsinghua University, to share from the young perspective.


Xin Yi Ding: Good afternoon. Honorable guests, ladies, and gentlemen, have you ever been tricked by the information online or paused before sharing a viral news thinking it might be AI-generated? These are not just hypothetical questions, they are profound signs of a huge shift. Artificial intelligence is not just changing how we access information, but also the very concept of truth and trust. As a part of the generation that creates the future, we’re born into a coexistence with artificial intelligence. Its influence shifts every corner of society, from individuals to organizations, from governments to markets. Now, let’s be clear. AI creates opportunities for many fronts, including public service, urban governance, and sustainable development. Advanced AI models, like the one I show, the fire patterns, predicts fire patterns before it spreads. And autonomous cars, like mobile eyes, could enhance road safety by providing collision avoidance systems. And also, air-powered platforms provide educational opportunities to underserved areas. With each innovation, we make human lives more efficient, convenient, and safe. AI also creates opportunities for groups unseen by the mainstream society. In rural China, farmers started selling Mao Mao fruit, named after a golden retriever who took bites on oranges. And the story went viral online, which revitalized the economy of Zigui County in rural China. By March 2024, it generated 4 billion yuan in annual sales. Among other similar cases, AI also demonstrates the power not just to benefit a few, but also to uplift many. Yet, as we embrace AI’s promises, we must also see its risks. Data monopolies, algorithmic biases, information overload, and high energy consumption, all of which could widen the gap between the ones with power and the ones without. Let’s take this information as an example. Our generation habitually relies on AI, yet AI can generate false information while presenting it as truth. This photo, it shows me, but I’ve never had that hairstyle, never got a sweater like this. It was actually swapped with a face of the lady there, and it’s generated by Pika Art. So AI has immense power that’s misleading. Imagine seeing a pornography of a friend online. How many seconds would it take you to doubt if it’s deepfaked? When AI can so easily blur the lines between reality and falsehood, such fabrication can cause great harm. And I’m presenting a video right here. It’s IGF. It’s also AI-generated. None of this is real. And this is not just a personal, interpersonal relationship danger. It’s also a crisis in terms of public trust. When a world leader’s face can be put into any video, when seeing is no longer believing, our executional trust crumbles. And that is why our generation bears the dual responsibility to act in both public and private spheres. This begins with imperative of intelligent society governance, which is to remedy the impact of AI and putting a focus on a human and ethical aspect of AI society. And first, at individual level, we’ll be empowered with cognitive tools to guard against AI hallucination, to question sorts, cultivate critical thinking. Second, as collective, we young people cannot just settle for symbolic participations. We must also push for ethical frameworks and regulations that keep pace with AI’s rapid growth. Thirdly, we need to improve AI literacy for everyone. People need to understand how data is used and collected and learn logic behind algorithms. And our mission, as Yuan stated, it’s to leave no one behind. We have a long journey ahead in the future shaped by AI, but I believe we’re ready to take the right steps.


Cynthia Su: Thank you. Thank you, Ms. Ding, for your excellent insights on the importance of youth in promoting a sustainable development of AI. And finally, it’s my great honor to invite our last keynote speaker, the Professor Ron and Anthony Meddalia from Copenhagen Business School. Welcome.


Rony Medaglia: Thank you very much. So when we talk about artificial intelligence impacts on sustainability, of course, a useful thing to do is look at what data shows us and what scholarly research shows us. So I’ll give you a little bit of insights and you’re free to look up into the sources of everything I say in the QR codes that you see in the slides, starting from my own profile as professor at the Copenhagen Business School. When we talk about potentials, basically it’s very hard to think of any achievement of any of the SDGs without using artificial intelligence. Research shows us that the potentials are really in three areas. Efficiency, data governance and sustainable business models. So here are some examples from research. This is an example that really shows us how you can use AI for increasing efficiency for SDG goals. In the city of Aarhus in Denmark, where I live and work, there has been a use of sensors in the drinkable water grid of the city to anticipate the use of drinkable water throughout the territory and therefore reduce waste of such a precious resource. So this is clearly an example in line with SDG number six about clean water and sanitation. Another key area of use for AI is related to sustainability data governance. This is an example of a Danish company that uses drone technology to collect data about emissions from factories or from physical locations that are very hard to reach otherwise. And by feeding this data into machine learning, they’re able to anticipate levels of pollution in an area and inform policies. The third potential area is about creating new sustainable business models. This is a Danish company that is using a blockchain together with artificial intelligence to track the provenance of products such as design products, in this case a pair of shoes, whereby the consumer, by scanning a QR code, can identify whether that pair of shoes has been produced in ways that are environmentally friendly. However, the other side research shows us of AI impacts on sustainability are also negative. So we do know that, for instance, generative AI, the new large language models, use a lot of energy. So the flip side of this is that, for instance, research shows us that to generate one single image with a large language model, such as chat GPT, uses the same amount of CO2 as charging your mobile phone up to 50%. These servers that do the strong computing for large language models also need to be cooled down. So it has been calculated recently that the global AI demand may be accountable in two years from now for a water withdrawal equal to six times the whole country of Denmark for a whole year of water. Another element that cannot be understated is the so-called rebound effects. When we are able to distribute resources more efficiently, think about the sharing of cars that is done through new applications, this could be unintended consequences. For instance, people use less of public transportation, and then we will have more traffic on the streets. So how do we account for these unintended consequences? Last but not least, with AI, we see a lot of potential for bias and discrimination. If you ask a large language model to produce an image, for instance, of a CEO, you will always have, most of the times, a white young male. But if you ask for an HR manager or a nurse, you will always have a woman. So the potential for increasing discrimination, digital divide, and bias is sort of embedded in the way AI works in many cases, and it has to be mitigated. So the bottom line is that we need to have an understanding of what research shows us besides the desiderata of AI on sustainability, and therefore mitigate the risk in order to achieve the potentials of artificial intelligence for sustainable development goals. Thank you very much.


Cynthia Su: Thank you for your excellent speech. So dear guests and audience members, Today’s keynote speakers have shared from various perspectives how AI empowers sustainable development, its potential risks and practices for promoting AI sustainable growth, inspiring deep reflection among us. Due to time constraints, today’s event comes to an end. Thank you very much, and I’m looking forward to seeing you next year. The last but not the least, tomorrow we’ll have the session at 9am at Workshop 3. I’m looking forward to seeing you there as well. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you.


Y

Yong Guo

Speech speed

104 words per minute

Speech length

517 words

Speech time

298 seconds

AI is a strategic technology driving scientific revolution and industrial transformation with strong spillover effects

Explanation

Yong Guo emphasizes that AI represents a fundamental strategic technology that is at the forefront of current scientific and industrial changes. He argues that AI has powerful spillover effects that can drive broader progress across multiple sectors and domains.


Evidence

Reference to Chinese President Xi Jinping’s emphasis on AI as a strategic technology


Major discussion point

AI’s Role in Advancing Sustainable Development


Topics

Development | Economic


Agreed with

– Xuanyun You
– Xin Yi Ding
– Rony Medaglia

Agreed on

AI has transformative potential for sustainable development across multiple sectors


Universities must take active roles in leading AI to serve communities and accessibility through interdisciplinary research

Explanation

Guo argues that universities, as engines of knowledge and innovation, have a responsibility to lead AI development in ways that benefit communities and ensure accessibility. This requires interdisciplinary approaches that combine technical expertise with social responsibility.


Evidence

Tsinghua University’s establishment of the Institute for Intelligent Society Governance in 2019 and its multidisciplinary research on AI and social issues


Major discussion point

Government and Institutional Frameworks for AI Governance


Topics

Development | Sociocultural


Disagreed with

– Xin Yi Ding

Disagreed on

Primary focus for addressing AI challenges


The concept of intelligent society governance addresses fundamental theories and core policy issues of AI integration

Explanation

Guo presents intelligent society governance as a comprehensive framework for understanding and managing AI’s integration into society. This concept encompasses both theoretical foundations and practical policy challenges that arise from AI adoption.


Evidence

Tsinghua’s RISG contributions in AI and information cocoons, social polarization, employment transformation, and energy transition


Major discussion point

Government and Institutional Frameworks for AI Governance


Topics

Legal and regulatory | Development


Deeper collaboration across government, industry, academia, and research institutions is needed to tackle AI challenges

Explanation

Guo advocates for strengthened collaborative mechanisms that bring together multiple stakeholders to address both theoretical and practical challenges in using AI for sustainable development. This multi-sector approach is essential for elevating innovation and applications to new heights.


Major discussion point

International Cooperation and Capacity Building


Topics

Development | Economic


Agreed with

– Xuanyun You
– Cynthia Su

Agreed on

Multi-stakeholder collaboration is essential for effective AI governance


Global exchanges in technology, standard setting, and financial integration are necessary to ensure equitable sharing of digital dividends

Explanation

Guo emphasizes the importance of international cooperation in advancing global exchanges across multiple dimensions including technology transfer, standard development, and financial integration. The goal is to ensure that the benefits of digital transformation are shared more widely and equitably across different regions and populations.


Major discussion point

International Cooperation and Capacity Building


Topics

Development | Economic | Infrastructure


X

Xuanyun You

Speech speed

105 words per minute

Speech length

684 words

Speech time

387 seconds

AI can turbocharge sustainable development by reducing biases, misinformation, and security threats while building capacity in computing power, data, and governance

Explanation

You argues that AI has the potential to accelerate sustainable development, but this requires addressing key challenges like bias, misinformation, and security issues. Success depends on building robust capacity in computing infrastructure, data management, and governance frameworks.


Evidence

Reference to UN General Assembly resolution on ‘Seize the Opportunities of Safe, Secure, and Trustworthy AI Systems for Sustainable Development’


Major discussion point

AI’s Role in Advancing Sustainable Development


Topics

Development | Cybersecurity


Agreed with

– Xin Yi Ding
– Rony Medaglia

Agreed on

AI poses significant risks that must be addressed alongside its benefits


China proposed the Global AI Governance Initiative emphasizing people-centered, AI for good principles

Explanation

You describes China’s 2023 Global AI Governance Initiative as a comprehensive framework that systematically addresses AI development, security, and governance. The initiative emphasizes that AI development should be people-centered and beneficial to human civilization.


Evidence

President Xi Jinping’s 2023 proposal and the three aspects: AI development, security, and governance


Major discussion point

Government and Institutional Frameworks for AI Governance


Topics

Legal and regulatory | Development


China has formulated comprehensive laws and regulations including AI policy rules, data security governance, and online information governance

Explanation

You outlines China’s multi-layered regulatory approach to AI governance, which includes specific AI policies, data security laws, and online information governance rules. This comprehensive framework addresses different aspects of AI development and deployment while ensuring security and ethical considerations.


Evidence

Specific examples including Next Generation AI Development Plan, interim measures for generative AI services, data security law, personal information protection law, and regulations on network data security


Major discussion point

Government and Institutional Frameworks for AI Governance


Topics

Legal and regulatory | Human rights


Disagreed with

– Rony Medaglia

Disagreed on

Approach to AI risk mitigation and regulation


China proposed AI Capacity Building Action Plan to bridge the digital divide and help developing countries benefit from AI developments

Explanation

You describes China’s initiative to address global AI inequality through capacity building efforts specifically targeted at developing countries. The plan aims to ensure that the benefits of AI development are accessible to the Global South and support the UN 2030 Agenda for Sustainable Development.


Evidence

UN General Assembly resolution ‘Enhancing International Cooperation on Capacity Building of AI’ adopted unanimously in 2024, and the five visions and goals of the Action Plan


Major discussion point

International Cooperation and Capacity Building


Topics

Development | Infrastructure


Agreed with

– Yong Guo
– Cynthia Su

Agreed on

Multi-stakeholder collaboration is essential for effective AI governance


X

Xin Yi Ding

Speech speed

145 words per minute

Speech length

596 words

Speech time

246 seconds

AI creates opportunities in public service, urban governance, and sustainable development through applications like fire pattern prediction and autonomous vehicles

Explanation

Ding highlights the positive potential of AI across multiple sectors, emphasizing how advanced AI models can predict and prevent disasters, enhance safety, and provide educational opportunities. She argues that these innovations make human lives more efficient, convenient, and safe.


Evidence

Examples of AI models predicting fire patterns, autonomous cars with collision avoidance systems, and AI-powered educational platforms for underserved areas


Major discussion point

AI’s Role in Advancing Sustainable Development


Topics

Development | Infrastructure


Agreed with

– Yong Guo
– Xuanyun You
– Rony Medaglia

Agreed on

AI has transformative potential for sustainable development across multiple sectors


AI presents unprecedented risks including data monopolies, algorithmic biases, information overload, and high energy consumption

Explanation

Ding warns about the significant risks that accompany AI’s promises, particularly how these risks could exacerbate existing inequalities. She argues that these challenges could widen the gap between those with power and those without, creating new forms of digital divide.


Major discussion point

Risks and Challenges of AI Implementation


Topics

Human rights | Economic | Development


Agreed with

– Xuanyun You
– Rony Medaglia

Agreed on

AI poses significant risks that must be addressed alongside its benefits


AI can generate false information and deepfakes that blur the lines between reality and falsehood, threatening public trust

Explanation

Ding demonstrates how AI’s ability to create convincing fake content poses serious threats to individual relationships and public trust. She argues that when AI can easily manipulate visual and audio content, it undermines the fundamental basis of truth and trust in society.


Evidence

Personal demonstration of AI-generated fake photos and videos, including a face-swapped image and an AI-generated IGF video


Major discussion point

Risks and Challenges of AI Implementation


Topics

Sociocultural | Human rights


Young people bear dual responsibility to act in both public and private spheres regarding AI governance

Explanation

Ding argues that the younger generation, having grown up with AI, has a unique responsibility to address AI’s challenges across different domains. This includes both personal responsibility for critical thinking and collective responsibility for pushing ethical frameworks and regulations.


Major discussion point

Youth Perspective and Future Responsibilities


Topics

Legal and regulatory | Sociocultural


The younger generation needs cognitive tools to guard against AI hallucination and must cultivate critical thinking

Explanation

Ding emphasizes that young people need to be equipped with specific skills to identify and resist AI-generated misinformation. She argues that developing critical thinking abilities is essential for navigating an AI-dominated information landscape.


Major discussion point

Youth Perspective and Future Responsibilities


Topics

Sociocultural | Human rights


Disagreed with

– Yong Guo

Disagreed on

Primary focus for addressing AI challenges


AI literacy improvement is essential for everyone to understand data usage and algorithmic logic

Explanation

Ding advocates for widespread AI literacy education that goes beyond basic digital skills to include understanding of how AI systems collect and use data, and how algorithms make decisions. This knowledge is presented as fundamental for informed participation in an AI-driven society.


Major discussion point

Youth Perspective and Future Responsibilities


Topics

Development | Sociocultural


AI applications in rural China, such as the Mao Mao fruit case, demonstrate AI’s power to uplift communities and generate economic benefits

Explanation

Ding presents a success story from rural China where AI-powered social media helped transform a local agricultural product into a viral sensation, generating significant economic impact. This example illustrates how AI can benefit marginalized communities and create new economic opportunities.


Evidence

The Mao Mao fruit story from Zigui County that generated 4 billion yuan in annual sales by March 2024


Major discussion point

Practical Applications and Real-World Examples


Topics

Economic | Development


R

Rony Medaglia

Speech speed

145 words per minute

Speech length

668 words

Speech time

275 seconds

AI potentials for sustainability exist in three key areas: efficiency, data governance, and sustainable business models

Explanation

Medaglia presents a research-based framework for understanding AI’s contribution to sustainability, identifying three primary areas where AI can make significant impact. He argues that these areas represent the main pathways through which AI can contribute to achieving Sustainable Development Goals.


Evidence

Research examples from Danish cities and companies, including water management in Aarhus, drone technology for emissions monitoring, and blockchain-AI combination for product tracking


Major discussion point

AI’s Role in Advancing Sustainable Development


Topics

Development | Economic | Infrastructure


Agreed with

– Yong Guo
– Xuanyun You
– Xin Yi Ding

Agreed on

AI has transformative potential for sustainable development across multiple sectors


Generative AI uses significant energy resources and creates substantial water withdrawal demands for server cooling

Explanation

Medaglia presents research findings on the environmental costs of AI, particularly generative AI models. He quantifies the energy consumption and water usage required for AI operations, demonstrating that AI’s environmental footprint is substantial and growing.


Evidence

Research showing that generating one image with large language models uses CO2 equivalent to charging a mobile phone 50%, and global AI demand may require water withdrawal equal to six times Denmark’s annual consumption


Major discussion point

Risks and Challenges of AI Implementation


Topics

Development | Infrastructure


AI systems exhibit embedded bias and discrimination, often reinforcing stereotypes in generated content

Explanation

Medaglia demonstrates how AI systems perpetuate and amplify existing social biases through their outputs. He argues that these biases are embedded in how AI works and represent a significant challenge for achieving equitable AI deployment.


Evidence

Examples of large language models consistently generating images of CEOs as white young males, while depicting HR managers and nurses as women


Major discussion point

Risks and Challenges of AI Implementation


Topics

Human rights | Sociocultural


Agreed with

– Xuanyun You
– Xin Yi Ding

Agreed on

AI poses significant risks that must be addressed alongside its benefits


Disagreed with

– Xuanyun You

Disagreed on

Approach to AI risk mitigation and regulation


Danish cities use AI sensors in water grids to anticipate usage and reduce waste of precious resources

Explanation

Medaglia provides a concrete example of how AI can improve resource efficiency in urban infrastructure. The case demonstrates how predictive AI can optimize resource distribution and reduce waste in essential services.


Evidence

Specific example from the city of Aarhus, Denmark, using sensors in the drinkable water grid to anticipate usage and reduce waste


Major discussion point

Practical Applications and Real-World Examples


Topics

Development | Infrastructure


Blockchain and AI combination enables tracking product provenance for environmentally friendly production verification

Explanation

Medaglia describes how combining blockchain technology with AI creates new possibilities for sustainable business models. This approach allows consumers to verify the environmental credentials of products, potentially driving demand for more sustainable production methods.


Evidence

Example of a Danish company using blockchain and AI to track shoe production, allowing consumers to scan QR codes to verify environmental friendliness


Major discussion point

Practical Applications and Real-World Examples


Topics

Economic | Development


C

Cynthia Su

Speech speed

124 words per minute

Speech length

376 words

Speech time

180 seconds

AI for sustainable development requires examining roles of both public and private sectors

Explanation

Su frames the discussion around understanding how AI can advance sustainable development by examining the distinct and complementary roles that public and private sectors play. She emphasizes that both sectors have important contributions to make in leveraging AI for sustainability goals.


Evidence

Session title and framing: ‘AI for Sustainable Development, the Roles of the Public and Private Sectors’


Major discussion point

AI’s Role in Advancing Sustainable Development


Topics

Development | Economic


Keynote speakers provide diverse perspectives on AI empowerment, risks, and sustainable growth practices

Explanation

Su synthesizes the contributions of the various speakers, noting that they have shared insights from different viewpoints on how AI can empower sustainable development while also addressing potential risks. She emphasizes that these diverse perspectives inspire deep reflection on the challenges and opportunities ahead.


Evidence

Summary of speakers’ contributions covering government policy, youth perspectives, academic research, and institutional frameworks


Major discussion point

Government and Institutional Frameworks for AI Governance


Topics

Development | Sociocultural


Academic institutions like Tsinghua University play crucial roles in hosting international dialogue on AI governance

Explanation

Su positions universities as important conveners and facilitators of international discussions on AI governance and sustainable development. She emphasizes the role of academic institutions in bringing together diverse stakeholders to address global challenges related to AI implementation.


Evidence

Tsinghua University hosting the IGF 2025 listening talk and bringing together government officials, academics, and youth representatives


Major discussion point

International Cooperation and Capacity Building


Topics

Development | Sociocultural


Agreed with

– Yong Guo
– Xuanyun You

Agreed on

Multi-stakeholder collaboration is essential for effective AI governance


Agreements

Agreement points

AI has transformative potential for sustainable development across multiple sectors

Speakers

– Yong Guo
– Xuanyun You
– Xin Yi Ding
– Rony Medaglia

Arguments

AI is a strategic technology driving scientific revolution and industrial transformation with strong spillover effects


AI can turbocharge sustainable development by reducing biases, misinformation, and security threats while building capacity in computing power, data, and governance


AI creates opportunities in public service, urban governance, and sustainable development through applications like fire pattern prediction and autonomous vehicles


AI potentials for sustainability exist in three key areas: efficiency, data governance, and sustainable business models


Summary

All speakers acknowledge AI’s significant potential to advance sustainable development goals through various applications including disaster prediction, resource management, urban governance, and creating new business models


Topics

Development | Economic | Infrastructure


Multi-stakeholder collaboration is essential for effective AI governance

Speakers

– Yong Guo
– Xuanyun You
– Cynthia Su

Arguments

Deeper collaboration across government, industry, academia, and research institutions is needed to tackle AI challenges


China proposed AI Capacity Building Action Plan to bridge the digital divide and help developing countries benefit from AI developments


Academic institutions like Tsinghua University play crucial roles in hosting international dialogue on AI governance


Summary

Speakers emphasize the need for collaborative approaches involving government, industry, academia, and international institutions to address AI governance challenges effectively


Topics

Development | Legal and regulatory | Sociocultural


AI poses significant risks that must be addressed alongside its benefits

Speakers

– Xuanyun You
– Xin Yi Ding
– Rony Medaglia

Arguments

AI can turbocharge sustainable development by reducing biases, misinformation, and security threats while building capacity in computing power, data, and governance


AI presents unprecedented risks including data monopolies, algorithmic biases, information overload, and high energy consumption


AI systems exhibit embedded bias and discrimination, often reinforcing stereotypes in generated content


Summary

Speakers recognize that while AI offers tremendous opportunities, it also presents serious challenges including bias, misinformation, energy consumption, and potential for discrimination that require careful management


Topics

Human rights | Development | Cybersecurity


Similar viewpoints

Both speakers emphasize the importance of international cooperation and capacity building to ensure equitable access to AI benefits, particularly for developing countries

Speakers

– Yong Guo
– Xuanyun You

Arguments

Global exchanges in technology, standard setting, and financial integration are necessary to ensure equitable sharing of digital dividends


China proposed AI Capacity Building Action Plan to bridge the digital divide and help developing countries benefit from AI developments


Topics

Development | Economic | Infrastructure


Both speakers highlight how AI systems can perpetuate harmful biases and create misleading content, emphasizing the need for critical evaluation of AI outputs

Speakers

– Xin Yi Ding
– Rony Medaglia

Arguments

AI can generate false information and deepfakes that blur the lines between reality and falsehood, threatening public trust


AI systems exhibit embedded bias and discrimination, often reinforcing stereotypes in generated content


Topics

Human rights | Sociocultural


Both speakers emphasize the critical role of universities and academic institutions in leading AI research, governance discussions, and ensuring AI serves broader societal needs

Speakers

– Yong Guo
– Cynthia Su

Arguments

Universities must take active roles in leading AI to serve communities and accessibility through interdisciplinary research


Academic institutions like Tsinghua University play crucial roles in hosting international dialogue on AI governance


Topics

Development | Sociocultural


Unexpected consensus

Environmental impact of AI systems

Speakers

– Xin Yi Ding
– Rony Medaglia

Arguments

AI presents unprecedented risks including data monopolies, algorithmic biases, information overload, and high energy consumption


Generative AI uses significant energy resources and creates substantial water withdrawal demands for server cooling


Explanation

Despite representing different perspectives (youth advocate vs. academic researcher), both speakers independently identified AI’s environmental impact as a significant concern, with specific attention to energy consumption – an issue that might not be immediately obvious when discussing AI for sustainability


Topics

Development | Infrastructure


Need for comprehensive regulatory frameworks

Speakers

– Xuanyun You
– Xin Yi Ding

Arguments

China has formulated comprehensive laws and regulations including AI policy rules, data security governance, and online information governance


Young people bear dual responsibility to act in both public and private spheres regarding AI governance


Explanation

Unexpected alignment between a government official emphasizing existing regulatory frameworks and a youth representative calling for stronger ethical frameworks and regulations, showing cross-generational agreement on the need for robust governance structures


Topics

Legal and regulatory | Human rights


Overall assessment

Summary

The speakers demonstrated strong consensus on AI’s transformative potential for sustainable development, the need for multi-stakeholder collaboration, and the importance of addressing AI risks alongside benefits. Key areas of agreement include the necessity of international cooperation, the role of academic institutions, and the recognition of both opportunities and challenges presented by AI systems.


Consensus level

High level of consensus with complementary perspectives rather than conflicting viewpoints. The speakers approached the topic from different angles (government policy, academic research, youth advocacy, international cooperation) but arrived at similar conclusions about the need for balanced, collaborative approaches to AI governance for sustainable development. This strong alignment suggests a mature understanding of AI governance challenges and indicates potential for effective policy coordination across different stakeholder groups.


Differences

Different viewpoints

Approach to AI risk mitigation and regulation

Speakers

– Xuanyun You
– Rony Medaglia

Arguments

China has formulated comprehensive laws and regulations including AI policy rules, data security governance, and online information governance


AI systems exhibit embedded bias and discrimination, often reinforcing stereotypes in generated content


Summary

You emphasizes China’s comprehensive regulatory framework as a solution to AI challenges, while Medaglia focuses on the inherent technical limitations and biases embedded in AI systems that require mitigation beyond regulatory approaches


Topics

Legal and regulatory | Human rights


Primary focus for addressing AI challenges

Speakers

– Xin Yi Ding
– Yong Guo

Arguments

The younger generation needs cognitive tools to guard against AI hallucination and must cultivate critical thinking


Universities must take active roles in leading AI to serve communities and accessibility through interdisciplinary research


Summary

Ding emphasizes individual-level cognitive skills and critical thinking as primary solutions, while Guo focuses on institutional leadership and interdisciplinary research approaches


Topics

Sociocultural | Development


Unexpected differences

Environmental impact prioritization

Speakers

– Rony Medaglia
– Other speakers

Arguments

Generative AI uses significant energy resources and creates substantial water withdrawal demands for server cooling


Explanation

Medaglia was the only speaker to explicitly quantify and emphasize the environmental costs of AI operations, while other speakers focused primarily on AI’s potential benefits for sustainability without addressing its environmental footprint. This represents an unexpected gap in a discussion about sustainable development


Topics

Development | Infrastructure


Overall assessment

Summary

The discussion showed relatively low levels of direct disagreement, with speakers generally complementing rather than contradicting each other’s perspectives. Main areas of difference centered on implementation approaches (regulatory vs. technical vs. educational) and priority focus (institutional vs. individual vs. international cooperation)


Disagreement level

Low to moderate disagreement level. The speakers represented different stakeholder perspectives (government, academia, youth, international research) and their differences were more about emphasis and approach rather than fundamental opposition. This suggests a constructive dialogue environment but may indicate insufficient critical examination of competing approaches to AI governance and sustainable development


Partial agreements

Partial agreements

Similar viewpoints

Both speakers emphasize the importance of international cooperation and capacity building to ensure equitable access to AI benefits, particularly for developing countries

Speakers

– Yong Guo
– Xuanyun You

Arguments

Global exchanges in technology, standard setting, and financial integration are necessary to ensure equitable sharing of digital dividends


China proposed AI Capacity Building Action Plan to bridge the digital divide and help developing countries benefit from AI developments


Topics

Development | Economic | Infrastructure


Both speakers highlight how AI systems can perpetuate harmful biases and create misleading content, emphasizing the need for critical evaluation of AI outputs

Speakers

– Xin Yi Ding
– Rony Medaglia

Arguments

AI can generate false information and deepfakes that blur the lines between reality and falsehood, threatening public trust


AI systems exhibit embedded bias and discrimination, often reinforcing stereotypes in generated content


Topics

Human rights | Sociocultural


Both speakers emphasize the critical role of universities and academic institutions in leading AI research, governance discussions, and ensuring AI serves broader societal needs

Speakers

– Yong Guo
– Cynthia Su

Arguments

Universities must take active roles in leading AI to serve communities and accessibility through interdisciplinary research


Academic institutions like Tsinghua University play crucial roles in hosting international dialogue on AI governance


Topics

Development | Sociocultural


Takeaways

Key takeaways

AI has dual potential for sustainable development – offering significant opportunities through efficiency gains, improved data governance, and new business models, while also presenting serious risks including energy consumption, bias, and misinformation


Successful AI governance requires a multi-stakeholder approach involving government, industry, academia, and civil society working together across national boundaries


China is positioning itself as a leader in AI governance through comprehensive policy frameworks, international initiatives, and the concept of ‘intelligent society governance’


Youth perspectives are critical for AI governance as the generation most affected by AI integration, emphasizing the need for AI literacy, critical thinking, and ethical frameworks


International cooperation and capacity building are essential to ensure AI benefits are shared equitably, particularly helping developing countries bridge the digital divide


Real-world applications demonstrate AI’s tangible benefits for sustainability, from water management in Danish cities to economic revitalization in rural China


The challenge lies in maximizing AI’s positive potential while effectively mitigating risks through proper governance, regulation, and education


Resolutions and action items

China will speed up formulation and improvement of AI policies, regulations, application specifications, and ethical standards


Build technical monitoring, risk warning, and emergency response systems for AI


Improve development and management mechanisms for generative AI and establish AI security supervision systems


Continue international exchanges and cooperation in AI development, security, and governance


Advance interdisciplinary education in AI and intelligent society governance to cultivate talent with technical proficiency and social responsibility


Implement the AI Capacity Building Action Plan to help developing countries strengthen AI capabilities


Promote AI literacy improvement for everyone to understand data usage and algorithmic logic


Unresolved issues

How to effectively address the rebound effects and unintended consequences of AI efficiency gains


Specific mechanisms for ensuring AI systems remain ‘safe, reliable, controllable, and equitable’ throughout their lifecycle


Concrete strategies for mitigating AI’s significant energy consumption and environmental impact


Detailed frameworks for international coordination on AI standards and governance across different political and economic systems


Methods for effectively combating deepfakes and AI-generated misinformation while preserving innovation


Specific approaches for eliminating algorithmic bias and discrimination embedded in AI systems


How to balance AI innovation with prudent regulation without stifling technological progress


Suggested compromises

China’s approach of ‘giving equal importance to development and security, innovation and governance’ as a balanced regulatory philosophy


Implementation of ‘inclusive, prudent, and classified and graded supervision of AI’ rather than blanket restrictions


Focus on capacity building and international cooperation to address the digital divide rather than restricting AI development


Emphasis on education and AI literacy as a complement to regulatory approaches


Multi-stakeholder governance involving both public and private sectors rather than government-only or market-only solutions


Thought provoking comments

Have you ever been tricked by the information online or paused before sharing a viral news thinking it might be AI-generated? These are not just hypothetical questions, they are profound signs of a huge shift. Artificial intelligence is not just changing how we access information, but also the very concept of truth and trust.

Speaker

Xin Yi Ding


Reason

This opening immediately reframes the discussion from abstract policy considerations to visceral, personal experiences that resonate with everyone. It introduces the fundamental epistemological crisis that AI creates – challenging our basic ability to distinguish truth from falsehood, which is foundational to all other discussions about AI governance and sustainability.


Impact

This comment shifted the discussion from high-level policy frameworks to concrete, relatable concerns. It established a more urgent tone and introduced the critical theme of trust erosion that underlies many AI governance challenges, setting up the need for the practical solutions discussed later.


When AI can so easily blur the lines between reality and falsehood, such fabrication can cause great harm… When a world leader’s face can be put into any video, when seeing is no longer believing, our executional trust crumbles.

Speaker

Xin Yi Ding


Reason

This insight connects individual-level deception (deepfakes of friends) to systemic threats to democratic governance and public trust. It demonstrates how AI’s impact on sustainability isn’t just about energy consumption or efficiency, but about the fundamental social fabric that enables collective action on sustainability challenges.


Impact

This comment elevated the discussion beyond technical solutions to examine the societal prerequisites for sustainable development. It highlighted how AI threatens the social trust necessary for collective action on sustainability, adding a crucial dimension that the other speakers hadn’t addressed.


However, the other side research shows us of AI impacts on sustainability are also negative… to generate one single image with a large language model uses the same amount of CO2 as charging your mobile phone up to 50%

Speaker

Rony Medaglia


Reason

This provides concrete, quantifiable data that starkly illustrates AI’s environmental costs in terms everyone can understand. It transforms abstract concerns about energy consumption into tangible comparisons, making the sustainability paradox of AI viscerally clear.


Impact

This comment introduced hard data that balanced the optimistic framing from earlier speakers. It forced a reckoning with AI’s direct environmental costs and established the need for the nuanced approach to AI governance that considers both benefits and costs simultaneously.


Another element that cannot be understated is the so-called rebound effects. When we are able to distribute resources more efficiently… this could be unintended consequences. For instance, people use less of public transportation, and then we will have more traffic on the streets.

Speaker

Rony Medaglia


Reason

This introduces the sophisticated concept of rebound effects – how efficiency gains can paradoxically lead to increased resource consumption. It challenges the linear thinking that efficiency automatically equals sustainability and introduces systems thinking to the discussion.


Impact

This comment added crucial complexity to the discussion by showing how AI’s benefits can backfire in unexpected ways. It demonstrated the need for holistic, systems-level thinking in AI governance rather than focusing solely on direct effects, influencing how we must approach AI policy for sustainability.


AI has immense power that’s misleading… our generation bears the dual responsibility to act in both public and private spheres… We young people cannot just settle for symbolic participations. We must also push for ethical frameworks and regulations that keep pace with AI’s rapid growth.

Speaker

Xin Yi Ding


Reason

This rejects tokenistic youth involvement and demands substantive participation in AI governance. It connects generational responsibility with the urgency of AI development, arguing that those who will live longest with AI’s consequences must have real agency in shaping its development.


Impact

This comment challenged traditional power structures in technology governance and established youth not as beneficiaries of adult decision-making, but as essential actors with unique stakes and perspectives. It added urgency to the governance discussion by emphasizing the pace of AI development relative to regulatory responses.


Overall assessment

These key comments fundamentally transformed what could have been a conventional policy discussion into a nuanced examination of AI’s paradoxical relationship with sustainability. Xin Yi Ding’s contributions introduced existential questions about truth and trust that reframed sustainability as not just an environmental challenge but a social and epistemological one. Rony Medaglia’s data-driven insights provided the empirical grounding that balanced optimistic policy statements with hard realities about AI’s environmental costs and systemic complexities. Together, these comments created a more sophisticated understanding of AI governance that moves beyond simple benefit-risk calculations to consider feedback loops, unintended consequences, and the social prerequisites for sustainable development. The discussion evolved from presenting AI as a tool for sustainability to examining AI as a force that fundamentally challenges our capacity for collective action on sustainability challenges.


Follow-up questions

How can we effectively measure and mitigate the unintended consequences of AI implementation, particularly rebound effects in sustainability initiatives?

Speaker

Rony Medaglia


Explanation

Professor Medaglia highlighted rebound effects as a significant concern, using the example of car-sharing apps potentially reducing public transportation use and increasing traffic, but didn’t provide solutions for addressing these unintended consequences.


What specific mechanisms and frameworks are needed to ensure AI literacy education reaches all populations, especially in developing countries?

Speaker

Xin Yi Ding


Explanation

Ms. Ding emphasized the need for AI literacy for everyone but didn’t elaborate on the practical implementation strategies or how to overcome barriers to access, particularly in underserved communities.


How can we quantify and balance the environmental costs of AI (energy consumption, water usage) against its sustainability benefits?

Speaker

Rony Medaglia


Explanation

Professor Medaglia presented concerning statistics about AI’s environmental impact (CO2 emissions, water usage) but didn’t address how to create frameworks for measuring net environmental impact when AI is used for sustainability purposes.


What are the most effective methods for detecting and preventing AI-generated misinformation and deepfakes at scale?

Speaker

Xin Yi Ding


Explanation

Ms. Ding demonstrated the ease of creating convincing deepfakes and highlighted the crisis of public trust, but didn’t discuss technical or policy solutions for detection and prevention of such content.


How can international cooperation frameworks be strengthened to ensure equitable AI development and governance across different political and economic systems?

Speaker

Yong Guo and Xuanyun You


Explanation

Both speakers emphasized the importance of international cooperation and China’s initiatives, but didn’t address the practical challenges of implementing global governance frameworks across diverse political systems and economic development levels.


What specific metrics and evaluation methods should be used to assess the effectiveness of AI capacity building initiatives in developing countries?

Speaker

Xuanyun You


Explanation

Ms. You mentioned China’s AI Capacity Building Action Plan and efforts to help the Global South, but didn’t specify how success would be measured or what indicators would demonstrate effective capacity building.


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

WS #344 Multistakeholder Perspectives WSis+20 the Technical Layer

WS #344 Multistakeholder Perspectives WSis+20 the Technical Layer

Session at a glance

Summary

This discussion focused on technical community perspectives on WSIS Plus 20, examining the governance of the internet’s technical architecture and the role of multi-stakeholder models in maintaining a secure, stable, and open internet. The panel brought together experts from various internet governance organizations including ICANN, APNIC, Internet Society, and civil society representatives to discuss the complex technical layer underpinning the internet.


The panelists explained that the internet’s technical architecture consists of federated entities working together, including standards and protocols (managed through organizations like IETF), the domain name system (coordinated by ICANN), and internet number resources (managed by Regional Internet Registries like APNIC). Each component operates through open, bottom-up, consensus-based processes that allow diverse stakeholders to participate in governance decisions. The speakers emphasized that the multi-stakeholder model’s strength lies in bringing together engineers, companies, government representatives, and civil society to shape internet governance collaboratively.


Several challenges were identified, including rising geopolitical tensions driving digital sovereignty initiatives, cybersecurity threats, and regulatory fragmentation that could lead to internet fragmentation. The panelists noted particular concerns about the governance crisis at AFRINIC, highlighting vulnerabilities in the technical governance system. Regarding the Internet Governance Forum (IGF), there was debate about its fitness for purpose in addressing emerging digital governance issues beyond traditional internet infrastructure.


The discussion concluded with calls for maintaining the IGF’s multi-stakeholder approach while addressing concerns about focus, resource allocation, and the balance between technical internet governance and broader digital cooperation issues. The panelists emphasized the critical importance of preserving the internet’s global, open architecture amid increasing pressures for centralized control.


Keypoints

## Major Discussion Points:


– **Technical Architecture and Governance of the Internet**: The panelists provided detailed explanations of different components of the internet’s technical layer, including standards and protocols (IETF), domain name system (DNS/ICANN), IP address allocation (RIRs like APNIC), and how these federated entities work together through multi-stakeholder governance models.


– **Multi-stakeholder Model Benefits and Challenges**: Extensive discussion on how the multi-stakeholder approach enables diverse actors (technical community, governments, civil society, private sector) to collaborate as peers, with examples like HTTPS protocol development and public interest technology groups in IETF, while acknowledging barriers like resource constraints and knowledge asymmetries.


– **Internet Fragmentation Risks and Threats**: Panelists highlighted growing concerns about technical and policy fragmentation due to geopolitical tensions, digital sovereignty assertions, cybersecurity threats, and regulatory decisions made without understanding technical implications, emphasizing the urgency of preserving the internet’s global, open architecture.


– **Role and Future of the Internet Governance Forum (IGF)**: Discussion centered on whether IGF remains fit for purpose amid evolving digital governance challenges, with debates about the distinction between “governance of the internet” (technical layer) versus “governance on the internet” (applications), and calls for better focus, streamlined processes, and sustainable funding.


– **WSIS+20 Review and Technical Community Perspectives**: The conversation addressed how the technical community should engage in the World Summit on the Information Society review process, emphasizing the importance of evidence-based, technically-informed policy decisions while maintaining the successful multi-stakeholder governance model.


## Overall Purpose:


The discussion aimed to provide technical community perspectives on internet governance ahead of the WSIS+20 review, explaining the complex technical architecture underlying the internet, demonstrating how multi-stakeholder governance works in practice, and articulating the importance of preserving this model against emerging threats of fragmentation and centralized control.


## Overall Tone:


The tone was professional and collaborative throughout, with panelists building on each other’s points constructively. While generally optimistic about the multi-stakeholder model’s successes, there was an underlying sense of urgency and concern about emerging threats. The discussion became more pointed when addressing specific challenges like the AFRINIC governance crisis and calls for IGF reform, but remained respectful and solution-oriented. Panelists demonstrated mutual respect and shared commitment to preserving an open, secure, and globally interoperable internet.


Speakers

**Speakers from the provided list:**


– **Ajith Francis** – Session moderator/chair


– **Joyce Chen** – Senior Advisor for Strategic Engagement at APNIC (Asia-Pacific Network Information Centre)


– **Israel Rosas** – Director for Partnerships and Internet Development at Internet Society


– **Chris Chapman** – Member of the ICANN board (Internet Corporation for Assigned Names and Numbers), Deputy Chair, former chairman of Australian Communications and Media Authority


– **Ellie McDonald** – Policy and Advocacy Lead at Global Partners Digital (civil society and human rights organisation)


– **Paulos Nyirenda** – Manager of the country code top-level domain for Malawi (.mw), joining online


– **Frodo Sorensen** – Senior Advisor for Internet Governance at the Norwegian Communications Authority


– **Alicia Sharif** – Policy and Public Affairs Lead at Nominet, session co-moderator


– **Audience** – Various audience members asking questions


**Additional speakers:**


– **Mia Kuehlewin** – From the Internet Architecture Board in the IETF (Internet Engineering Task Force)


– **Nicholas** – Audience member who submitted an online question about internet security extensions


Full session report

# Technical Community Perspectives on WSIS Plus 20: Internet Governance and Multi-Stakeholder Models


## Introduction and Context


This panel discussion, moderated by Ajith Francis and co-moderated by Alicia Sharif from Nominet, brought together experts from key internet governance organizations to examine technical community perspectives on the World Summit on the Information Society (WSIS) Plus 20 review process. Francis opened by clarifying the session’s focus on “governance of the internet” – the technical infrastructure and coordination mechanisms – rather than “governance on the internet” such as content regulation.


The panel featured representatives from ICANN, APNIC, the Internet Society, civil society groups, and government perspectives, discussing how multi-stakeholder governance models maintain internet security, stability, and openness amid emerging challenges.


## The Internet’s Technical Architecture and Governance


### Standards Development and Open Participation


Israel Rosas from the Internet Society explained how internet standards are developed through open, inclusive processes. He emphasized that “we can all be the IETF in some way,” describing how any organization with technical expertise can participate in Internet Engineering Task Force discussions through meetings, online participation, and mailing lists. This demonstrates how “the open model of voluntary adoption of standardisation can work” through diverse stakeholder contributions rather than mandated compliance.


### Domain Name System Coordination


Paulos Nyirenda, managing Malawi’s country code top-level domain (.mw), explained the hierarchical structure of the Domain Name System that converts human-readable addresses to IP addresses. Chris Chapman, an ICANN board member, elaborated on ICANN’s role in coordinating global internet unique identifiers through multi-stakeholder oversight, describing it as “an impressive approach” that, while imperfect, serves as the foundation for internet coordination.


### Regional Internet Registry Challenges


Joyce Chen from APNIC detailed how Regional Internet Registries manage IP address allocation through community-based, consensus-driven policy development. However, she highlighted current challenges, particularly the governance crisis at AFRINIC. Nyirenda confirmed the severity of this situation, noting that AFRINIC had “annulled board elections just a few hours ago,” demonstrating how governance failures in technical infrastructure can directly affect internet stability for entire regions.


## Multi-Stakeholder Governance Models


### Collaborative Decision-Making


The panelists demonstrated support for multi-stakeholder governance across different sectors. Ellie McDonald from Global Partners Digital highlighted how “multi-stakeholder spaces allow engineers, companies, governments and civil society to collaborate,” citing examples like IETF public interest technology groups that enable “human rights by design” in technical standards.


Frodo Sorensen from the Norwegian Communications Authority provided governmental endorsement, stating that “Norway strongly supports multi-stakeholder approach and ICANN/IETF as core institutions,” emphasizing that technical community involvement prevents destabilization.


### The Role of Productive Disagreement


Rosas offered a unique perspective on disagreement in multi-stakeholder processes, arguing that “disagreement is good is positive because different stakeholders may have different views different interests, but if we pursue the same objective… it’s through the discussion that we can reach consensus.” He described multi-stakeholder models as providing “spaces of influence and translation where different stakeholders can reach consensus through open disagreement and discussion.”


## Current Threats and Challenges


### Geopolitical Pressures


Chen identified 2025 as a “critical inflection point with geopolitical tensions driving digital sovereignty proposals that risk fragmenting internet into isolated silos.” McDonald expressed concern that the “WSIS+20 process poses risks of more state-centric approaches that could exclude multi-stakeholder accountability.”


### Regulatory Fragmentation


Chapman noted the challenge of “rising regulatory pressures from multiple digital media regulators worldwide making decisions without understanding technical implications.” Sorensen reinforced this concern, arguing that “internet fragmentation risks emerge when stakeholders are excluded from governance discussions, particularly if technical community involvement is insufficient.”


### Technical Security Implementation


An online participant, Nicholas, raised questions about integrating security measures like RPKI and DNSSEC into policy processes. Rosas argued for maintaining separation between high-level policy discussions and technical implementation, suggesting that “WSIS+20 is a high-level process where for instance we can agree that we want a more secure more trusted internet. How? Well that’s for the community to work in specific spaces.”


## Internet Governance Forum: Perspectives on Reform


### Defending Current Flexibility


Rosas provided a strong defense of the IGF’s current approach, arguing it “remains fit for purpose with valid working definition covering emerging technologies, with community addressing AI and other issues before formal UN processes.” He emphasized the forum’s adaptability as a strength.


Sorensen highlighted the IGF’s significance as a “successful prototype for multi-stakeholder approach in UN system, building trust and legitimacy by involving affected parties in decision-making.”


### Calls for Greater Focus


Chen presented a more critical assessment, noting that while “technical community contributes 30% of funding,” there is “decreasing space for technical topics in discussions.” She argued that the IGF “needs greater focus and streamlining,” explaining: “We are very good at picking up things, but we don’t know how to put them down, you know, to make space for other pressing issues. We’re trying to juggle everything and we’re trying to please everyone. And to me, this is a disservice to everybody because it’s impossible to dive deeply into particular topics.”


Chapman offered a middle position, acknowledging that the “IGF provides unique global space where stakeholders meet as peers, deserving continued support” while recognizing the need for improvements.


## Technical Community Contributions and Representation


Chen’s revelation that technical organizations comprise 30% of the IGF trust fund while “seeing fewer technical topics being discussed at the IGF” highlighted a concerning disconnect between financial contribution and topical representation. This substantial investment demonstrates the technical community’s commitment to multi-stakeholder governance, but raises questions about the forum’s direction and sustainability if major funders feel inadequately represented.


## Regional Perspectives: The AFRINIC Crisis


The governance crisis at AFRINIC, Africa’s internet registry, served as a real-world example of the consequences when technical governance systems face institutional challenges. Nyirenda’s report of annulled board elections demonstrated how governance failures directly affect internet infrastructure stability for entire regions.


Chen acknowledged the severity while highlighting community-driven responses, noting the “urgent need for community-driven solutions and review of fundamental governance documents.” This illustrates how the multi-stakeholder model can adapt to address serious challenges through community action rather than external intervention.


## WSIS+20 and Coordination Challenges


Panelists expressed concern about potential changes to internet governance frameworks through WSIS+20. The challenge lies in engaging constructively while protecting multi-stakeholder principles that have enabled internet development.


Sorensen raised important questions about coordination between WSIS+20 and other initiatives like the Global Digital Compact, asking “how can we better connect WSIS and the Global Digital Compact to avoid duplicated and fragmented efforts?” This reflects concerns about proliferating digital governance processes creating parallel, potentially conflicting frameworks.


## Key Technical Community Perspectives


The discussion revealed several core technical community positions:


– **Separation of Policy and Implementation**: High-level processes can establish principles, but technical implementation should remain within community-driven processes with appropriate expertise


– **Voluntary Adoption**: Internet standards work through voluntary adoption rather than mandated compliance


– **Community-Driven Solutions**: Technical governance challenges are best addressed through community processes rather than external intervention


– **Multi-Stakeholder Participation**: Technical expertise must remain central to internet governance, but within inclusive processes involving all stakeholders


## Conclusion


The discussion highlighted both the strengths of current internet governance arrangements and emerging pressures that could affect their future effectiveness. While panelists supported multi-stakeholder governance principles, they identified significant challenges including geopolitical tensions, regulatory fragmentation, and institutional governance crises.


The disagreement about IGF reform within the technical community itself suggests recognition that even successful governance models require continuous evolution. The path forward appears to require maintaining core multi-stakeholder principles while addressing legitimate concerns about effectiveness, focus, and representation.


The urgency expressed about 2025 as a critical inflection point indicates these challenges require immediate attention. The internet governance community faces the task of demonstrating that multi-stakeholder approaches can adapt and improve while preserving the fundamental characteristics that have enabled the internet’s global success.


Session transcript

Ajith Francis: Good afternoon and thank you for joining us. Hello and good afternoon, everybody. Welcome to yet another session on WSIS Plus 20. However, we hopefully today are going to talk about the technical community perspectives on WSIS Plus 20, but also what are the challenges with regards to governance when it comes to the technical architecture. The purpose of today’s session really is to provide a framing and a brief understanding on the technical layer of the Internet and the technical architecture, its governance, but also the role of the multi-stakeholder model. We also want to spend a little bit of time to address the role of the IGF and maybe try to articulate a positive vision for the IGF as well. We often take the technical underpinning of the Internet for granted, and that’s with good reason, because the end user doesn’t necessarily need to know the actually ins and outs of how the Internet works as they navigate the Internet. But when it comes to the actual questions around policy and governance, that understanding of what is the actual technical layer and the different components of it are extremely critical. The technical architecture underpinning the Internet is definitely not a monolith, but it’s actually a set of federated entities, operators, and actors that work together to actually keep the Internet running and accessible for all of us. So to help me understand some of these perspectives on what is the technical layer but also what are the governance parameters as well as challenges facing the technical layer at the moment, I’m joined by a stellar panel with diverse sets of expertise and perspectives. So I’m joined today and I’m gonna go alphabetically, I’m joined by Chris Chapman who’s a member of the ICANN board which is the Internet Corporation for Assigned Names and Numbers. I have Joyce Chen who is the Senior Advisor for Strategic Engagement at APNIC. Also joining us is Ellie McDonald, Policy and Advocacy Lead at Global Partners Digital. We also have our colleague Paulos Nyirenda joining us online. He is the manager of the country code top-level domain for the country of Malawi.mw. We’re also joined by Israel Rosas who is the Director for Partnerships and Internet Development at Internet Society. And finally and but not definitely the least, we also have Frodo Sorensen who is the Internet Advisor, the Senior Advisor for Internet Governance at the Norwegian Communications Authority. Alicia Sharif from Nominet who’s also the Policy and Public Affairs Lead is helping us also moderate the session and has also helped put this session together. So thank you all very much for joining today’s session. I really appreciate you taking the time out to to speak to us today. We want to keep this session fairly interactive and a bit of a discussion format. We will have two sort of blocks for Q&A for short brief Q&A in between the session. So we there are for those of you joining us in person we have two mics on the on either side of the of the room so please feel free to use that. If you’re joining us virtually please feel free to put your comments in the chat box or in the Q&A part and online moderator Alicia will help us sort of navigate that. So I’ve provided a very generic and a very broad overview of what is the technical layer But I really want to turn to my fellow panelists today to really sort of dig into some of the nuances of the technical layer And I’m gonna start with with Israel if you wouldn’t mind giving us your perspective Particularly on the standards and the protocol side that make up the the internet Particularly given your role at the Internet Society and your interactions with the IETF and Internet Architecture Board So if you could frame out what is sort of the role of the the standards and the protocols community


Israel Rosas: Really Relevant conversation nowadays that we’ve seen many Organizations from the technical community of the internet participating here And Just for and thank you for framing the question that way because the Internet Society is a nonprofit organization founded in 1992 to support the community of People working on the internet Later at that time then we’ve seen different parts of the of the internet being developed in different ways and One particular part is the Internet Engineering Task Force the Internet Architecture Board I cannot say that I speak on behalf of them We’ve seen some of the members of the IAB are here at the IETF. So if you would like to get in touch with them They are around the halls Here’s I can see Warren here and there are some other members the chair of the IETF is also here But you know what I would like to highlight from from that part of the technical community It’s not necessarily the topics but how the that part of the community addresses the multi-stakeholder open conversations because something that has been catching my attention these days that we’ll be having conversations on the wisp plus any review and like this kind of high-level discussions is that in some occasions some actors perceive that the IETF or that part of the community should solve something when a problem is identified for instance but seeing it as a separate entity as if you could outsource the work to that entity for that entity to solve it when in reality what we’re saying is that we can all be the IETF in some way for instance when discussing this with government officials we have at the Internet Society we have a policy makers program where we invite government officials policy makers to attend the IETF meetings to see how the standards are developed even these ways of measuring consensus with the humming in the rooms that is something that really surprises some of the policy makers in our programs like but and how do you vote now let me explain it’s like rough consensus this kind of discussions it’s important for them to to understand that any organization any team having technical people within their organizations they can join these conversations at the IETF and there are mechanisms to attend the meetings in person online to participate mailing list and that’s why we have that program at the Internet Society for the policy makers for them to understand that they can be part of the solution instead of asking the body to to solve particular issues so I would like to to lead it at this point to keep the conversation going but I think that’s the most important part like these organizations are showing how the open model of voluntary adoption of standardization can work and then that how can be implemented in some other spaces in some other ways


Ajith Francis: Thanks Israel. So maybe I’m going to turn now to our online panelist, Paulos. If Paulos, would you mind giving us your perspective on sort of the name side of the Internet technical layer?


Paulos Nyirenda: Thank you, Jay. Can you hear me? Yeah, we can hear you. Thank you. So, yes, the technical layer of the Internet is responsible for things like routing data packets across the network. And it’s used by people and people normally use easy to remember addresses for accessing services on the Internet, like vast amounts of information. They do online banking, they do shopping, they enjoy various forms of Internet. They do learning. And all these are done using human interfaces with easy to remember addresses for the resources. However, as we all know, computers use numbers for sending the data across the Internet. And so there is need for an intermediary. The domain name system converts the man readable and easy to remember addresses to IP addresses, which the computers use for sending the traffic across the Internet. I think my task here is to look at the naming side and the DNS in particular. And the DNS has a hierarchy. So there is the root and the top. are managed by ICANN and with several operators. They generate top level domains like .com, country called top level domains like .mw for Malawi or .no for Norway. And these are supported by DNS servers, are operated by many registry and internet service providers across the world. So as we see the technical layer of the internet involves many players and it attracts a multi-stakeholder approach to its governance. But as governments manage people who use the internet and these are also managed by civil society, we see that multi-stakeholder coordination is really necessary for internet governance and the IGF has assisted to play a role in presenting a platform for this. So internet names, organization or management or governance, we can think of the ICANN as being at the top with its individual supporting organizations, the GNSO, the CCNSO, GNSO for generic top level domains, CCNSO for country codes and many countries have taken a particularly keen interest on their country code top level domains which featured highly in WSIS and features highly in IGF. discussions. We have governments taking part under GAC, the Government Advisory Committee, and end-users under ALAC as a constituency of ICANN. And there are many other ICANN constituencies that support or present platforms for many stakeholders to come in. Around the origins of the Internet, the US government used to play a critical role in approving things like registrations for domain names, updates at the top level, and operations at the root level. That was the so-called US oversight role. That role ended in 2016 with transition of IANA functions under ICANN to the Internet community in the current multistakeholder model of Internet governance. So right now, oversight at ICANN is now carried out using supporting organizations and advisory committees that I have mentioned earlier. So the IANA function that plays a critical role at the top level is carried out on a day-to-day basis by ICANN and the BTI. Maybe I should stop there for now. Thank you very much.


Ajith Francis: Thanks, Paulos. Maybe, Joyce, if you would mind sharing with us the perspective of the numbers community?


Joyce Chen: Thanks very much, Ajith, and good afternoon to all of you, and good afternoon, good morning, good evening to those of you who are online and joining us from all over the world. Thanks for having me. So the way that I would like to approach this question is to sort of describe the work of APNIC and as well our policy development process, which I think Many people in the room are quite familiar with how ICANN policy development works, but may not be as familiar as how the regional internet registries approach our policy development. So just very quickly, for those of you who might be new to this world, and may not be as familiar with the terms, those of you who already are very familiar, this is the part where you can switch off. APINIC is the regional internet registry, I’ve mentioned that before, which is also the term RIR, for the Asia-Pacific. It is a not-for-profit, all the internet organizations are not-for-profit. We are also membership-based, and we play a vital role in the technical coordination of the internet. We are one of five regional internet registries around the world that does the work of managing and allocating internet number resources, such as IP addresses and autonomous system numbers, or ASNs. And internet number resources are fundamental resources, because they allow devices and networks to communicate across the global internet. So let’s talk a bit about APINIC’s policy development. What it refers to are how the rules and guidelines are created for managing internet number resources within the Asia-Pacific region. So what that means is we have community-based policy development for the region, by the region. And this is the same practice across the various regions in the world. These policies determine who gets IP resources, how they are used, and how they are distributed fairly and efficiently. Our policy development process is open to all, bottom-up, consensus-based, transparent, and documented. And what it means is anyone, not just APINIC members, can propose a policy change or participate in discussions. By upholding these values in our policy development, we ensure fairness in IP address distribution. Distribution, because it is based on community’s needs across diverse stakeholders in the region. Our aim is to protect the integrity of the global internet by maintaining consistent technical standards.


Ajith Francis: Thanks, Joyce. I think to round this out, maybe Chris, if you could give us your perspective on the role of ICANN, which also Paulos has alluded to as well. So over to you, Chris.


Chris Chapman: Well, Ajit, firstly, thank you for having me on the panel. I’ve never been described as part of a stellar panel before, so there’s always a first. And talking about first, this is my first IGF. I’ve been on the ICANN board for nearly three years. Currently the deputy chair. I note that in the audience in the front row is our current chair, Tripti Singha, and Becky Burr. So I feel like I’m here having an annual performance review. But in all seriousness, I joined the board having a longstanding, sorry, I’ll just go back one step. I’m a lapsed lawyer by training. And it’s a bit like being a lapsed Catholic, which I also am. But throughout my career, whether it was ultimately running the Seven Network Broadcasting in Australia or building the Olympic Stadium in Sydney, or running the Optus Broadband Telecommunications Initiative, or being for 10 years the inaugural chairman of the Australian Communications and Media Authority, which was probably the first genuinely converged regulator broadcasting telecommunication spectrum. and so-called online services. Or indeed from 2016 to 2023, being the president of the International Institute of Communications, which is 56 years old. Based in London, it’s a hosting platform stroke think tank for media and communications. Started in public broadcasting, broadcasting telecommunications, and now has, like all things, evolved into the digital ecosystem. I have always thoroughly enjoyed working within the technical community, although I would not in any way, shape, or form say I’m technically savvy. But I emphasize all that because on each and every occasion, the technical side of those businesses is what has ensured long-term prosperity, long-term stability. It has been the sanity check, the reality check. They are the foundational pillar of any system that you want to have that is effective, efficient, and ultimately enduring. So I joined ICANN with new curiosity about what that technical community meant in the internet space and the unique identifiers. I joined it with an absolute fascination for the multi-stakeholder model, disillusioned as I am about the multilateral institutions and the great challenges facing society globally. And I have become its greatest advocate. Late adopters are often the most passionate. So what I came to realize over the last several years ICANN is not the be-all and end-all. It is merely the senior player within the unique identifier space, the domain name system. that is the ultimate coordinator, not only within a very engaged, broadly global-based community, but also with a number of collaboration partners in the iPlayers. So the model is much more deep, nuanced, respectful, intelligent, broadly-based, bottom-up than I could have ever contemplated. And I’ve enjoyed every moment of being within the community and being educated by the community and learning from the community. So having said that, the short point, the short question you asked me, and I’ve been listening to Paulos and Joyce, and if I could synthesize those two together, I’d have the perfect description of what ICANN does. But ICANN’s mission is essentially to coordinate the global internet system of unique identifiers, ensuring a stable, secure and unified online experience. In practice, this means ensuring that there is one internet, one unique hierarchical namespace based on a unique route. If you were to go a little deeper, you’d break it down into three buckets. There’s the technical aspects, there’s the ensuring the stable and secure operation of the internet’s unique identifier systems by coordinating the allocation and assignment of names in the root zone of the domain name system. There’s facilitating the coordination, operation and evolution of the DNS root name server system. There’s coordinating the allocation and assignment at the topmost level of internet protocol numbers and autonomous system numbers. And then you’ve got the whole policy side of the multi-stakeholder model, which is not perfect, but it’s a very impressive model. And it could be more efficient, it could be more.


Ajith Francis: Thanks, Chris. I think these interventions sort of highlight how complex the technical layer is and how there are different parts of it. And given that there are so many different entities that are involved in the provisioning of the internet, I think the question of collective governance and coordination becomes extremely important. Coordination both at the level of operational matters but also at the level of policy. So I’m going to turn now to Ellie and Frodo to sort of giving us your thoughts on how the internet governance ecosystem has benefited a secure, stable and open internet rooted in human rights. But also what does the multi-stakeholder model mean to you? Maybe we can start with Ellie and then go to Frodo.


Ellie McDonald: Thank you, Ajif. I’m also not a member of the technical community, so perhaps I can just briefly introduce why I think I’m here, who I am. So I work for Global Partners Digital, we’re a civil society and human rights organisation and we work to ensure that the frameworks, norms and standards that underpin digital technologies are rights respecting and also developed in an inclusive way. So as part of this work we’ve engaged in multi-stakeholder venues like the IETF, earlier mentioned, multilateral ones like the ITU and of course the IGF, this multi-stakeholder discursive beautiful but flawed forum that we find ourselves in now. So maybe I can build a bit on what the other panellists have said and particularly ISRA. I think what’s particularly special about this ecosystem is that it does provide these unique spaces where you could have on one hand an engineer, a company, government representative and a human rights defender to come together and to shape all of the various things that we’ve just discussed. I think there are clear examples of how this can work in practice. I’m sure that the panelists can furnish us with a lot. I’ll just give a couple. One that comes to mind is the public interest technology group in the IETF. I think in being established, this offered a really safe space to be able to exchange information for different layers of knowledge to come together. So the technical, the advocacy, the normative, the human impacts to be brought together. It permitted, has permitted, public interest actors to engage at the earliest stage of development and that’s really critical to ensure human rights respect by design. Another example that comes to mind is the evolution and the establishment of the HTTPS protocol. I think this is a really beautiful example of how that can work in practice. Both identifying problems, issues that needed to be addressed, both technical ones but also the resulting issues in terms of surveillance. Having the appropriate technical solution and then being able to test it with end users and other experts who could say whether it would be fit for purpose. I would like to add that I don’t think, I don’t think anyone here would say that that means that these spaces are functioning perfectly. They’re resource intensive and that makes it particularly hard for under-resourced communities to engage. There are naturally asymmetries of knowledge that make it difficult too. Sometimes issues of access and then of course the challenge of translating between different lexicons. The IGF, which I think we’ll come to shortly, I think is a really special place where that translation work happens. and all of that to say despite there being these kind of issues of reforms that need to be made and doesn’t mean that we should abandon these spaces they’re tremendously useful and helpful and yeah I’m back and pause there.


Ajith Francis: Thanks Ellie. Frodo?


Frodo Sorensen: Thank you Hadjet and thanks for a very interesting exchange of views so far. Norway is a strong supporter of the multi-stakeholder approach for internet governance and digital cooperation and furthermore Norway is also a strong supporter of the ICANN as the core institution for technical internet governance and the IETF as a standardization body for internet technology. It is paramount that the internet remains open free resilient and interoperable. This should be the core of the discussion of the ongoing discussion about the multi-stakeholder model and the future of the IETF. How can the model and the forum be designed to support an open and secure internet? It is also worth noting that the broader topic of digital cooperation is closely linked to the open and secure internet since this underpins the running of applications and sharing of information which facilitates human interaction, public discourse and economic activity. So why is the governance at the technical layer so special? Governance of the internet infrastructure is based on the running of technical equipment that requires insight of the technical community to ensure that the internet works stable, robust and interoperable. The internet may become fragmented if some of the stakeholders are excluded from the internet governance discussion and in case the technical community is not sufficiently involved the administration of internet resources and thereby the internet itself may destabilize. and furthermore the value of the internet may become weakened since it restricts the usability of this global network. In particular the underlying technical layer of the internet is fundamental for the functioning of the internet. This ensures interoperability of the core functions of the internet and furthermore without an open and secure and interoperable internet infrastructure the applications built on top of it may become restricted. Ultimately without an open and secure internet it may threaten human rights, in particular freedom of speech and freedom of association and furthermore the value of the internet to support democratic processes globally may become undermined if the internet communication is restricted. In summary the expertise from the technical community is instrumental to the running of the internet. Involvement of the technical community for the global internet governance is important for an informed discussion based on technical realities. This is a prerequisite for maintaining an open and secure internet.


Ajith Francis: Thanks Frodo. If anybody wants to pose questions please feel free to use the mic and I’ll come to you very shortly. I think what Frodo points to is very interesting because there is this often this risk that’s often sort of thrown out which is this question of internet fragmentation and fragmentation at two layers both at the technical fragmentation but also policy and regulatory fragmentation. So with that sort of broad sort of overarching perspective I open the floor to any of you to sort of share your perspectives on why are we having this conversation today, what is the role that the technical community and civil society and government stakeholders sort of play in this conversation. So, I go to the questions right after this. Joyce, do you want to go first?


Joyce Chen: Sure. So, the question was, why are we having this conversation now? I think it speaks to the urgency of protecting the Internet’s global open nature amid rising geopolitical, technical and regulatory pressures. 2025, this year, is an inflection point. We are reviewing WSIS in an environment of growing geopolitical tensions, driving nations to assert digital sovereignty, leading to proposals for national firewalls, data localization laws, even alternative or alternate DNS systems, developments that risk breaking the Internet into isolated silos. The rise of cybersecurity threats, misinformation, abuse of platforms, they’re all prompting calls for more centralized control. There are increasingly hostile actors that weaponize the use of the Internet to disrupt lives and push political agendas. We have yet to reach global agreement on governance frameworks for new and emerging technologies like AI. All this to say that there are real risks, that if we don’t actively preserve the Internet’s open and global architecture, we risk losing it. Internet organizations and coalitions such as the TCCM, the Technical Community Coalition on Multi-Stakeholderism, I love that I said that all in one go, are ensuring that governance decisions are informed by technical realities. The technical community brings evidence-based operational expertise that is essential to preserving a global, secure and resilient Internet.


Ajith Francis: Israel?


Israel Rosas: Yeah, if I may, the thing is that I kept thinking in different things that my fellow colleagues have shared, and I keep stealing ideas from Joyce, from Ellie McDonald all the time, I have to say, because in a previous meeting Joyce mentioned that the IEF is a space of influence where stakeholders influence each other. And then Ellie mentioned that this is a translation space and I think that both are true and complementary because For instance what we are doing with the policy makers program for the ITF It is not that we are asking the policy makers to become Technologists and to participate in in the deep roots of the technical operation. We want them to understand the first of all how the Policies are developed how the group works how the the ecosystem works in that part But at the other at the same time we want the same From the technologists to understand that the policy makers have valid concerns and at the end of the day Perhaps to your question. We are having this kind of conversations and we are facing threats of fragmentation Because I don’t know why I’ve seen a trend to avoid disagreement and in fact disagreement is good is positive because Different stakeholders may have different views different interests, but if we pursue the same objective And if we start talking breaching our Disagreements is through that consensus discussion It’s through the discussion that we can reach consensus and that discussion of course is going to be longer It’s going to take longer. It’s going to be probably more complex It’s going to need more translation more influence like in these spaces in other meetings in other spaces But at the end of the day the result is going to be more resilient without fewer non-intended consequences and Securing an interoperable Internet a single Internet a global Internet So I think that that’s why it’s important like to take reference of different ways of implementation of the multi-stakeholder model And and I think this kind of opportunities are really good for that. Thanks.


Ajith Francis: Thanks. I’m gonna go to Ellie Do you wanna jump in and then Chris and then we go to the question on the floor?


Ellie McDonald: excellent Sorry, I keep forgetting to turn myself down and Yeah, maybe I can speak a bit to the governance aspect of the question. I think and This panel is about WSIS I think one of the reasons we’re having this discussion is because that is a space where this will be stress tested and in fact come under quite severe threat. I think probably we all see the risk of how this could play out in the months ahead and not only with respect to this process but I think Joyce also mentioned discussions about the governance of emerging technologies and I think in discussions of the AI mechanisms, that kind of final eleventh hour negotiations, we see risks of more state-centric process to the appointment of experts, exclusion of military applications from the scope of the assessments that they’ll do, lack of genuine multi-stakeholder accountability. I know many of you in the room are working to mitigate those risks but I suppose I really loved Isra’s positive take but to give more of the risk take I think in the midst of everything else that’s happening, the conflict, the challenges at the moment, it’s really important that we also keep close attention to these processes and that we don’t lose anything in the course of the next year and that we’re not next year sitting without this forum that we’ve already kind of praised in so many ways.


Ajith Francis: Thanks. Chris?


Chris Chapman: Just to add to that and synthesising some of these thoughts, I often discuss the prospect that with 200 countries say in the world, sovereign powers, at last count I got to 420 digital media regulators around the world. That was the last count, that was about three years ago when I laboriously went through it. Now It’s, we’re on, we have seen whack-a-mole instances over the last five to ten years where legislatures, regulators just make arbitrary decisions completely ignorant of the implications of what they do with the unintended consequences when they enter into decision making and have unfortunate impact on the network and the operations layer within which we operate. And I think this is just going to, my apprehension about that will increase. So whereas we think we’re travelling okay, I’m quite positive about the outcome over the next few months. I feel a very good vibe right throughout the IGF from what I’m hearing and seeing. But collectively, our work is just starting and we’re going to have to double down. We’re going to have to invest, reinvest in mutual trust through our collaborations because we ain’t seen nothing yet.


Ajith Francis: Thanks. I’m going to take the question from the floor and then I come to you, Paulos. Please go ahead and make your question.


Audience: Hello, this is Mia Kuehlewin from the Internet Architecture Board in the IETF. And I don’t have a question, I want to just comment, I want to emphasise how important it is for us to get this broad input from all kinds of people because it actually makes our standards and makes the internet better. It makes it possible to take all requirements into the development process as much as we can so we don’t get surprised later on. But it’s also very essential for getting the protocols deployed because we are not like a government that can enforce anything. It’s like voluntary employment. And only if you consider everything, people will actually use it at the end and it will be a success. So this is very essential for us. And I also want to underline the openness of these fora because, as was said, there are some barriers, there are different languages, both of these spaces.


Ajith Francis: Thank you very much for taking the floor and I’m sorry to have kept you waiting. Paulos?


Paulos Nyirenda: Thank you, moderator. For us in Africa, maybe I should talk a little bit about how important it is now to be talking about governance of the technical layer. As you know, our registry in Africa for IP addresses, AFRINIC, is having tremendous governance-related problems at the moment that have resulted in, for example, annulling board elections just a few hours ago. So management and governance of the technical layer is particularly pertinent to our region as our internet registry, similar to APNIC for Asia, is going through these problems. We would appreciate raising the issues about multi-stakeholder, bottom-up mode of governance, because this is causing us a significant amount of trouble at the moment, at least technical layer. Thank you.


Ajith Francis: Thanks, Paulos. That’s a fair perspective. Yeah, go ahead, Joyce.


Joyce Chen: Thanks. This is Joyce. And thanks very much, Paulos, for bringing that up. It is a very critical issue and It really requires urgent attention from the internet community, not just the internet technical community, the community at large. And what I would like to applaud is that because this crisis has come to our doorstep, we have collectively decided that there needs to be a lot of renewal of the processes and policies that we have taken for granted since the beginning. And so if you look at the evolution of the internet, and especially at the technical layer, it’s always been on the best effort basis. It’s always been very voluntary basis. We’re all just trying our best to keep the lights on, essentially. A lot of the work took many years to professionalize, and it has taken a long time for the community to sort of refine the way that we do things, refine the way that we do governance. And so I might point you to this process that’s going on now, which is the review of the RIR governance document. This is a global effort. It is being run by the ICANN ASO, the Address Supporting Organization. And it really looks at the process of establishing and de-recognizing RIRs. So this is a fundamental document of RIR governance, and I highly encourage you to look into this. That’s all. I just wanted to add on to these remarks because I think it’s important that, yes, we have a problem. We are facing a crisis, but the community is coming together to find solutions for it before solutions are found for us.


Ajith Francis: Thanks, Joyce. So I’m very conscious of the time because we’re 15 minutes left, and I want to switch gears to talk about the IGF, which is the venue that we’re at. And the IGF, but also the multi-stakeholder model, has worked really, really well for the internet governance ecosystem, but also particularly for the technical layer. There’s an interesting question that’s emerging, which is, you know, is the IGF fit for purpose with regards to a lot of the new emerging digital governance issues that are sort of emerging at the moment? And this is in the context of a new sort of framing that’s emerging that’s being increasingly used between the governance of the internet, which is the actual standards, protocols, the namings and number system, but also governance on the internet, which is at the governance of the application layer. So I’d be very curious to get, Frodo, particularly your perspective on how you see the role of the IGF in this sort of emerging new context. Thank you.


Frodo Sorensen: The IGF has been a successful prototype for the implementation of the multistakeholder approach in the UN system. And one could build on this to seek to strengthen the multistakeholderism in other parts of the UN system, the CSTD, for example, by broadening representation of different stakeholder groups. The multistakeholder model helps to build trust between those who otherwise would not have a common space for discussion. It strengthens the legitimacy and the relevance of the governance process by allowing those affected by the decisions to be involved in shaping them. The complexity of the internet is constantly increasing, and this leads to a continuous need for the insight from the technical community, which can provide a supplement to the government’s societal perspective. Some have criticized the IGF for not having decision-making powers, but this is a part of the careful design. IGF is a global forum for building capacity, identifying, and discussing internet-related issues. However, there is a need to make IGF outcomes more accessible and useful for policymaking. In addition to the core issues of internet governance, which are closely related to the internet infrastructure, another aspect of the WSIS processes and the IDF has been in focus, often referred to as the Digital Cooperation, originally referred to in the Tunis Agenda as Enhanced Cooperation. WSIS and IDF have constantly also covered Digital Cooperation. The agendas of IDF meetings and WSIS forums have included various relevant topics in the field of Internet ecosystem and digital services. This implies that the global digital compact in practice largely is a duplication of this activity in the WSIS framework. Digital Cooperation within WSIS-IDF has covered various areas related to the use of the Internet as opposed to the underlying infrastructure. Examples of such areas are cyber security, Internet openness, data governance, platform economy, regulation, as well as artificial intelligence. There is a need to better connect WSIS and the GDC. Otherwise, we risk duplicated and fragmented efforts, which is unnecessary, since both initiatives have similar goals. It should be possible to better coordinate the interplay between the two.


Ajith Francis: Thanks, Frodo. I’m really understanding this tension between governance, often governance, on, I think, there’s this question of how does the technical community perceive this tension, and also civil society. So I’d be curious to get Joyce and Isra and Ellie McDonald, your perspective on this topic. Do you want to? Isra, do you want to? Yeah, I can. And I’m still


Israel Rosas: stealing ideas from others, because something that I’ve been hearing from our community is several references to, yes, the IEF is for purpose, and it’s been for many years. The thing is that if we take a time machine, like, I don’t know, five years, seven years before this one, probably the conversation wouldn’t be around artificial intelligence, but blockchain. And I don’t know what would have happened if we would have renamed this as, I don’t know, like the Blocking Governance Forum or something like that. The thing is, we have a working definition of internet governance that works, that is still valid, that mentions emerging technologies, whatever emerging technologies are in 2015, 2018, 2015, 2030, I don’t know. So, as that working definition is valid, the IEF remains valid to tackle different issues. And one of the results of that or one of the signals of that is that we’ve been having conversations at the IEF about artificial intelligence and different technologies way before the Global Digital Compact existed, way before the high-level panel on digital cooperation was floated as an idea in the United Nations. So, the community, or at least at my perspective, is reacting to these topics without necessarily receiving the signal from the governments, from the UN, for instance. So, my sense is that that’s going to keep happening within the current configuration. And that’s why I was referring at the beginning about who is the IETF or who is APNIC, for instance. If there’s a decision that needs to be made, it is not that you need to reach out to a person in APNIC to make a decision, because it’s not unilateral, it’s based on the processes that are designed in the community. The same with the IETF, the same with ICANN, the same with the CCTLDs. It is not that a single person can decide on something. So, the community is also working on how to keep shaping and evolving the agenda of the meetings, the agenda of the international activity that we shouldn’t forget about, or even the NRI. So, the short answer, yes, I think it’s a purpose. And just building on what Frodo mentioned on the difficulty of tracking the results, if this is a space of influence and translation, ICANN and the International Society published a paper on the footprints of the IETF. trying to track those impacts at the local level just to have like more elements for the discussion.


Ajith Francis: Joyce, do you want to add to that?


Joyce Chen: Sure, so I struggled to come up with ideas, more ideas, from like so many days of, you know, WSIS panels and discussions and what else do I have to add to this conversation? Is there something new that I could say? And I think across the days we’ve already heard for about, you know, calls for sustainable funding of the IGF, more resources, you know, rebranding the IGF to the DGF, you know, etc. And the reality is we are asking the IGF to do many things without really thinking about whether the current IGF structure is able to enable all this to happen. I would like to hear more proposals around how we can streamline IGF processes and intersessional work and how we can help to prioritize the work of the IGF and to give it more focus. One of the strengths of the IGF is that it is incredibly flexible. Every year we are able to frame conversations around new and, you know, hot topics. This year, for example, it’s all about AI and the program itself reflects this that we are all discussing issues to do with AI, for example. But the issue that I want to point out is that we are very good at picking up things, but we don’t know how to put them down, you know, to make space for other pressing issues. We’re trying to juggle everything and we’re trying to please everyone. And to me, this is a disservice to everybody because it’s impossible to dive deeply into particular topics. And I’m saying this because the internet technical community are one of the top financial contributors to the IGF. I was sitting in the donors meeting, the IGF donors meeting yesterday, and I think it was mentioned that the internet technical organizations actually comprise 30% of the overall IGF trust fund. And that’s big. You know, we believe in the mission of the IGF. We’re doing everything we can to support the multi-stakeholder community. And its perseverance is critical for the ongoing legitimacy of the multi-stakeholder internet governance ecosystem. However, over the years, we are seeing fewer technical topics being discussed at the IGF. The space for the technical community, I feel, is growing smaller. And I understand that the technical topics are dry. It’s hard to make something dry seem interesting in a policy space. And we struggle with this. Even though these are topics that are core to the functioning of the internet. So, in summary, I think that the IGF could benefit from greater focus, you know, efforts to prioritize its work. I hope to see more concrete proposals on how it could streamline better against certain agreed-upon priorities. And whether or not the IGF moves toward being action-oriented or it remains non-prescriptive, it will still require some housekeeping to remove some of the bloat. I think it bears reminding that all things digital are made possible by the internet, whether we are the DGF or the IGF. We’re really all just talking about the internet and the use of the internet.


Ajith Francis: Thanks, Joyce. Ellie, and then I’m going to go to Chris. But if you could keep it very, very brief, because we’re running short of time and I know we have an online question.


Ellie McDonald: Okay, I’ll try my best. Yeah, I would definitely underline a lot of what the other panelists said. I think we did some research as Global Partners Digital, sorry to shamelessly plug, but we looked a bit at the breadth of stakeholder positions on the IGF. We wanted to do this because we thought it could be quite useful to see where the convergence lies and also where there might be similarly intended attempts to operationalize different changes. and Joyce. I’m going to start by saying that I think the IGF is a really good example of how it changes to its form and structure. I say this because I think there was really a remarkable level of convergence about certain elements. I think as others had said, the bottom-up nature of the IGF is really so critical because it allows various communities to come with different lexicons, different ideas and to bring ideas to the table. The most important thing is to be successful and I believe I should again because I should be brief to pick up on something Froda said, I think we should also be mindful of the danger of being too restrictive about this mode, this multistakeholder model, that it has huge benefits that we wish


Ajith Francis: to multi it in several sectors, both visually and 강 should have the last play before-


Chris Chapman: I would endorse Froda’s comments, and even though it’s , as I said at the beginning, I’m new to egf and therefore hesitant to be prescriptive, I share choice’s cry from the heart about what needs to be done, and I think it’s a good thing that the ICANN has continued to support it, and I think it’s a good thing that the ICANN has continued to support it, and I think it’s one hundred and one percent supportive of the renewal of the IGF adequate resourcing, proper mandates, it’s the only place globally where stakeholders can come together as peers, and it’s from our- and the ICANN will continue to support it as it has


Alicia Sharif: been for a long time, and I think it’s a good thing that the ICANN has continued to support it as it has been for a long time. he’s asking his question in light of Wizards Plus 20 and ongoing discussions on strengthening the resilience of the global internet and Nicholas wonders if we are witnessing the next global wave of internet hardening through security extensions like RPKI and DNSSEC so these are both things that use cryptography to try and prevent kind of root hijacking on one hand and also adding cryptographic signatures to DNS records on the other so quite technical so he’s saying that we’ve seen precedents such as a US federal enforcement of RPKI for routing security and increasing DNSSEC mandates but we cannot look at this in isolation the post-quantum era looms ahead requiring us to rethink cryptographic agility and resilience at the root and edge and so Nicholas’s question is how do we ensure that security extensions reinforce trust and interoperability in a truly open internet and what guard rails should we be building now ahead of the Wizards Plus 20 outcomes?


Ajith Francis: Thanks. Israel?


Israel Rosas: I have a quick reaction if I may. I think that we shouldn’t mix those topics because the Wizards Plus 20 is a high-level process where for instance we can agree that we want a more secure more trusted internet. How? Well that’s for the community to work in specific spaces because for instance the interesting thing is that here we could identify I don’t know we could say that RPKI is important and each of us are going to make different things to support the deployment of RPKI. Different things and all of them are going to be valid and complementary so in summary I would say that it’s important to keep having those conversations the important part is that I’ve seen that RPKI is a widely community-driven process and if some governments are recommending its adoption it’s important also to realize that governments are also operating networks that are part of the internet so again multi-stakeholder implies governments. It is not that governments aren’t other stakeholders so just it’s a good reminder of that.


Ajith Francis: Thank you very much, Israel. And I can see the red light blinking, so it’s we’re at time. I want to say thank you very, very much to all of my fellow panelists. This was really I enjoyed having this conversation, and I hope the audience took something away from it as well. So thank you very much and have a good rest of the day. Thank you.


I

Israel Rosas

Speech speed

167 words per minute

Speech length

1435 words

Speech time

513 seconds

Internet standards and protocols are developed through open, multi-stakeholder processes where anyone can participate, not just outsource solutions to technical bodies

Explanation

Israel argues that organizations like the IETF should not be seen as separate entities to outsource work to, but rather as communities that anyone can join. He emphasizes that any organization with technical people can participate in IETF discussions through meetings, mailing lists, and consensus-building processes.


Evidence

Internet Society’s policy makers program that invites government officials to attend IETF meetings to see how standards are developed, including the ‘humming in the rooms’ consensus mechanism that surprises policy makers


Major discussion point

Technical Architecture and Governance of the Internet


Topics

Infrastructure | Legal and regulatory


Agreed with

– Joyce Chen
– Chris Chapman
– Frodo Sorensen
– Audience

Agreed on

Technical community expertise is crucial for Internet stability and governance


Multi-stakeholder model provides spaces of influence and translation where different stakeholders can reach consensus through open disagreement and discussion

Explanation

Israel argues that disagreement is positive and necessary because different stakeholders have different views and interests, but through discussion they can reach consensus. He emphasizes that avoiding disagreement is counterproductive and that longer, more complex discussions lead to more resilient results.


Evidence

References to Joyce Chen’s description of IETF as ‘a space of influence where stakeholders influence each other’ and Ellie McDonald’s characterization as ‘a translation space’


Major discussion point

Multi-Stakeholder Model and Internet Governance


Topics

Infrastructure | Legal and regulatory


IGF remains fit for purpose with valid working definition covering emerging technologies, with community addressing AI and other issues before formal UN processes

Explanation

Israel argues that the IGF’s working definition of internet governance remains valid and covers emerging technologies, whatever they may be at any given time. He points out that the community has been discussing AI and other technologies at the IGF long before the Global Digital Compact or UN high-level panels existed.


Evidence

Comparison of how five years ago the conversation would have been about blockchain instead of AI, and how the IGF has been discussing AI before the Global Digital Compact existed


Major discussion point

Role and Future of the Internet Governance Forum


Topics

Infrastructure | Legal and regulatory


Agreed with

– Joyce Chen
– Chris Chapman
– Ellie McDonald
– Frodo Sorensen

Agreed on

IGF remains valuable and fit for purpose despite needing improvements


Disagreed with

– Joyce Chen

Disagreed on

IGF focus and prioritization approach


Security extensions like RPKI should be kept separate from high-level WSIS+20 discussions, with community-driven deployment processes remaining independent of government mandates

Explanation

Israel argues that high-level processes like WSIS+20 should focus on agreeing on principles like wanting a more secure internet, while leaving the technical implementation details to the community. He emphasizes that RPKI is a community-driven process and that government recommendations for adoption are valid since governments also operate networks.


Evidence

Distinction between high-level agreement on security goals versus technical implementation details, noting that governments are also network operators and part of the multi-stakeholder model


Major discussion point

Technical Security and Internet Hardening


Topics

Cybersecurity | Infrastructure


Disagreed with

– Alicia Sharif (relaying online question)

Disagreed on

Approach to technical security implementation in policy processes


P

Paulos Nyirenda

Speech speed

95 words per minute

Speech length

628 words

Speech time

395 seconds

Domain Name System converts human-readable addresses to IP addresses, managed through hierarchical structure with ICANN coordination and multi-stakeholder oversight

Explanation

Paulos explains that while people use easy-to-remember addresses for internet services, computers use numbers, requiring the DNS as an intermediary. The DNS has a hierarchical structure with ICANN managing the root and top-level domains, supported by many operators worldwide, demonstrating the multi-stakeholder nature of internet governance.


Evidence

Examples of top-level domains like .com and country codes like .mw for Malawi or .no for Norway, ICANN’s supporting organizations (GNSO, CCNSO), Government Advisory Committee (GAC), and end-user representation through ALAC


Major discussion point

Technical Architecture and Governance of the Internet


Topics

Infrastructure | Legal and regulatory


Agreed with

– Israel Rosas
– Joyce Chen
– Chris Chapman
– Ellie McDonald
– Frodo Sorensen

Agreed on

Multi-stakeholder model is essential for Internet governance


Multi-stakeholder governance faces challenges in Africa, particularly with AFRINIC registry problems highlighting need for renewed focus on governance processes

Explanation

Paulos highlights that AFRINIC, the African internet registry, is experiencing significant governance problems that resulted in board elections being annulled. He emphasizes that this situation demonstrates the particular importance of multi-stakeholder, bottom-up governance for the African region.


Evidence

AFRINIC board elections being annulled just hours before the session, causing significant trouble for the region’s internet registry management


Major discussion point

Multi-Stakeholder Model and Internet Governance


Topics

Infrastructure | Legal and regulatory


J

Joyce Chen

Speech speed

151 words per minute

Speech length

1392 words

Speech time

551 seconds

Regional Internet Registries manage IP address allocation through community-based, bottom-up, consensus-driven policy development processes

Explanation

Joyce explains that APNIC, as one of five regional internet registries, manages IP addresses and autonomous system numbers through an open, transparent policy development process. This process is community-based for the region, by the region, ensuring fair and efficient distribution of internet number resources.


Evidence

APNIC’s policy development process being open to all (not just members), bottom-up, consensus-based, transparent, and documented, with the goal of protecting global internet integrity through consistent technical standards


Major discussion point

Technical Architecture and Governance of the Internet


Topics

Infrastructure | Legal and regulatory


Agreed with

– Israel Rosas
– Chris Chapman
– Frodo Sorensen
– Audience

Agreed on

Technical community expertise is crucial for Internet stability and governance


2025 represents critical inflection point with geopolitical tensions driving digital sovereignty proposals that risk fragmenting internet into isolated silos

Explanation

Joyce argues that 2025 is a crucial year for internet governance due to rising geopolitical tensions that are driving nations toward digital sovereignty measures. These include proposals for national firewalls, data localization laws, and alternative DNS systems that could break the internet into isolated parts.


Evidence

Rising cybersecurity threats, misinformation, platform abuse, hostile actors weaponizing the internet, and lack of global agreement on AI governance frameworks


Major discussion point

Current Threats and Challenges to Internet Governance


Topics

Cybersecurity | Infrastructure | Legal and regulatory


Crisis situations like AFRINIC governance problems demonstrate urgent need for community-driven solutions and review of fundamental governance documents

Explanation

Joyce acknowledges the AFRINIC crisis as requiring urgent attention from the broader internet community, not just technical organizations. She emphasizes that this crisis has prompted collective renewal of processes and policies that had been taken for granted since the internet’s early days.


Evidence

The review of the RIR governance document being conducted by ICANN’s Address Supporting Organization (ASO), which examines processes for establishing and de-recognizing Regional Internet Registries


Major discussion point

Multi-Stakeholder Model and Internet Governance


Topics

Infrastructure | Legal and regulatory


IGF needs greater focus and streamlining, as technical community contributes 30% of funding but sees decreasing space for technical topics in discussions

Explanation

Joyce argues that while the IGF is flexible and able to address new topics like AI, it struggles with prioritization and focus. She notes that the technical community provides significant financial support but feels their topics are getting less attention, despite being core to internet functioning.


Evidence

Technical organizations comprise 30% of the IGF trust fund according to the donors meeting, but technical topics are becoming less prominent in IGF discussions despite being fundamental to internet operations


Major discussion point

Role and Future of the Internet Governance Forum


Topics

Infrastructure | Legal and regulatory


Agreed with

– Israel Rosas
– Chris Chapman
– Ellie McDonald
– Frodo Sorensen

Agreed on

IGF remains valuable and fit for purpose despite needing improvements


Disagreed with

– Israel Rosas

Disagreed on

IGF focus and prioritization approach


C

Chris Chapman

Speech speed

129 words per minute

Speech length

967 words

Speech time

447 seconds

ICANN coordinates global internet unique identifiers ensuring stable, secure operation through multi-stakeholder model involving technical, policy and community aspects

Explanation

Chris explains that ICANN’s mission is to coordinate the global internet system of unique identifiers to ensure a stable, secure, and unified online experience. This involves technical coordination of DNS, root servers, and IP numbers, combined with policy development through the multi-stakeholder model.


Evidence

ICANN’s role in coordinating allocation and assignment of names in the root zone, facilitating DNS root name server system coordination, and coordinating top-level IP numbers and autonomous system numbers


Major discussion point

Technical Architecture and Governance of the Internet


Topics

Infrastructure | Legal and regulatory


Agreed with

– Israel Rosas
– Joyce Chen
– Frodo Sorensen
– Audience

Agreed on

Technical community expertise is crucial for Internet stability and governance


ICANN’s multi-stakeholder model, while imperfect, represents an impressive approach that could be more efficient but serves as foundation for internet coordination

Explanation

Chris acknowledges that ICANN’s multi-stakeholder model is not perfect and could be more efficient, but emphasizes its impressive nature as a broadly-based, bottom-up system. He describes it as more deep, nuanced, respectful, and intelligent than he had initially anticipated.


Evidence

His personal experience joining ICANN with curiosity about the multi-stakeholder model and becoming its greatest advocate after learning from the community over several years


Major discussion point

Multi-Stakeholder Model and Internet Governance


Topics

Infrastructure | Legal and regulatory


Agreed with

– Israel Rosas
– Paulos Nyirenda
– Joyce Chen
– Ellie McDonald
– Frodo Sorensen

Agreed on

Multi-stakeholder model is essential for Internet governance


Rising regulatory pressures from 420+ digital media regulators worldwide making arbitrary decisions without understanding technical implications

Explanation

Chris warns about the increasing number of digital media regulators globally making decisions without understanding their technical implications. He describes this as ‘whack-a-mole’ instances where legislatures and regulators create unintended consequences for network operations.


Evidence

His count of 420 digital media regulators worldwide as of three years ago, and examples from his experience across various telecommunications and media organizations


Major discussion point

Current Threats and Challenges to Internet Governance


Topics

Legal and regulatory | Infrastructure


IGF provides unique global space where stakeholders meet as peers, deserving continued support and adequate resourcing for its renewal

Explanation

Chris strongly endorses the IGF as the only place globally where stakeholders can come together as peers. He expresses ICANN’s continued support for IGF renewal with adequate resourcing and proper mandates.


Evidence

His positive experience at his first IGF and ICANN’s long-standing financial and institutional support for the forum


Major discussion point

Role and Future of the Internet Governance Forum


Topics

Infrastructure | Legal and regulatory


Agreed with

– Israel Rosas
– Joyce Chen
– Ellie McDonald
– Frodo Sorensen

Agreed on

IGF remains valuable and fit for purpose despite needing improvements


E

Ellie McDonald

Speech speed

155 words per minute

Speech length

938 words

Speech time

362 seconds

Multi-stakeholder spaces allow engineers, companies, governments and civil society to collaborate, with examples like IETF public interest technology group enabling human rights by design

Explanation

Ellie argues that multi-stakeholder spaces provide unique opportunities for diverse actors to shape internet governance together. She highlights how these spaces allow different types of knowledge and expertise to come together, enabling human rights considerations to be built into technical development from the earliest stages.


Evidence

The IETF public interest technology group providing a safe space for technical, advocacy, normative, and human impact knowledge to combine, and the evolution of HTTPS protocol as an example of addressing both technical and surveillance issues


Major discussion point

Multi-Stakeholder Model and Internet Governance


Topics

Human rights | Infrastructure


Agreed with

– Israel Rosas
– Paulos Nyirenda
– Joyce Chen
– Chris Chapman
– Frodo Sorensen

Agreed on

Multi-stakeholder model is essential for Internet governance


WSIS+20 process poses risks of more state-centric approaches that could exclude multi-stakeholder accountability and threaten existing governance models

Explanation

Ellie warns that the WSIS+20 process and similar discussions about AI governance mechanisms show concerning trends toward more state-centric processes. She points to risks including exclusion of military applications from assessments and lack of genuine multi-stakeholder accountability.


Evidence

Examples from AI governance discussions showing state-centric appointment of experts, exclusion of military applications from scope, and eleventh-hour negotiations that undermine multi-stakeholder principles


Major discussion point

Current Threats and Challenges to Internet Governance


Topics

Legal and regulatory | Human rights


IGF’s bottom-up nature allows different communities to bring various perspectives, though care needed not to be too restrictive about multi-stakeholder model application

Explanation

Ellie emphasizes the importance of the IGF’s bottom-up nature in allowing various communities with different lexicons and ideas to contribute. She warns against being too restrictive about how the multi-stakeholder model is applied, noting its benefits for multiple sectors.


Evidence

Global Partners Digital research on stakeholder positions showing remarkable convergence on certain elements and the importance of maintaining flexibility in multi-stakeholder approaches


Major discussion point

Role and Future of the Internet Governance Forum


Topics

Infrastructure | Legal and regulatory


Agreed with

– Israel Rosas
– Joyce Chen
– Chris Chapman
– Frodo Sorensen

Agreed on

IGF remains valuable and fit for purpose despite needing improvements


F

Frodo Sorensen

Speech speed

132 words per minute

Speech length

732 words

Speech time

332 seconds

Technical community expertise is essential for internet stability, interoperability and preventing fragmentation that could undermine democratic processes

Explanation

Frodo argues that governance of internet infrastructure requires technical community insight to ensure stable, robust, and interoperable operations. He warns that excluding stakeholders, particularly the technical community, could lead to internet fragmentation and destabilization that ultimately threatens human rights and democratic processes.


Evidence

Norway’s strong support for ICANN and IETF as core institutions, and the connection between open, secure internet infrastructure and applications built on top of it, ultimately supporting freedom of speech and association


Major discussion point

Technical Architecture and Governance of the Internet


Topics

Infrastructure | Human rights


Agreed with

– Israel Rosas
– Joyce Chen
– Chris Chapman
– Audience

Agreed on

Technical community expertise is crucial for Internet stability and governance


Norway strongly supports multi-stakeholder approach and ICANN/IETF as core institutions, emphasizing that technical community involvement prevents destabilization

Explanation

Frodo states Norway’s strong support for the multi-stakeholder approach in internet governance and digital cooperation, specifically backing ICANN and IETF as core institutions. He emphasizes that technical community involvement is crucial to prevent internet destabilization and maintain its global value.


Evidence

Norway’s official policy position supporting multi-stakeholder internet governance and the importance of maintaining an open, free, resilient, and interoperable internet


Major discussion point

Multi-Stakeholder Model and Internet Governance


Topics

Infrastructure | Legal and regulatory


Agreed with

– Israel Rosas
– Paulos Nyirenda
– Joyce Chen
– Chris Chapman
– Ellie McDonald

Agreed on

Multi-stakeholder model is essential for Internet governance


Internet fragmentation risks emerge when stakeholders are excluded from governance discussions, particularly if technical community involvement is insufficient

Explanation

Frodo warns that the internet may become fragmented if some stakeholders are excluded from governance discussions. He specifically emphasizes that insufficient involvement of the technical community could destabilize internet resource administration and weaken the internet’s overall value by restricting its global network usability.


Evidence

The fundamental role of the technical layer in ensuring interoperability of core internet functions and how restrictions on internet communication can threaten human rights and democratic processes


Major discussion point

Current Threats and Challenges to Internet Governance


Topics

Infrastructure | Human rights


IGF serves as successful prototype for multi-stakeholder approach in UN system, building trust and legitimacy by involving affected parties in decision-making

Explanation

Frodo argues that the IGF has been a successful prototype for implementing the multi-stakeholder approach within the UN system. He suggests this model could be used to strengthen multi-stakeholderism in other UN parts like the CSTD by broadening stakeholder representation and building trust between groups that otherwise wouldn’t have common discussion spaces.


Evidence

The IGF’s role as a global forum for capacity building and discussing internet-related issues, and its careful design as a non-decision-making body that focuses on making outcomes more accessible for policymaking


Major discussion point

Role and Future of the Internet Governance Forum


Topics

Infrastructure | Legal and regulatory


Agreed with

– Israel Rosas
– Joyce Chen
– Chris Chapman
– Ellie McDonald

Agreed on

IGF remains valuable and fit for purpose despite needing improvements


A

Alicia Sharif

Speech speed

169 words per minute

Speech length

188 words

Speech time

66 seconds

Online moderation handles technical questions about internet security extensions and post-quantum cryptography challenges

Explanation

Alicia relays an online question from Nicholas about internet security extensions like RPKI and DNSSEC in the context of WSIS+20 discussions. The question addresses technical security measures and post-quantum cryptography challenges, asking about maintaining trust and interoperability in an open internet.


Evidence

Specific technical examples including RPKI for routing security, DNSSEC for DNS record authentication, US federal enforcement of RPKI, and the looming post-quantum era requiring cryptographic agility


Major discussion point

Technical Security and Internet Hardening


Topics

Cybersecurity | Infrastructure


Disagreed with

– Israel Rosas
– Alicia Sharif (relaying online question)

Disagreed on

Approach to technical security implementation in policy processes


A

Audience

Speech speed

204 words per minute

Speech length

158 words

Speech time

46 seconds

Community input essential for technical standards development and deployment, as voluntary adoption requires broad consideration of all requirements

Explanation

The audience member from the Internet Architecture Board emphasizes the importance of broad input from various stakeholders in technical standards development. They explain that unlike government enforcement, technical standards rely on voluntary adoption, which only succeeds when all requirements are considered and people actually choose to use the standards.


Evidence

The IETF and IAB’s reliance on voluntary deployment rather than government enforcement, and the need to avoid surprises by incorporating diverse perspectives early in the development process


Major discussion point

Technical Security and Internet Hardening


Topics

Infrastructure | Legal and regulatory


Agreed with

– Israel Rosas
– Joyce Chen
– Chris Chapman
– Frodo Sorensen

Agreed on

Technical community expertise is crucial for Internet stability and governance


A

Ajith Francis

Speech speed

148 words per minute

Speech length

1451 words

Speech time

586 seconds

Technical architecture of the Internet is not a monolith but a set of federated entities, operators, and actors working together to keep the Internet accessible

Explanation

Ajith emphasizes that the Internet’s technical underpinnings are complex and distributed rather than centralized. He argues that understanding this technical layer and its different components is critical for policy and governance discussions, even though end users don’t need to know the technical details for daily Internet use.


Evidence

The fact that end users can navigate the Internet without understanding its technical workings, but policymakers need this understanding for governance decisions


Major discussion point

Technical Architecture and Governance of the Internet


Topics

Infrastructure | Legal and regulatory


Understanding of technical layer components is extremely critical for policy and governance questions, despite users taking technical underpinnings for granted

Explanation

Ajith argues that while it’s reasonable for end users to take the Internet’s technical infrastructure for granted, policymakers and governance actors must understand the technical layer’s complexity. This understanding is essential for making informed decisions about Internet governance and policy.


Evidence

The observation that end users don’t need to know how the Internet works to use it, but governance requires understanding the actual technical components


Major discussion point

Technical Architecture and Governance of the Internet


Topics

Infrastructure | Legal and regulatory


There is tension between governance of the Internet (technical standards, protocols, naming systems) versus governance on the Internet (application layer governance)

Explanation

Ajith identifies an emerging distinction in digital governance between governing the Internet’s technical infrastructure versus governing activities and applications that operate on top of the Internet. He questions whether the IGF is adequately equipped to handle both dimensions of this governance challenge.


Evidence

The framing of governance ‘of’ versus ‘on’ the Internet as an emerging way to distinguish between infrastructure governance and application-layer governance


Major discussion point

Role and Future of the Internet Governance Forum


Topics

Infrastructure | Legal and regulatory


Multi-stakeholder model and collective governance coordination is essential given the complexity of Internet technical architecture involving multiple entities

Explanation

Ajith argues that because the Internet’s technical layer involves many different federated entities and operators, effective governance requires coordination among multiple stakeholders. This coordination must happen both at operational and policy levels to maintain Internet stability and accessibility.


Evidence

The distributed nature of Internet technical architecture with multiple entities, operators, and actors all contributing to Internet operations


Major discussion point

Multi-Stakeholder Model and Internet Governance


Topics

Infrastructure | Legal and regulatory


Agreements

Agreement points

Multi-stakeholder model is essential for Internet governance

Speakers

– Israel Rosas
– Paulos Nyirenda
– Joyce Chen
– Chris Chapman
– Ellie McDonald
– Frodo Sorensen

Arguments

Internet standards and protocols are developed through open, multi-stakeholder processes where anyone can participate, not just outsource solutions to technical bodies


Domain Name System converts human-readable addresses to IP addresses, managed through hierarchical structure with ICANN coordination and multi-stakeholder oversight


Regional Internet Registries manage IP address allocation through community-based, bottom-up, consensus-driven policy development processes


ICANN’s multi-stakeholder model, while imperfect, represents an impressive approach that could be more efficient but serves as foundation for internet coordination


Multi-stakeholder spaces allow engineers, companies, governments and civil society to collaborate, with examples like IETF public interest technology group enabling human rights by design


Norway strongly supports multi-stakeholder approach and ICANN/IETF as core institutions, emphasizing that technical community involvement prevents destabilization


Summary

All speakers strongly endorse the multi-stakeholder model as fundamental to Internet governance, emphasizing its open, bottom-up, consensus-driven nature that allows diverse stakeholders to participate as equals in decision-making processes.


Topics

Infrastructure | Legal and regulatory


Technical community expertise is crucial for Internet stability and governance

Speakers

– Israel Rosas
– Joyce Chen
– Chris Chapman
– Frodo Sorensen
– Audience

Arguments

Internet standards and protocols are developed through open, multi-stakeholder processes where anyone can participate, not just outsource solutions to technical bodies


Regional Internet Registries manage IP address allocation through community-based, bottom-up, consensus-driven policy development processes


ICANN coordinates global internet unique identifiers ensuring stable, secure operation through multi-stakeholder model involving technical, policy and community aspects


Technical community expertise is essential for internet stability, interoperability and preventing fragmentation that could undermine democratic processes


Community input essential for technical standards development and deployment, as voluntary adoption requires broad consideration of all requirements


Summary

Speakers agree that technical community involvement is not optional but essential for maintaining Internet stability, with their expertise being fundamental to preventing fragmentation and ensuring interoperability.


Topics

Infrastructure | Legal and regulatory


IGF remains valuable and fit for purpose despite needing improvements

Speakers

– Israel Rosas
– Joyce Chen
– Chris Chapman
– Ellie McDonald
– Frodo Sorensen

Arguments

IGF remains fit for purpose with valid working definition covering emerging technologies, with community addressing AI and other issues before formal UN processes


IGF needs greater focus and streamlining, as technical community contributes 30% of funding but sees decreasing space for technical topics in discussions


IGF provides unique global space where stakeholders meet as peers, deserving continued support and adequate resourcing for its renewal


IGF’s bottom-up nature allows different communities to bring various perspectives, though care needed not to be too restrictive about multi-stakeholder model application


IGF serves as successful prototype for multi-stakeholder approach in UN system, building trust and legitimacy by involving affected parties in decision-making


Summary

All speakers support the IGF’s continued existence and value, while acknowledging it needs improvements in focus, streamlining, and resource allocation to better serve its mission.


Topics

Infrastructure | Legal and regulatory


Similar viewpoints

Both speakers express concern about current geopolitical pressures and policy processes that threaten the open, multi-stakeholder nature of Internet governance through more centralized, state-centric approaches.

Speakers

– Joyce Chen
– Ellie McDonald

Arguments

2025 represents critical inflection point with geopolitical tensions driving digital sovereignty proposals that risk fragmenting internet into isolated silos


WSIS+20 process poses risks of more state-centric approaches that could exclude multi-stakeholder accountability and threaten existing governance models


Topics

Legal and regulatory | Infrastructure


Both speakers warn about the dangers of regulatory decisions made without proper technical understanding, which can lead to unintended consequences and potential Internet fragmentation.

Speakers

– Chris Chapman
– Frodo Sorensen

Arguments

Rising regulatory pressures from 420+ digital media regulators worldwide making arbitrary decisions without understanding technical implications


Internet fragmentation risks emerge when stakeholders are excluded from governance discussions, particularly if technical community involvement is insufficient


Topics

Legal and regulatory | Infrastructure


Both speakers acknowledge the AFRINIC governance crisis as a critical example of why robust multi-stakeholder governance processes are essential and need continuous attention and improvement.

Speakers

– Paulos Nyirenda
– Joyce Chen

Arguments

Multi-stakeholder governance faces challenges in Africa, particularly with AFRINIC registry problems highlighting need for renewed focus on governance processes


Crisis situations like AFRINIC governance problems demonstrate urgent need for community-driven solutions and review of fundamental governance documents


Topics

Infrastructure | Legal and regulatory


Unexpected consensus

Need for disagreement and open discussion in multi-stakeholder processes

Speakers

– Israel Rosas

Arguments

Multi-stakeholder model provides spaces of influence and translation where different stakeholders can reach consensus through open disagreement and discussion


Explanation

Unexpectedly, there was explicit advocacy for disagreement as a positive force in governance processes, arguing that avoiding disagreement is counterproductive and that longer, more complex discussions through disagreement lead to more resilient outcomes.


Topics

Infrastructure | Legal and regulatory


Technical community as significant financial contributor to IGF

Speakers

– Joyce Chen

Arguments

IGF needs greater focus and streamlining, as technical community contributes 30% of funding but sees decreasing space for technical topics in discussions


Explanation

It was unexpected to learn that technical organizations comprise 30% of the IGF trust fund, highlighting their significant financial investment in multi-stakeholder governance despite feeling their topics receive less attention.


Topics

Infrastructure | Legal and regulatory


Overall assessment

Summary

There is remarkably strong consensus among all speakers on the fundamental value of the multi-stakeholder model, the essential role of technical community expertise, and the continued importance of the IGF. Areas of agreement include the need for open, bottom-up governance processes, the risks posed by fragmentation and state-centric approaches, and the requirement for technical expertise in Internet governance decisions.


Consensus level

Very high level of consensus with no fundamental disagreements identified. The implications are positive for Internet governance, suggesting broad stakeholder alignment on core principles, though speakers acknowledge implementation challenges and the need for continuous improvement in processes and institutions.


Differences

Different viewpoints

IGF focus and prioritization approach

Speakers

– Joyce Chen
– Israel Rosas

Arguments

IGF needs greater focus and streamlining, as technical community contributes 30% of funding but sees decreasing space for technical topics in discussions


IGF remains fit for purpose with valid working definition covering emerging technologies, with community addressing AI and other issues before formal UN processes


Summary

Joyce argues the IGF has become too broad and unfocused, trying to ‘juggle everything’ and ‘please everyone’ while technical topics get less attention despite significant technical community funding. Israel counters that the IGF’s flexibility and broad working definition remain valid and effective, with the community naturally adapting to address emerging issues without needing structural changes.


Topics

Infrastructure | Legal and regulatory


Approach to technical security implementation in policy processes

Speakers

– Israel Rosas
– Alicia Sharif (relaying online question)

Arguments

Security extensions like RPKI should be kept separate from high-level WSIS+20 discussions, with community-driven deployment processes remaining independent of government mandates


Online moderation handles technical questions about internet security extensions and post-quantum cryptography challenges


Summary

The online questioner (via Alicia) seeks integration of technical security measures like RPKI and DNSSEC into WSIS+20 outcomes with specific guardrails. Israel argues for separation, maintaining that high-level processes should focus on principles while leaving technical implementation to community-driven processes, emphasizing that government recommendations are acceptable since governments are also network operators.


Topics

Cybersecurity | Infrastructure


Unexpected differences

Technical community representation and space within IGF

Speakers

– Joyce Chen
– Israel Rosas

Arguments

IGF needs greater focus and streamlining, as technical community contributes 30% of funding but sees decreasing space for technical topics in discussions


IGF remains fit for purpose with valid working definition covering emerging technologies, with community addressing AI and other issues before formal UN processes


Explanation

This disagreement is unexpected because both speakers represent technical organizations (APNIC and Internet Society) that are closely aligned in the internet governance ecosystem. Joyce’s criticism of the IGF’s direction and Israel’s defense of its current approach represent a rare public divergence within the technical community about the forum they both financially support and participate in.


Topics

Infrastructure | Legal and regulatory


Overall assessment

Summary

The discussion showed remarkably high consensus among speakers on fundamental principles of internet governance, with most disagreements being tactical rather than strategic. The main areas of disagreement centered on IGF reform approaches and the relationship between technical implementation and policy processes.


Disagreement level

Low to moderate disagreement level with significant implications for IGF evolution. While speakers largely agreed on multi-stakeholder principles and the importance of technical community involvement, the split within the technical community itself about IGF direction suggests potential challenges for maintaining unified support for the forum’s current model. The disagreements reflect broader tensions between preserving flexibility versus achieving focus, and between community-driven versus policy-integrated approaches to technical governance.


Partial agreements

Partial agreements

Similar viewpoints

Both speakers express concern about current geopolitical pressures and policy processes that threaten the open, multi-stakeholder nature of Internet governance through more centralized, state-centric approaches.

Speakers

– Joyce Chen
– Ellie McDonald

Arguments

2025 represents critical inflection point with geopolitical tensions driving digital sovereignty proposals that risk fragmenting internet into isolated silos


WSIS+20 process poses risks of more state-centric approaches that could exclude multi-stakeholder accountability and threaten existing governance models


Topics

Legal and regulatory | Infrastructure


Both speakers warn about the dangers of regulatory decisions made without proper technical understanding, which can lead to unintended consequences and potential Internet fragmentation.

Speakers

– Chris Chapman
– Frodo Sorensen

Arguments

Rising regulatory pressures from 420+ digital media regulators worldwide making arbitrary decisions without understanding technical implications


Internet fragmentation risks emerge when stakeholders are excluded from governance discussions, particularly if technical community involvement is insufficient


Topics

Legal and regulatory | Infrastructure


Both speakers acknowledge the AFRINIC governance crisis as a critical example of why robust multi-stakeholder governance processes are essential and need continuous attention and improvement.

Speakers

– Paulos Nyirenda
– Joyce Chen

Arguments

Multi-stakeholder governance faces challenges in Africa, particularly with AFRINIC registry problems highlighting need for renewed focus on governance processes


Crisis situations like AFRINIC governance problems demonstrate urgent need for community-driven solutions and review of fundamental governance documents


Topics

Infrastructure | Legal and regulatory


Takeaways

Key takeaways

The internet’s technical architecture is a complex federated system requiring multi-stakeholder coordination across standards/protocols (IETF), domain names (ICANN), and IP addresses (RIRs)


The multi-stakeholder model has proven effective for internet governance by enabling engineers, companies, governments, and civil society to collaborate as peers in open, bottom-up processes


2025 represents a critical inflection point with rising geopolitical tensions, digital sovereignty proposals, and regulatory fragmentation threatening to break the internet into isolated silos


The IGF remains fit for purpose as a unique global forum where stakeholders can meet as peers, though it needs greater focus, streamlining, and adequate resourcing


Technical community expertise is essential for maintaining internet stability and preventing fragmentation, with technical organizations contributing 30% of IGF funding despite decreasing space for technical topics


WSIS+20 process poses risks of more state-centric approaches that could undermine existing multi-stakeholder governance models


Crisis situations like AFRINIC governance problems highlight the urgent need for renewed focus on fundamental governance processes and community-driven solutions


Resolutions and action items

Continue supporting IGF renewal with adequate resourcing and proper mandates


Maintain technical community financial support for IGF (currently 30% of trust fund)


Participate in ongoing review of RIR governance document led by ICANN ASO to address fundamental governance issues


Engage in WSIS+20 discussions to protect multi-stakeholder model from state-centric threats


Better coordinate interplay between WSIS framework and Global Digital Compact to avoid duplication


Strengthen multi-stakeholder representation in other UN system parts like CSTD


Unresolved issues

How to streamline IGF processes and prioritize work while maintaining flexibility


How to increase space for technical topics within IGF discussions despite their perceived ‘dry’ nature


How to resolve AFRINIC governance crisis affecting internet registry operations in Africa


How to prevent internet fragmentation amid rising geopolitical tensions and digital sovereignty proposals


How to ensure security extensions like RPKI and DNSSEC reinforce trust while preparing for post-quantum cryptography challenges


How to make IGF outcomes more accessible and useful for policymaking


How to address resource barriers that prevent under-resourced communities from engaging in multi-stakeholder processes


Suggested compromises

Keep high-level policy discussions (WSIS+20) separate from technical implementation details while ensuring community-driven processes remain independent


Build on IGF’s successful multi-stakeholder prototype to strengthen multi-stakeholderism in other UN system parts rather than replacing existing structures


Focus on translation and influence spaces where different stakeholder communities can reach consensus through open disagreement and discussion


Maintain IGF’s flexibility for addressing emerging technologies while implementing better housekeeping to remove bloat and create focus


Coordinate WSIS and Global Digital Compact efforts to avoid fragmented and duplicated initiatives while preserving their complementary strengths


Thought provoking comments

We can all be the IETF in some way… any organization any team having technical people within their organizations they can join these conversations at the IETF and there are mechanisms to attend the meetings in person online to participate mailing list… these organizations are showing how the open model of voluntary adoption of standardization can work

Speaker

Israel Rosas


Reason

This comment reframes the technical community from being seen as separate entities that solve problems for others to being inclusive spaces where all stakeholders can participate. It challenges the common perception of technical organizations as closed or exclusive.


Impact

This shifted the discussion from describing what technical organizations do to emphasizing how they operate inclusively. It established a theme of openness and participation that other panelists built upon throughout the session, particularly influencing later discussions about multi-stakeholder engagement.


We are very good at picking up things, but we don’t know how to put them down, you know, to make space for other pressing issues. We’re trying to juggle everything and we’re trying to please everyone. And to me, this is a disservice to everybody because it’s impossible to dive deeply into particular topics.

Speaker

Joyce Chen


Reason

This is a brutally honest critique of the IGF’s operational challenges that goes beyond typical diplomatic language. It identifies a fundamental structural problem – the inability to prioritize and focus – that affects the forum’s effectiveness.


Impact

This comment introduced a critical turning point in the discussion, shifting from largely positive assessments of multi-stakeholder governance to acknowledging serious structural limitations. It prompted other panelists to engage more critically with IGF reform needs and added urgency to the conversation about the forum’s future.


The internet technical community are one of the top financial contributors to the IGF… internet technical organizations actually comprise 30% of the overall IGF trust fund… However, over the years, we are seeing fewer technical topics being discussed at the IGF. The space for the technical community, I feel, is growing smaller.

Speaker

Joyce Chen


Reason

This reveals a concerning disconnect between financial contribution and representation, highlighting how the IGF may be losing focus on its core technical governance mission while becoming more generalized.


Impact

This data point significantly deepened the conversation by providing concrete evidence of the IGF’s drift from its technical roots. It added weight to concerns about the forum’s direction and influenced the discussion about whether the IGF should remain focused on internet governance versus broader digital governance.


I don’t know why I’ve seen a trend to avoid disagreement and in fact disagreement is good is positive because Different stakeholders may have different views different interests, but if we pursue the same objective… It’s through the discussion that we can reach consensus

Speaker

Israel Rosas


Reason

This challenges the common assumption that consensus-building requires avoiding conflict, instead arguing that productive disagreement is essential for robust governance. It reframes conflict as a feature, not a bug, of multi-stakeholder processes.


Impact

This comment provided a philosophical foundation for defending multi-stakeholder processes against criticism. It influenced how other panelists discussed the challenges facing internet governance, encouraging them to view current tensions as potentially productive rather than purely threatening.


For us in Africa, maybe I should talk a little bit about how important it is now to be talking about governance of the technical layer. As you know, our registry in Africa for IP addresses, AFRINIC, is having tremendous governance-related problems at the moment that have resulted in, for example, annulling board elections just a few hours ago.

Speaker

Paulos Nyirenda


Reason

This brought urgent real-world consequences into what could have been an abstract discussion, demonstrating that technical governance failures have immediate impacts on internet access and stability in entire regions.


Impact

This intervention grounded the entire discussion in concrete reality, showing that governance challenges aren’t theoretical but are actively affecting internet infrastructure. It prompted Joyce Chen to acknowledge the crisis and discuss community responses, adding urgency to the conversation about governance reform.


We shouldn’t mix those topics because the Wizards Plus 20 is a high-level process where for instance we can agree that we want a more secure more trusted internet. How? Well that’s for the community to work in specific spaces

Speaker

Israel Rosas


Reason

This articulates a crucial principle about the appropriate division of labor between high-level policy processes and technical implementation, arguing against conflating political agreements with technical specifications.


Impact

This comment provided clarity on how different governance layers should interact, helping to resolve potential confusion about the role of WSIS+20 versus technical community processes. It reinforced the theme of respecting different stakeholder roles and expertise domains.


Overall assessment

These key comments fundamentally shaped the discussion by introducing three critical tensions: the gap between IGF’s inclusive ideals and operational realities, the challenge of maintaining technical focus amid broader digital governance pressures, and the need to balance high-level policy coordination with technical community autonomy. Israel Rosas’s comments about inclusive participation and productive disagreement provided philosophical grounding for defending multi-stakeholder approaches, while Joyce Chen’s frank assessment of IGF limitations introduced necessary self-criticism. Paulos Nyirenda’s intervention about AFRINIC’s crisis brought urgent real-world stakes into the conversation, preventing it from becoming too abstract. Together, these comments created a more nuanced, honest discussion that acknowledged both the value and vulnerabilities of current internet governance arrangements, ultimately strengthening the case for reform rather than replacement of existing institutions.


Follow-up questions

How can we make IGF outcomes more accessible and useful for policymaking?

Speaker

Frodo Sorensen


Explanation

This addresses a key limitation of the IGF that has been criticized – while it’s successful as a discussion forum, there’s a need to better translate its outcomes into actionable policy guidance


How can we better connect WSIS and the Global Digital Compact to avoid duplicated and fragmented efforts?

Speaker

Frodo Sorensen


Explanation

Both initiatives have similar goals but risk creating parallel processes that could undermine effectiveness and waste resources


How can we streamline IGF processes and intersessional work to give it more focus and help prioritize its work?

Speaker

Joyce Chen


Explanation

The IGF’s flexibility is a strength but it struggles to prioritize topics and tends to accumulate issues without resolution, leading to bloat and reduced effectiveness


How do we ensure that security extensions like RPKI and DNSSEC reinforce trust and interoperability in a truly open internet, and what guard rails should we build ahead of WSIS Plus 20 outcomes?

Speaker

Nicholas (online participant)


Explanation

This addresses the technical challenge of internet hardening through security measures while maintaining openness and interoperability, particularly important given the post-quantum cryptography transition


How can the multi-stakeholder model be strengthened in other parts of the UN system beyond the IGF?

Speaker

Frodo Sorensen


Explanation

The IGF has been a successful prototype for multi-stakeholder governance in the UN system, and there’s potential to apply these lessons to other UN bodies like the CSTD


How can we address the governance crisis at AFRINIC and strengthen RIR governance globally?

Speaker

Paulos Nyirenda and Joyce Chen


Explanation

The governance problems at the African internet registry highlight vulnerabilities in the technical infrastructure governance model that need urgent community attention


How can we better track and measure the impacts of multi-stakeholder internet governance processes at the local level?

Speaker

Israel Rosas


Explanation

There’s a need for better evidence and metrics to demonstrate the effectiveness of multi-stakeholder governance, as referenced by the ICANN and Internet Society paper on IETF footprints


How can we ensure more technical topics are discussed at the IGF despite their perceived ‘dry’ nature?

Speaker

Joyce Chen


Explanation

Despite technical organizations being major funders of the IGF, there’s a concerning trend of fewer technical discussions, which undermines the forum’s core purpose of internet governance


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Open Forum #54 Advancing Lesothos Digital Transformation Policies

Open Forum #54 Advancing Lesothos Digital Transformation Policies

Session at a glance

Summary

This discussion focused on Lesotho’s digital transformation journey and the country’s efforts to advance digitalization through policies and practices. The session was part of the Internet Governance Forum 2025, featuring speakers from Lesotho’s government, parliament, and development partners. Minister Nthati Moorosi outlined Lesotho’s mission to build a connected, secure, inclusive and resilient digital society by 2030, emphasizing key areas like digital payments, digital identity, and creating opportunities from rural villages to urban centers.


Principal Secretary Kanono Ramasamule detailed the country’s digital transformation strategy, which is anchored on five pillars: enabling environment, digital government, digital infrastructure, digital population skills, and digital business. He highlighted significant progress including cabinet approval of the National Digital Transformation Strategy, completion of ICT governance frameworks, and partnerships with India’s Ministry of Electronics and ICT. The country has achieved 100% broadband coverage, though challenges remain with affordability and device access.


Member of Parliament Lekhotsa Mafethe discussed efforts to digitize Lesotho’s National Assembly, including making parliament paperless and enabling remote participation by MPs. He emphasized the importance of separating cybersecurity and computer crimes bills to better address each area’s specific needs. UNDP representative Nthabiseng Pule addressed the critical digital skills gap, particularly among women, and outlined plans to establish training centers in 40 villages with digital champions to provide community-based digital literacy programs.


The discussion concluded with calls for international partnerships and collaboration, as speakers acknowledged that Lesotho cannot achieve its digital transformation goals in isolation and actively seeks support from development partners and neighboring countries.


Keypoints

## Major Discussion Points:


– **Lesotho’s National Digital Transformation Strategy**: The country has developed a comprehensive five-pillar strategy focusing on enabling environment (policies/regulations), digital government through Digital Public Infrastructure (DPI), digital infrastructure (achieving 100% broadband coverage), digital population (skills development), and digital business to address youth unemployment and economic growth.


– **Digital Parliament Initiative**: Parliament is being digitized to become paperless, enable remote participation by MPs, provide live streaming of proceedings, and improve public access to parliamentary processes, while working on crucial legislation like cybersecurity and AI policy bills.


– **Infrastructure and Connectivity Challenges**: Despite being landlocked, Lesotho has achieved connectivity through partnerships (shareholder in submarine cables, agreements with South Africa), satellite services, and establishing internet exchange points, though affordability and device access remain significant barriers.


– **Skills Development and Digital Inclusion**: Major focus on addressing the digital skills gap, particularly among women, youth, and vulnerable populations, through village-based training programs with digital champions and partnerships with organizations like UNDP to establish training centers in 40 pilot villages.


– **Call for International Partnerships**: Strong emphasis on seeking collaboration with global partners, development organizations, and neighboring countries to accelerate digital transformation, share best practices, and secure funding for scaling initiatives.


## Overall Purpose:


The discussion aimed to showcase Lesotho’s digital transformation progress at the Internet Governance Forum 2025, share the country’s strategic approach and achievements, identify challenges and opportunities, and actively seek international partnerships and collaboration to advance their digital agenda by 2030.


## Overall Tone:


The tone was consistently positive, collaborative, and forward-looking throughout the session. Speakers demonstrated pride in their achievements while maintaining transparency about challenges. The atmosphere was professional yet welcoming, with genuine enthusiasm for partnership opportunities. The tone remained optimistic and solution-oriented from start to finish, emphasizing cooperation and shared learning rather than dwelling on obstacles.


Speakers

**Speakers from the provided list:**


– **Seletar Tselekhwa** – System Librarian at the National Institute of Lesotho, Session Moderator


– **Nthati Moorosi** – Minister of Information, Communication, Science and Technology and Innovation


– **Kanono Ramasamule** – Principal Secretary (PIA’s Principal Secretary)


– **Lekhotsa Mafethe** – Member of Parliament from Lesotho, Member of Prime Minister’s Ministries Committee in the National Assembly, Member of APNIC


– **Nthabiseng Pule** – UNDP representative, Minister of ICT Advisor (referred to as “Menter Wile” in the transcript)


– **Audience** – Multiple audience members including:


– Abdukarim – Professor of wireless telecommunications from the University of Illinois in Nigeria


– Celine Bal – IGF Secretariat


– Togo Mia – Representative from South Africa’s Internet Governance Forum multi-stakeholder committee, Civil society representative


**Additional speakers:**


– **Dr Tahleho T’seole** – Mentioned as participating online (referenced but no direct quotes in transcript)


Full session report

# Comprehensive Report: Lesotho’s Digital Transformation Journey – Internet Governance Forum 2025


## Executive Summary


This comprehensive report examines a detailed discussion on Lesotho’s digital transformation initiatives presented at the Internet Governance Forum 2025 in Norway. The session, moderated by Seletar Tselekhwa, System Librarian at the National Institute of Lesotho, brought together key government officials, parliamentary representatives, and international development partners to showcase the country’s strategic approach to building a digitally inclusive society by 2030. This marked Lesotho’s second participation in the IGF, with the session broadcast live in Lesotho.


The discussion featured Minister Nthati Moorosi outlining Lesotho’s mission to create a “connected, secure, inclusive and resilient digital society,” Principal Secretary Kanono Ramasamule detailing the comprehensive five-pillar strategy underpinning this transformation, Member of Parliament Lekhotsa Mafethe providing insights into parliamentary digitalisation efforts, and UNDP representative Nthabiseng Pule addressing critical challenges around digital skills gaps and device accessibility. Dr Tahleho T’seole was scheduled to participate online but did not contribute to the discussion.


The presentations revealed coordinated efforts across government institutions and development partners, with speakers acknowledging both significant achievements in infrastructure coverage and persistent challenges in accessibility and digital skills development.


## Strategic Framework and Vision


### National Digital Transformation Strategy


Lesotho’s digital transformation strategy represents a comprehensive approach anchored on five fundamental pillars, as articulated by Principal Secretary Kanono Ramasamule. The enabling environment pillar focuses on developing robust policies and regulatory frameworks, with significant progress including cabinet approval of the National Digital Transformation Strategy and completion of ICT governance frameworks. The data management policy is under validation, alongside AI policy development, and the Protection of Personal Data Act is under review. The African Union and GIZ are collaborating on data governance policy development.


The digital government pillar emphasises Digital Public Infrastructure (DPI) implementation, which Ramasamule positioned as crucial for facilitating the Africa Continental Free Trade Area’s digital trade protocol, stating that “the implementation of this protocol lies solely on implementation of DPI.” The government is upgrading its data center that “had remained dormant for many years” and establishing enterprise architecture and interoperability frameworks through consultant contracts.


The digital infrastructure pillar has achieved remarkable success, with Lesotho now boasting 100% broadband coverage despite its landlocked status. This was accomplished through terrestrial connections via South Africa and satellite services, with internet exchange points being established for local traffic management. The digital population pillar addresses skills development through universal service fund programmes, while the digital business pillar aims to tackle youth unemployment through digital entrepreneurship opportunities.


### Policy Development and Implementation


The policy framework development has progressed significantly, with multiple initiatives underway simultaneously. Ramasamule detailed the validation of broadband infrastructure sharing policy and ongoing work on various regulatory frameworks. The European Union is making arrangements for developing digital blueprints with Estonia, though this partnership is still being finalized.


Partnerships with international organizations have been instrumental, including collaboration with India’s Ministry of Electronics and ICT. The government has established clear timelines for implementation, with the data network upgrade contract scheduled for conclusion early the following week, demonstrating momentum in implementation efforts.


## Parliamentary Digital Transformation


### Digitalising Legislative Processes


Member of Parliament Lekhotsa Mafethe outlined ambitious plans for transforming Lesotho’s National Assembly into a fully digital institution. The objectives include making parliament paperless, enabling remote participation by MPs, providing live streaming of proceedings, and implementing digital voting systems. These initiatives aim to enhance public access to parliamentary processes whilst improving operational efficiency.


The parliamentary transformation extends beyond mere digitisation to encompass fundamental changes in how legislative work is conducted. Plans include electronic attendance systems, digital document management, and enhanced public engagement through online platforms.


### Legislative Priorities and Strategic Decisions


A significant development emerged regarding cybersecurity legislation, where Mafethe revealed that the parliamentary committee, through multi-stakeholder engagement, determined that previously bundled cybersecurity and computer crimes bills should be separated. This represents a strategic shift from the original government approach, with Mafethe arguing that “we need to unbound both bills and let each bill have its own policies and merits on its own side so that we can define with ease each own merits for public consumptions without the other overshadowing the other.”


The parliament is also working on AI policy legislation, with Mafethe offering a philosophical perspective on technology integration: “It is quite essentially important for us to realise that human intervention and AI, I believe, should coexist for generations to come since neither can operate without the other.” This thoughtful approach demonstrates sophisticated understanding of emerging technology challenges.


## Infrastructure Achievements and Accessibility Challenges


### Connectivity Success Despite Geographic Constraints


Despite being landlocked, Lesotho has successfully addressed connectivity challenges through partnerships with South Africa and satellite services. The country has achieved 100% broadband coverage, with infrastructure development including internet exchange points and expanding connectivity to government institutions. Plans are underway to extend connectivity to councils and district administrator offices during the third quarter.


The government has also established bilateral agreements (BNC) with South Africa and is developing satellite applications MOUs with other countries, demonstrating regional cooperation in addressing connectivity challenges.


### Persistent Accessibility Barriers


However, significant challenges remain in translating infrastructure coverage into meaningful access for citizens. Principal Secretary Ramasamule acknowledged that whilst broadband coverage is universal, “challenges with affordability and access to devices” persist. UNDP representative Nthabiseng Pule provided stark statistics, revealing that “less than 5% of the population has access to laptops, creating significant barriers for youth productivity and digital participation.”


This device accessibility challenge was illustrated through a poignant anecdote shared by Pule about a young person whose laptop had reached end of life, saying “even this one is borrowed. I need, definitely to be productive, I need a new laptop. But how I get it, I do not know.” This demonstrates how the digital divide affects individual potential and productivity.


## Digital Skills Development and Inclusion


### Addressing the Skills Gap


The digital skills landscape in Lesotho presents significant challenges. Pule revealed that “only 14% of population has moderate digital skills whilst the majority have very low skills, requiring massive capacity building efforts.” A significant gender digital skills gap exists, with women being less digitally skilled compared to men.


To address these challenges, multiple approaches are being implemented. The government’s strategy involves digital literacy programmes through the universal service fund, targeting teachers who will cascade training to children and parents. Additionally, arrangements have been made between the Ministry of Education and service providers to offer discounted data to students.


### Community-Based Training Initiatives


UNDP’s partnership with the government focuses on establishing training centres in 40 villages, with digital champions providing community-based digital literacy programmes. These centres will specifically target women, youth, and people with disabilities. Pule explained that this village-based approach recognises resource limitations, with funding constraints limiting the initial rollout to 40 villages instead of nationwide coverage.


The training programmes are designed to be practical and relevant to daily life, focusing on skills that can immediately improve productivity and economic opportunities. However, the challenge of reaching out-of-school youth remains, as current discount programmes primarily benefit students within the formal education system.


## Infrastructure and Connectivity Context


### Electricity Access Challenges


Supporting the digital transformation goals requires addressing basic infrastructure needs. According to Pule, current electricity access is provided through 10 mini-grids serving 10,000 households, with plans to reach 75-100% population coverage by 2030 to support broader digital transformation objectives.


### Connectivity Partnerships and Technical Details


The connectivity achievements have been supported by various partnerships and technical initiatives. Nthabiseng Pule mentioned Lesotho’s participation in the Wayok submarine cable shareholding, which contributes to the country’s connectivity options alongside terrestrial and satellite connections.


The GIGA project mapping and financing model completion with the Ministry of Education represents ongoing efforts to enhance educational connectivity, complementing the universal service fund initiatives for school connections.


## International Partnerships and Collaboration


### Strategic Development Partnerships


Throughout the discussion, speakers emphasized that Lesotho’s digital transformation success depends heavily on international partnerships. The country has established relationships with multiple international partners, including India’s Ministry of Electronics and ICT for technical cooperation, and ongoing arrangements with the European Union for policy development support.


The potential partnership with Estonia focuses on digital blueprint development, though arrangements are still being finalized. These partnerships demonstrate Lesotho’s strategic approach to leveraging global expertise whilst building local capacity.


### Regional Cooperation


Regional cooperation, particularly with South Africa, has been crucial for addressing connectivity challenges. The bilateral agreements (BNC) with South Africa include provisions for cross-border digital initiatives, with plans to test cross-border ID verification and data exchange systems.


## Implementation Timeline and Priorities


### Immediate Actions


Several concrete action items emerged with specific timelines. The government data network upgrade contract is scheduled for conclusion early the following week, demonstrating immediate implementation momentum. The digital agency (CDU office) establishment is moving forward to focus specifically on digital transformation implementation.


Expansion of connectivity to councils and district administrator offices is planned for the third quarter, with cybersecurity bills scheduled for parliamentary consideration. The national addressing system implementation timeline has been accelerated from the original 2026 schedule to begin by the end of the year or early the following year.


### Medium-Term Objectives


The GIGA mapping and financing model completion with the Ministry of Education represents a medium-term priority, alongside the testing of cross-border systems with South Africa. The establishment of 40 village digital champion training centres through the UNDP partnership represents a significant commitment to community-based capacity building.


## Key Challenges and Considerations


### Device Access and Affordability


The challenge of providing affordable device access, particularly laptops, remains a significant barrier to meaningful digital participation. With less than 5% of the population currently having access to laptops, this represents a critical constraint on digital transformation goals.


The related challenge of ensuring affordable connectivity for out-of-school youth also requires attention, as current discount programmes primarily benefit students within formal education systems.


### Scaling and Sustainability


Funding constraints limit many initiatives to pilot phases, with the digital skills training programme initially reaching only 40 villages instead of nationwide coverage. The challenge of scaling successful pilots to national programmes remains a significant concern requiring continued international support and innovative approaches.


## Legislative Approach and Policy Coordination


### Parliamentary Strategy


The parliamentary committee’s decision to separate cybersecurity and computer crimes legislation represents a thoughtful approach to policy development, focusing on ensuring each bill receives appropriate consideration without one overshadowing the other.


The parliament’s work on AI policy legislation demonstrates forward-thinking approaches to emerging technology governance, with emphasis on human-AI coexistence rather than replacement paradigms.


## Implications and Lessons


### Model for Small Landlocked Countries


Lesotho’s success in achieving universal broadband coverage despite geographic constraints provides valuable insights for other landlocked countries. The combination of strategic partnerships, terrestrial connections through neighboring countries, and satellite services demonstrates that geographic limitations can be addressed through creative approaches.


### Coordinated Multi-Stakeholder Approach


The coordination demonstrated among government, parliament, and international development partners offers insights for other countries seeking to implement comprehensive digital transformation strategies. The alignment across institutions provides a foundation for sustainable implementation.


### Balancing Ambition with Pragmatism


Lesotho’s approach of setting ambitious goals whilst implementing pragmatic solutions based on available resources demonstrates mature policy development. The willingness to start with pilot programmes and scale gradually offers lessons for other developing countries facing similar resource constraints.


## Conclusion


The discussion revealed a comprehensive and coordinated approach to digital transformation that extends beyond technology adoption to encompass fundamental changes in governance, service delivery, and citizen engagement. The presentations demonstrated alignment among government institutions and development partners, combined with honest acknowledgement of challenges and constraints.


Lesotho’s experience demonstrates that small, landlocked countries can achieve significant progress in digital infrastructure through strategic partnerships and innovative approaches. The emphasis on inclusion, particularly for women, youth, and vulnerable populations, shows commitment to ensuring digital transformation benefits all citizens.


The specific action items and timelines discussed suggest momentum behind implementation efforts, whilst the acknowledged challenges highlight areas requiring continued attention and innovative solutions. The call for continued international partnerships reflects recognition that digital transformation requires sustained commitment and support.


Overall, the discussion presented Lesotho as a country with clear strategic direction, practical implementation approaches, and strong partnerships, whilst honestly acknowledging the significant challenges that remain in translating infrastructure achievements into meaningful access and digital participation for all citizens.


Session transcript

Seletar Tselekhwa: Okay, thank you so much. My name is Seletar Tselekhwa from Lesotho. I am the System Librarian at the National Institute of Lesotho. Thank you for coming and we are going to have some speakers online and in person. As you can see, we have the Minister of ICT, Honourable Nthati, and we have the Honourable Lekhotsa Mafethe as our moderator, as the speaker on-site. And we are going to have Dr Tahleho T’seole, who is going to be online, and we are going to have PIA’s Principal Secretary, Mr Kanono Ramasamule. Basically, our session today is on Advancing Lesotho’s Digital Transformation through Policies and Practices. So what we are going to do here is just to share the progress that we have done as a country and the challenges, the opportunities, and also we want to collaborate with you in terms of the digitization and how to upgrade our understanding in the digital platforms. And also, we just want to introduce you to Lesotho’s Digital National, Digital Priorities and present the country’s policy direction through the Digital Transformation Strategy and with the recent initiatives that were done under the Lesotho ICT Ministry and showcase the implementation example that we have done. Also encourage dialogue across sectors to bring together the government academia and development partners and civil society to reflect on the progress. We also want to explore institutional roles in digital ecosystems and also want to align with the global agenda like the WSIS and also how we can work together as the IGF also. Thank you so much. Let me welcome the Minister of Information, Communication, Science and Technology and Innovation to give us the key note address. Thank you.


Nthati Moorosi: Thank you, Dr. Lizazi, the moderator for this session. It is a great honor for me to be welcoming you all to the Lesotho session of the Internet Governance of 2025. We invited you here today to share our Lesotho digital transformation journey so that you can help us reflect on it, share experiences and best practices, and form partnerships that can help us create a meaningful and lasting impact for our people, Lesotho as they are known. Our mission is clear. We want to build a connected, secure, inclusive and resilient Lesotho by 2030. Towards this mission, our national digital transformation strategy is the compass towards closing the digital divide, fostering innovation and creating a society where every citizen can fully participate in the digital age. Ultimately, the Kingdom of Lesotho will see the increased economic growth, improved public services and enhanced social inclusion. Agent areas for digital transformation are payments, digital identity, and unlock opportunities for every Lesotho, from mountainous villages to the capital classrooms and marketplaces. With your expertise, your investment, your shared commitment, we can strengthen connectivity, foster cybersecurity, foster skills, and grow an innovation ecosystem that benefits all. Let us walk forward together towards a digitally empowered Lesotho where no one is left behind. Thank you very much.


Seletar Tselekhwa: Thank you so much, Minister Ntati, and we thank you so much for the well-resourced presentation and keynote address, and now we are going to Mr. Kanonorama Sharmole, who is going to give us the Lesotho digital policies and initiatives, specifically he is going to focus on the ICT policies that we are on and what we want to achieve in the future. So, Mr. Kanonorama, the stage is yours.


Kanono Ramasamule: Thank you. Thank you very much, Ntati, and let me start by recognizing the presence of the honorable ministers and the distinguished guests that have joined this session. From the ministers, let me start by just making a few comments on the minister’s remarks. She talked about the need to drive digital public infrastructure as our approach towards digital transformation. She also mentioned the issue of e-commerce. We all know that Lesotho has a big challenge with youth unemployment, and we believe digital transformation will provide some solutions to these challenges. And you asked me to elaborate on the policies that we are putting in place, but let me just start with our digital transformation strategy. Our digital transformation strategy is anchored on five pillars, the first pillar being the enabling environment where we define the policies and the legislations and the regulations that are required to drive digital transformation. The second one is the digital government. Digital government, this is where we have actually taken the decision to drive it through DPI approach. I just walked out of another session on DPI where we are actually discussing where we are as a country and the challenges we are facing. The minister is right to say we are looking for partners to assist us on this journey. The other pillar, the third one, is the digital infrastructure. The minister mentioned that we recently licensed the satellite service provider to improve our connectivity in the country. We are proud to say we now have 100 percent broadband coverage in Lesotho, but there are still challenges regarding affordability and also access to devices. The fourth one is digital population. Now, this is where we are saying in order to make sure that we drive inclusion, we have to make sure that people are adequately skilled at all levels, from the basic level to advanced level in digital skills. The last one is the digital population, rather the digital business. This is now where we think we have the potential to actually change the lives of our people through digital. We may know that the African Union some time back endorsed what is now a mature framework for free trade in Africa, which is Africa Continental Free Trade Area. I think two years ago, the heads of state endorsed the protocol on digital trade. We view as Lesotho that the implementation of this protocol lies solely on implementation of DPI. We believe DPI can implement the digital protocol at scale and securely using DPIs. I will now go straight to my second slide after just giving this brief introduction, just to share what we have done so far and what we hope to achieve in the next three to four years. Let me try to move to the next slide. Mr Kanonu, are you OK? Thank you. Yes. OK, here it is. OK. So far, we have been fortunate that the cabinet has approved the National Digital Transformation Strategy. We are also organising the way we do ICT in the country and in particular within the government. We have completed the ICT governance framework. We have also signed the DPI pilot with MOSIC. We are at a very early stages. Just a few weeks ago, we we approved it. The technical solution will be moving to the second phase of validating data and setting up the hardware for sandboxes required. We are also on the verge of signing the MOU that was approved by the cabinet with our country ministry in India, the Minister of Electronics and ICT. We have also developed and validated a couple of policies, artificial intelligence policy, data management policy and broadband and infrastructure sharing policy. Two weeks ago, we also conducted a DPI awareness workshop. So these are the things we we have done so far. We are currently working on a number of initiatives that will help us accelerate our digital transformation. The first one, we will be upgrading the government data network very soon, expecting the contract to be concluded early next week. We have also started using our data center that had remained dormant for many years. We currently have two services running in the data center. Like I mentioned, the MOSIC pilot is in progress. We are also working on the digital literacy programs through our universal service fund where we are giving digital skills to the teachers. And we hope the teachers will teach the children, the children will teach their parents. We are also completing the contract for a consultant to help us with the enterprise architecture and interoperability framework, because in order to implement e-commerce, not only in Lesotho, but across the border, we need a data exchange platform and framework that are robust. We hope in the third quarter of this year, we’ll expand connectivity to the councils and the DA’s office. Because remember, I said we still have that challenge of accessibility, of connectivity, even though we are 100 percent. So the councils and the schools will be our platform for people who don’t have devices to be able to access Internet and ultimately the government services. We are also working on the government e-services platform where we will now start building services. We are also working with the Ministry of Education on GIGA project. It is moving ahead as planned. We’ll be doing the GIGA mapping and financing model. This will also be complemented by our efforts through the universal service fund. We hope that by the end of the financial year, we’ll have the cyber security bills approved by the parliament. Then we’ll be able to move to the development of our national cyber security strategy. We are also working with the Ministry of Home Affairs to review the current Protection of Personal Data Act. We are also working with the African Union and the GIZ to ensure that we have a very solid data governance policy. We have a number of initiatives that we want to start as early as possible. If you can see on my slide, we have the development of government digital blueprints. We are fortunate that this initiative has actually moved up because with the help of the European Union, we’ll be able to start working on this initiative, I think, in six weeks with the government of Estonia. The EU is making arrangements for our team to have the workshop in Estonia in six weeks. We are also on the infrastructure, working on the expansion of our cyber infrastructure, as the minister said. We also plan to accelerate the connectivity to the health clinics and other government offices. If you can see on the slide here towards the end, especially if you can look for DPI for cross-border, we are talking with the countries in the region, in the southern region, preferably we’ll start with South Africa to start testing cross-border ID verification and data exchange. We believe this initiative will enable the two governments to implement some of the decisions that were made during the BNC earlier this year. The minister mentioned the national addressing system. We are also moving this one up, not in 2026. We’ll probably start working on it towards the end of this year or early next year. The office of the CDU and the implementing of the digital agency, we are also concluding the contract with the consultant, because this is the agency that will now be focused on the digital transformation. At the moment, we are doing all these efforts within the ministry, which is a bit difficult for us, because as a ministry, we deal with a lot of issues. So we think this agency that is focused on digital transformation will move at a faster pace than what we are currently doing. And then we’ll be also looking at other satellite applications. We have the MOUs with other countries that we want to pursue in terms of… of Satellite Applications. As you can see, there is a lot on our plate. As the Minister said, we need a lot of partners to walk this journey with us. Thank you, Nthati Moorosi.


Seletar Tselekhwa: Thank you so much, Nthati P.S. This is really exciting to see that the government of Lesotho is doing well, and we are hoping to work with other countries and with other partners. As the IGF said, our theme is on multi-stakeholder engagement. That says we are trying to find a way to work together as a country, because we can’t work together. Once we are working in silos, we are not going to achieve more, but when we are working together, we are going to achieve more. As the Minister and the P.S. said, most of the time they were focusing on digitization in the communities, the ministerial offices also, but now we are going to focus on the digitization of the parliament. As Lesotho, we want our parliamentarians to be digitalized and be skilled in digitization. So, Mr. Lekhoza Mafet, may you come and share about the progress on the digital transformation of the parliament of Lesotho. Thank you.


Lekhotsa Mafethe: Thank you. Thank you to you, Mr. Lekhoza Mafet. My name is Lekhoza Mafet, a member of parliament from Lesotho and a member of a committee called Prime Minister’s Ministries in the National Assembly and a member of APNIC. So, today, through you, Mr. Moderator, oh, by the way, I think everybody should take note that Lesotho right now, we’re busy trying to engage as much with as possible to ensure that what we have current in policies and what we try to create, we bridge such boundaries where we now start engaging with youth as opposed to giving youth. So, now youth are quite a lot part of this. So, to the ministry, big ups to you for such an engagement. My first topic right now is on digital parliament processes. The objectives, it is to digitize Lesotho’s National Assembly, to make it paperless by placing on our papers bills put forth by members of the National Assembly through committees, bills brought from government ministries, private members’ bills and those by public participation and to give the public access to the same bills digitally. There’s abilities and possibilities to the digitization of parliament, which quite notably currently is to provide access to MPs to participate either in the physical form in the National Assembly or with any mobile suitable device from anywhere in the world. The Ministry of Communications has also committed to initiate a live streaming link for parliament proceedings to be made available through its sessions through a dedicated webpage and other social media tools like YouTube, Facebook and others that might be. Recording and archiving sessions through a digital library to provide a voting button to use remotely by MPs from any location and to develop a leg register for MPs for parliamentary sessions attendance in-house. And by the way, today’s session is a true testament of the digital transformation that the Ministry of Communications has taken, that we are being broadcast today live in our country where there’s no journalist around, but rather through a coordination of the national broadcasting entity including the IGF secretary at the UN and the government of Norway. So it’s for us, it’s actually quite a pleasure to say we’ve done the part that is noticeable today for all to see. On second note, it’s the parliament legal frameworks. As the Minister of Communications had highlighted in one of our sessions the other day and the PS has just noted that there’s a policy in place, an AI policy in place which now it would be ideal for the ministry to bring it to parliament so that we can discuss and bring about a bill to it. Then in itself it would open up a door for tech enthusiasts to start the exploration of building AI-supported technology without any form of doubt. Now, there’s one challenge that we really need to go through because now when you’re talking about digitization, we’re talking about AI, we’re talking about control measures that are supposed to be put in place more especially because now it’s now in a different landscape altogether. So which is our cyber security bill and computer crimes bill? Through multi-stakeholder engagements from both the media fraternities, human rights groups, civil society and government ministries, our committee, the Prime Minister’s Ministry Committees, has realized that we need to unbound both bills and let each bill have its own policies and merits on its own side so that we can define with ease each own merits for public consumptions without the other overshadowing the other. That will still be an initiative that the ministry and all the other stakeholders, I believe that they’re still busy or preparing to engage so that we can get at least one bill in the house to go through in this financial year. And in conclusion, to my fellow MPs present in Norway for the IGF, my fellow Basotho countrymen, Africans at large and other citizens from across the group as a whole, it is quite essentially important for us to realize that human intervention and AI, I believe, should coexist for generations to come since neither can operate without the other. It will not be an ideal situation where inter-probability is left out of the equation by both machine and man as might be a perception for many. And to you, Celine, and the UN Secretariat at large, we only see you smile. Thank you for such an opportunity of putting us on such a big stage where we’re able to put out what Lesotho has achieved through your help, UN, and other entities that has been there and a friend to our country. And to the Norwegian government, I’d like to thank you for the hospitality and the ever-shining bright skies that shine even at night. We thank you for being here.


Seletar Tselekhwa: Thank you so much, Honorable Lekhotsa Mafethe, and indeed, this is a good platform for you as a parliamentarian to show the importance of being a parliamentarian. And you just removed the perception about being a parliamentarian. And we can see that you are a well-resourced parliamentarian who we trust you that you will drive the parliament of Lesotho to a better future. Thank you so much. And now we are going to move to the road map, where we want to go. Menter is a minister of ICT, advisor, and she is one of our resource in the country in terms of digitization. She is going to share about the digital transformation strategy, where we want to go, the agenda 2030, and we know that the IGF itself, it is focusing on the SDGs and what we want to achieve in the next five years. We have just moved from the last 20 years, but now we want to say, in the next five years, where are we going? Menter is saying. Thank you.


Nthabiseng Pule: Good afternoon, everyone. My name is Menter Wile, as explained, and I’m going to just run through what the United Nations system in Lesotho is assisting the ministry’s journey on the digital transformation. I am specifically sitting within UNDP so I will focus mostly on what UNDP is doing. What is currently engaging the ministry’s team mostly is the development of digital skills, the assessment of the youth, the women and people living under vulnerable conditions. The study that we concluded found that most women are not digitally skilled compared to men so we have a gender digital skills gap that must be addressed. To do that the UNDP and the government are partnering to set up training centres in villages. We have identified 40 villages where we can pilot this approach where we place one digital champion in a village of those 40 villages and they will every single day get to work to impart skills to the community focusing in the populations of interest which is women, youth and people with disabilities. We have also noticed that we are doing work that we have not baselined. Instead of being pedantic and trying to start with the baseline we started with the work. Then we will refine and do a digital readiness assessment in the coming months because the low capacity that we are operating under demands that we be pragmatic. We may not always do things by the book but work has to be done and that is what we are doing. We had the principal secretary talk about the challenges with access to electricity. UNDP and other partners are working on addressing the issue of electricity. 10 mini grids have been built in five districts to serve 10,000 households. That has been accomplished but that just puts a dent in the access gap that we have in Lesotho as far as electricity. Currently efforts are underway to scale up the access to electricity with also partners from the EU and other agencies. The aim is to still have the 75 percent of the population at least connected to the electricity by 2030. In conversations with the Ministry of Energy we have agreed that 75 percent may be not ambitious enough and we have tentatively agreed with them that while 75 percent will not be removed from the books the target will be 100 percent on our minds. Now where the most challenge lies is with the skills. The technical skills of people who have to implement these projects need boosting. Also this needs financing. When I was talking about the training of youth and women in the villages the reason we are starting with 40 sites instead of the entire country is because that project has to be funded and the current funding confines us to that number of 40 instead of the entire population. But if we are talking about an entire population where 14 percent have what we call moderate skills and the majority have very low skills you can understand the magnitude of the problem. But we are not deterred. We are working on this. It’s our mission every single day to make it better for Basotu. So I will just conclude by emphasizing the areas where support is needed. The technical infrastructure needs to be improved. We’re talking here about access to devices. We’re talking about data centers. We’re talking about the networks themselves. We are also talking about, as I have indicated, having a digitally proficient workforce within government. If we are transforming it means every government or every public servant has to be sufficiently proficient in using digital channels to do their work, to deliver services or to access government services themselves. We are also talking about the readiness of the entire population. If government is going to be delivering services online it means the entire population must be ready to receive those services through those channels. So the call to action here is for those willing and with the means to partner with us in delivering the digital vision of Lesotho. Thank you. Thank you so much


Seletar Tselekhwa: Menta for saying and we really appreciate about the progress that is done by the country and the vision for the country in terms of digitization and we hope that the government will work together with other organizations, other countries because sometimes you also need to benchmark on what other countries are doing and also to learn from other countries. So now I think we are going to come to the end of our discussion but now we are going to allow questions and comments from the floor.


Audience: I hope you can hear me. Sorry, good afternoon to everybody. My name is Abdukarim. I’m a professor of wireless telecommunications from the University of Illinois in Nigeria. Let me start by first of all commending the panelists from the Honorable Minister to everybody that spoke on those wonderful presentations. I have about two questions. The first one is we need to benchmark in what we’re doing and we need to understand some of the things you guys are doing well so that some of us from other parts of Africa can actually emulate. I know Lesotho is a landlocked country and when you were talking about telecom infrastructure you never said anything about challenges in terms of access to telecoms, bringing in telecommunication services in a landlocked country especially when it comes to like fiber or how do you get connections to the Atlantic cable. That’s number one. The number two is on the Giga project. I know the Giga project is one of the ITU initiatives. Can you share with us the successes of the Giga project because a lot of African countries want to actually key into the Giga project but we also want to learn from those that are already into the Giga project that how is this Giga project been improving the life of ordinary people of Africa. Thank you so much.


Seletar Tselekhwa: Thank you so much for those questions. I’ll just start with the question on the connectivity. How do we get connectivity when we are landlocked?


Nthabiseng Pule: Thank you for that question. Lesotho is a shareholder in the Wayok submarine cable. When the cable came on for a few years even though we’re a shareholder we didn’t have the service lending in the country because we are landlocked and some agreements had to be established with service providers in South Africa to enable us to eventually have the service in Lesotho. Now through bilateral agreements as indicated we are able to get transit through South Africa to wherever we need to get to. But for us satellite is still an important option so that whenever there’s a problem with the terrestrial cables we can still have connectivity to the rest of the world. For that to happen smoothly we have set up an internet exchange point within the country to ensure that at least local traffic can stay local even if we have problems. So among what we consider critical internet infrastructure in Lesotho is the exchange point. For that specific reason we are landlocked and if ever we lose connectivity on the terrestrial links through the other country it is not for up to us. We must neither be relying on our neighbors to speed up falls on the other side of the border, at least we should be able to communicate locally even though satellite photographs nowadays come to their rescue. That is as far as the question of connectivity goes. It hasn’t been easy, it remains difficult, but it is doable in collaboration with neighbors.


Nthati Moorosi: Thank you. I would like to answer the question on the Giga project and to just say that through the support of UNICEF we are only starting to implement the Giga project. If you look at the map of the schools that are connected through the Giga project, there are very few because we are just at the beginning. But over and above the Giga project we also have a universal service fund which is also connecting schools. We started with 20 schools last year. We are going to be rolling out 20 schools every six months or every year depending on the budget that is available. Thank you.


Seletar Tselekhwa: Okay, thank you so much. Celine?


Audience: Thank you very much. My name is Celine Bal from the IGF Secretariat. It’s an honor, Honorable Mohozi, to have you part of this panel and also for the second time of the IGF and also through the rest of the panel and also to Mr. Hamash Amule who is online. So thank you so much for also providing an overview of the different strategies, let it be from the Ministry and also from the Parliament and the UNDP collaboration. So it’s perhaps more a comment than a question, but I really wanted to let you know that the IGF Secretariat has a vast pool of network and because you are doing this call for partnerships, I really want you to know that you can reach out to us depending on the kind of support that is required and we can also connect you with the different stakeholders that are also part of our forum since quite some years and are really interested and willing to work together with the different ministries. And also perhaps interesting for you because you were mentioning the DPI, there is a colleague here who will be organizing later this afternoon from 4.15 to 5.00 p.m. a session on DPI mapping stakeholders. So perhaps this could be also a very interesting initiative to get to know some other DPI related… Exactly. So this is basically just my comment. And one last question also perhaps to Honorable Mafete. You’ve mentioned the initiatives that are going on in the Parliament and to what extent are you actually collaborating also with other members of parliaments and parliaments around the region to see what are good practices, what can you take on from others, what are practices that did not work so well. So to what extent are you actually collaborating across borders? Thank you so much.


Seletar Tselekhwa: Thank you so much Celine and thank you for the comments and suggestions. And Mr. Mafete, just 30 seconds.


Lekhotsa Mafethe: Thank you Celine for the question put forth on the collaboration with other members of parliament and what we are achieving. I think in my case, speaking for myself quite right now, is that through APNIC, through the Internet School of Governance and through the IGF, we’ve been able to learn quite a lot and mostly in policy formulations. We were now able to get ministries in-house and we’re able to actually give them what the world is doing. So that government ministries can push further a field as compared to where they were in the past. So I think in my case is better communication, better implementation. As I said, we’re seeing changes today as opposed to last year when we were sitting on such a platform. We weren’t able to give out as much information as we were able to. But now through such initiatives, we’ve learned a lot. We’ve been able to actually give out a lot to fellow government employees who have not had an opportunity to come here.


Seletar Tselekhwa: Thank you so much, Mr. Mafete. Thank you so much. Yes, Togo?


Audience: Good afternoon. My name is Togo Mia. I’m from South Africa. And I serve on South Africa’s Internet Governance Forum as well, multi-stakeholder committee. For me, I’m also a representative of the civil society. We have a skills institution, technology skills institution in South Africa that’s predominantly working with women and girls from rural and underserved, under-connected areas. So your townships and your Cape Flats areas like this. So from my perspective, I really am curious because I know, Honorable, you mentioned the project you have with the ambassadors in rural areas and the skills and the teaching. I was hoping that you can highlight some more frameworks that you’re using and that one in being able to bridge the connectivity gap. Number one, especially where youth are concerned in providing them affordable access, affordable connectivity. And then also, again, in terms of the skills, and also whether you are able or open to collaboration around these particular activities and engagements. How can we as SDPs and SA connect with you?


Seletar Tselekhwa: Thanks so much, Togo.


Nthati Moorosi: Okay. I think, Togo, you are already raising up your hand to collaborate with us in upskilling our women. Men have said eloquently that the biggest gap is with women, gender disparity that we need to close. And just to say that we would like to learn more from you on the skills institution for women. It’s something that we can easily emulate. South Africa is our neighbor. We can learn from you easily. We would like to share more on that. The other questions I think men have been saying you want to maybe talk to. Thank you.


Nthabiseng Pule: Thank you. I’ll talk to the question of connectivity, affordability of connectivity for youth. Youth is a large spectrum. Some of them are still in school. Some are out of school. The ones still in school get discounted vouchers from the mobile network operators for mobile Internet access. These agreements are done between the Ministry of Education and the service providers. What happens is every student, when they go to buy their data, there’s some identification that they use so that it can be known that they are that category and they can get discounted airtime data. But for the rest of the youth, we are still scratching our heads to say how do we make sure that other youth, the youth out of school, have equal access to digital connectivity. And it’s not just about the data. It’s also about the devices. Youth need the devices. I’ll give you an example. Last year when we were having the local IGF organized, we had some demonstrating and through conversations with them how they managed to develop the system. And you look at the laptop, you see that it has arrived at its end of life. And you wish they could get another one. And you’re like, when are you getting a new laptop? This one is no longer fit. And they say, even this one is borrowed. I need, definitely to be productive, I need a new laptop. But how I get it, I do not know. So those are the kind of questions that we are grappling with. How do we ensure that youth who would otherwise be productive have access to devices? We can negotiate airtime for them, but how do they get the devices? It’s a big question for us. Because in our last study, 2023 household study assessing access to devices, less than 5% of the population have access to a laptop. And that includes youth. Thank you.


Seletar Tselekhwa: Okay, thank you so much. She can speak the whole day and I think you can catch her after the session. Okay. First of all, we need to thank the IGF Secretariat, everyone who is in here. You’re all champions and we are ready to work with you as the country Lesotho. Because Lesotho, we are saying, we want you to come together as a country to say, let’s come together and work together. Thank you so much for the opportunity. Thank you so much. Thank you.


N

Nthati Moorosi

Speech speed

133 words per minute

Speech length

421 words

Speech time

189 seconds

Building a connected, secure, inclusive and resilient Lesotho by 2030 with focus on digital payments, digital identity, and unlocking opportunities for all citizens

Explanation

The Minister outlined Lesotho’s mission to achieve comprehensive digital transformation by 2030, emphasizing connectivity, security, inclusion and resilience. The strategy focuses on key areas like digital payments and identity systems to ensure every citizen can participate in the digital age.


Evidence

National digital transformation strategy serves as the compass towards closing the digital divide, fostering innovation and creating a society where every citizen can fully participate in the digital age, from mountainous villages to the capital classrooms and marketplaces


Major discussion point

Lesotho’s Digital Transformation Strategy and Vision


Topics

Development | Economic | Infrastructure


Agreed with

– Kanono Ramasamule
– Seletar Tselekhwa
– Nthabiseng Pule

Agreed on

Need for partnerships and multi-stakeholder collaboration


K

Kanono Ramasamule

Speech speed

98 words per minute

Speech length

1359 words

Speech time

825 seconds

Digital transformation strategy anchored on five pillars: enabling environment, digital government, digital infrastructure, digital population skills, and digital business

Explanation

The Principal Secretary detailed Lesotho’s comprehensive approach to digital transformation through five strategic pillars. Each pillar addresses different aspects from policy frameworks to infrastructure, skills development, and business transformation.


Evidence

First pillar: enabling environment with policies, legislations and regulations; Second: digital government through DPI approach; Third: digital infrastructure; Fourth: digital population skills for inclusion; Fifth: digital business to change lives through digital means, particularly leveraging Africa Continental Free Trade Area protocol on digital trade


Major discussion point

Lesotho’s Digital Transformation Strategy and Vision


Topics

Development | Economic | Legal and regulatory


Lesotho now has 100% broadband coverage but faces challenges with affordability and access to devices

Explanation

While Lesotho has achieved complete broadband coverage through recent licensing of satellite service providers, significant barriers remain in making connectivity affordable and accessible. The infrastructure exists but practical access is limited by economic and device availability constraints.


Evidence

Recently licensed satellite service provider to improve connectivity, achieved 100 percent broadband coverage, but still challenges regarding affordability and access to devices


Major discussion point

Digital Infrastructure and Connectivity Challenges


Topics

Infrastructure | Development


Agreed with

– Nthabiseng Pule

Agreed on

Infrastructure achievements and remaining challenges


Digital literacy programs being implemented through universal service fund targeting teachers who will teach children and parents

Explanation

Lesotho is implementing a cascading approach to digital skills development through its universal service fund. The strategy involves training teachers first, who will then educate students, who in turn will teach their parents, creating a multiplier effect for digital literacy.


Evidence

Working on digital literacy programs through universal service fund where we are giving digital skills to the teachers, and we hope the teachers will teach the children, the children will teach their parents


Major discussion point

Digital Skills and Capacity Building


Topics

Development | Sociocultural


Agreed with

– Nthabiseng Pule

Agreed on

Digital skills development as critical priority


Cabinet approval of National Digital Transformation Strategy and development of ICT governance framework

Explanation

Lesotho has achieved significant policy milestones with cabinet approval of its national digital strategy and completion of ICT governance frameworks. These foundational policy documents provide the legal and organizational structure for implementing digital transformation initiatives.


Evidence

Cabinet has approved the National Digital Transformation Strategy, completed the ICT governance framework, signed DPI pilot with MOSIC, approved MOU with India’s Ministry of Electronics and ICT, developed and validated AI policy, data management policy and broadband infrastructure sharing policy


Major discussion point

Policy Framework and Implementation


Topics

Legal and regulatory | Development


Partnerships with India’s Ministry of Electronics and ICT, European Union support for digital blueprints development with Estonia

Explanation

Lesotho is actively pursuing international partnerships to accelerate its digital transformation, including formal agreements with India and EU-facilitated collaboration with Estonia. These partnerships provide technical expertise and proven digital governance models that Lesotho can adapt.


Evidence

MOU approved by cabinet with India’s Ministry of Electronics and ICT, European Union support for government digital blueprints development with Estonia, workshop planned in Estonia in six weeks


Major discussion point

International Partnerships and Collaboration


Topics

Development | Legal and regulatory


Agreed with

– Nthati Moorosi
– Seletar Tselekhwa
– Nthabiseng Pule

Agreed on

Need for partnerships and multi-stakeholder collaboration


S

Seletar Tselekhwa

Speech speed

123 words per minute

Speech length

930 words

Speech time

452 seconds

Need for multi-stakeholder engagement and partnerships to achieve digital transformation goals rather than working in silos

Explanation

The moderator emphasized that successful digital transformation requires collaborative approaches involving multiple stakeholders rather than isolated efforts. This aligns with the IGF’s theme of multi-stakeholder engagement and recognizes that comprehensive digital development cannot be achieved by single entities working alone.


Evidence

IGF theme is on multi-stakeholder engagement, trying to find a way to work together as a country, because we can’t work together when working in silos, we are not going to achieve more, but when we are working together, we are going to achieve more


Major discussion point

Lesotho’s Digital Transformation Strategy and Vision


Topics

Development | Legal and regulatory


Agreed with

– Nthati Moorosi
– Kanono Ramasamule
– Nthabiseng Pule

Agreed on

Need for partnerships and multi-stakeholder collaboration


L

Lekhotsa Mafethe

Speech speed

131 words per minute

Speech length

981 words

Speech time

448 seconds

Objectives to digitize Lesotho’s National Assembly to make it paperless and provide public access to parliamentary proceedings

Explanation

The Member of Parliament outlined comprehensive plans to transform Lesotho’s National Assembly into a fully digital institution. The initiative aims to eliminate paper-based processes and increase transparency by providing public access to parliamentary documents and proceedings through digital platforms.


Evidence

Digitize bills put forth by members through committees, bills from government ministries, private members’ bills and those by public participation, give the public access to the same bills digitally


Major discussion point

Parliamentary Digital Transformation


Topics

Legal and regulatory | Sociocultural


Plans for remote participation capabilities, live streaming, digital voting, and electronic attendance systems for MPs

Explanation

The parliament is developing advanced digital capabilities to enable remote participation and improve transparency. These technological solutions will allow MPs to participate from anywhere in the world while providing public access to parliamentary proceedings through various digital channels.


Evidence

Provide access to MPs to participate either physically or with mobile devices from anywhere in the world, live streaming link for parliament proceedings through dedicated webpage and social media tools like YouTube and Facebook, recording and archiving sessions through digital library, voting button for remote use, digital register for parliamentary attendance


Major discussion point

Parliamentary Digital Transformation


Topics

Infrastructure | Sociocultural


Need to separate cyber security and computer crimes bills to address each on its own merits without overshadowing

Explanation

Through multi-stakeholder consultations, the parliamentary committee determined that cyber security and computer crimes legislation should be handled as separate bills. This approach allows each piece of legislation to be evaluated independently and ensures that neither issue overshadows the other in public discourse and policy development.


Evidence

Through multi-stakeholder engagements from media fraternities, human rights groups, civil society and government ministries, the Prime Minister’s Ministry Committee realized the need to unbound both bills and let each bill have its own policies and merits


Major discussion point

Policy Framework and Implementation


Topics

Cybersecurity | Legal and regulatory


Disagreed with

Disagreed on

Approach to cybersecurity and computer crimes legislation


Learning from international parliamentary networks like APNIC and Internet School of Governance to improve policy formulation and implementation

Explanation

The MP highlighted how participation in international networks has enhanced the parliament’s capacity to develop better policies and provide informed guidance to government ministries. This international engagement has enabled knowledge transfer and improved the quality of legislative work in Lesotho.


Evidence

Through APNIC, Internet School of Governance and IGF, able to learn and give ministries what the world is doing, better communication and implementation, able to give out more information compared to last year


Major discussion point

Parliamentary Digital Transformation


Topics

Development | Legal and regulatory


N

Nthabiseng Pule

Speech speed

142 words per minute

Speech length

1260 words

Speech time

529 seconds

Significant gender digital skills gap exists with women being less digitally skilled compared to men

Explanation

UNDP’s assessment revealed a substantial disparity in digital skills between men and women in Lesotho, with women significantly lagging behind. This finding has prompted targeted interventions to address gender inequality in digital literacy and ensure inclusive digital transformation.


Evidence

Study concluded that most women are not digitally skilled compared to men, so we have a gender digital skills gap that must be addressed


Major discussion point

Digital Skills and Capacity Building


Topics

Development | Human rights


Agreed with

– Kanono Ramasamule

Agreed on

Digital skills development as critical priority


UNDP partnership for setting up training centers in 40 villages with digital champions to train women, youth, and people with disabilities

Explanation

UNDP and the government are implementing a village-level digital skills program targeting vulnerable populations. The initiative places one digital champion in each of 40 pilot villages to provide daily training focused on women, youth, and people with disabilities to address digital inclusion gaps.


Evidence

UNDP and government partnering to set up training centres in 40 villages, one digital champion in each village working daily to impart skills to community focusing on women, youth and people with disabilities


Major discussion point

International Partnerships and Collaboration


Topics

Development | Human rights


Agreed with

– Nthati Moorosi
– Kanono Ramasamule
– Seletar Tselekhwa

Agreed on

Need for partnerships and multi-stakeholder collaboration


As a landlocked country, Lesotho relies on partnerships with South Africa and satellite services for international connectivity, including shareholding in submarine cables

Explanation

Despite being landlocked, Lesotho has secured international connectivity through strategic partnerships and investments. The country holds shares in submarine cable infrastructure and has established bilateral agreements with South Africa for transit services, while maintaining satellite backup options.


Evidence

Lesotho is shareholder in Wayok submarine cable, bilateral agreements established with service providers in South Africa, satellite as important backup option, internet exchange point set up to keep local traffic local during connectivity problems


Major discussion point

Digital Infrastructure and Connectivity Challenges


Topics

Infrastructure | Development


Agreed with

– Kanono Ramasamule

Agreed on

Infrastructure achievements and remaining challenges


Less than 5% of the population has access to laptops, creating significant barriers for youth productivity and digital participation

Explanation

UNDP’s 2023 household study revealed extremely limited access to computing devices, with less than 5% of the population having laptop access. This severe device shortage particularly affects youth who need computers for productivity and digital participation, creating a major barrier to digital inclusion.


Evidence

2023 household study assessing access to devices found less than 5% of population have access to laptop, example of youth with end-of-life borrowed laptop unable to get replacement despite needing it for productivity


Major discussion point

Digital Infrastructure and Connectivity Challenges


Topics

Development | Infrastructure


Agreed with

– Kanono Ramasamule

Agreed on

Infrastructure achievements and remaining challenges


Only 14% of population has moderate digital skills while majority have very low skills, requiring massive capacity building efforts

Explanation

The digital skills assessment revealed that the vast majority of Lesotho’s population lacks adequate digital competencies, with only 14% having moderate skills. This finding underscores the enormous scale of capacity building required to achieve meaningful digital transformation and inclusion.


Evidence

Entire population where 14 percent have what we call moderate skills and the majority have very low skills, understanding the magnitude of the problem


Major discussion point

Digital Skills and Capacity Building


Topics

Development | Sociocultural


Agreed with

– Kanono Ramasamule

Agreed on

Digital skills development as critical priority


A

Audience

Speech speed

183 words per minute

Speech length

706 words

Speech time

230 seconds

Questions about how landlocked countries manage telecommunications infrastructure and connectivity to international networks

Explanation

An audience member from Nigeria inquired about the specific challenges and solutions for telecommunications infrastructure in landlocked countries like Lesotho. The question focused on understanding how such countries access international connectivity, particularly fiber optic connections to submarine cables.


Evidence

Question about challenges in terms of access to telecoms in landlocked country, especially fiber or connections to Atlantic cable


Major discussion point

Digital Infrastructure and Connectivity Challenges


Topics

Infrastructure | Development


Interest in collaboration frameworks for bridging connectivity gaps and providing affordable access, especially for youth

Explanation

A South African audience member representing civil society and technology skills institutions expressed interest in collaborating with Lesotho on digital inclusion initiatives. The inquiry focused on frameworks for addressing connectivity gaps and providing affordable access, particularly for youth in underserved areas.


Evidence

Representative from South Africa technology skills institution working with women and girls from rural areas, asking about frameworks for bridging connectivity gap and affordable access for youth, openness to collaboration


Major discussion point

Digital Skills and Capacity Building


Topics

Development | Human rights


IGF Secretariat offering network connections and stakeholder partnerships to support Lesotho’s digital transformation efforts

Explanation

The IGF Secretariat representative offered to leverage their extensive network to connect Lesotho with relevant stakeholders and partners. This offer responds directly to Lesotho’s call for partnerships and demonstrates the IGF’s role in facilitating multi-stakeholder collaboration for digital development.


Evidence

IGF Secretariat has vast pool of network, can connect with different stakeholders part of forum willing to work with ministries, mention of DPI mapping stakeholders session


Major discussion point

International Partnerships and Collaboration


Topics

Development | Legal and regulatory


Agreements

Agreement points

Need for partnerships and multi-stakeholder collaboration

Speakers

– Nthati Moorosi
– Kanono Ramasamule
– Seletar Tselekhwa
– Nthabiseng Pule

Arguments

Building a connected, secure, inclusive and resilient Lesotho by 2030 with focus on digital payments, digital identity, and unlocking opportunities for all citizens


Partnerships with India’s Ministry of Electronics and ICT, European Union support for digital blueprints development with Estonia


Need for multi-stakeholder engagement and partnerships to achieve digital transformation goals rather than working in silos


UNDP partnership for setting up training centers in 40 villages with digital champions to train women, youth, and people with disabilities


Summary

All speakers emphasized that Lesotho’s digital transformation cannot be achieved in isolation and requires extensive partnerships with international organizations, governments, and development partners. They consistently called for collaborative approaches rather than working in silos.


Topics

Development | Legal and regulatory


Digital skills development as critical priority

Speakers

– Kanono Ramasamule
– Nthabiseng Pule

Arguments

Digital literacy programs being implemented through universal service fund targeting teachers who will teach children and parents


Significant gender digital skills gap exists with women being less digitally skilled compared to men


Only 14% of population has moderate digital skills while majority have very low skills, requiring massive capacity building efforts


Summary

Both speakers identified digital skills development as a fundamental challenge requiring targeted interventions. They agreed on the need for comprehensive capacity building programs, particularly focusing on vulnerable populations including women and youth.


Topics

Development | Sociocultural


Infrastructure achievements and remaining challenges

Speakers

– Kanono Ramasamule
– Nthabiseng Pule

Arguments

Lesotho now has 100% broadband coverage but faces challenges with affordability and access to devices


As a landlocked country, Lesotho relies on partnerships with South Africa and satellite services for international connectivity, including shareholding in submarine cables


Less than 5% of the population has access to laptops, creating significant barriers for youth productivity and digital participation


Summary

Both speakers acknowledged Lesotho’s infrastructure progress while highlighting persistent challenges in affordability and device access. They agreed that achieving connectivity coverage is only the first step, with accessibility remaining a major barrier.


Topics

Infrastructure | Development


Similar viewpoints

All three government representatives shared a unified vision for Lesotho’s digital transformation, emphasizing comprehensive strategic approaches that require collaborative implementation across multiple sectors and stakeholders.

Speakers

– Nthati Moorosi
– Kanono Ramasamule
– Seletar Tselekhwa

Arguments

Building a connected, secure, inclusive and resilient Lesotho by 2030 with focus on digital payments, digital identity, and unlocking opportunities for all citizens


Digital transformation strategy anchored on five pillars: enabling environment, digital government, digital infrastructure, digital population skills, and digital business


Need for multi-stakeholder engagement and partnerships to achieve digital transformation goals rather than working in silos


Topics

Development | Legal and regulatory


Both speakers emphasized the importance of proper policy frameworks and legislative processes for digital transformation, highlighting the need for careful policy development and international learning to ensure effective implementation.

Speakers

– Kanono Ramasamule
– Lekhotsa Mafethe

Arguments

Cabinet approval of National Digital Transformation Strategy and development of ICT governance framework


Need to separate cyber security and computer crimes bills to address each on its own merits without overshadowing


Learning from international parliamentary networks like APNIC and Internet School of Governance to improve policy formulation and implementation


Topics

Legal and regulatory | Development


Both speakers recognized the critical importance of addressing digital skills gaps through targeted programs, particularly focusing on vulnerable populations and using cascading training approaches to maximize impact.

Speakers

– Kanono Ramasamule
– Nthabiseng Pule

Arguments

Digital literacy programs being implemented through universal service fund targeting teachers who will teach children and parents


Significant gender digital skills gap exists with women being less digitally skilled compared to men


UNDP partnership for setting up training centers in 40 villages with digital champions to train women, youth, and people with disabilities


Topics

Development | Human rights


Unexpected consensus

Parliamentary digital transformation as integral to national strategy

Speakers

– Lekhotsa Mafethe
– Kanono Ramasamule
– Seletar Tselekhwa

Arguments

Objectives to digitize Lesotho’s National Assembly to make it paperless and provide public access to parliamentary proceedings


Plans for remote participation capabilities, live streaming, digital voting, and electronic attendance systems for MPs


Cabinet approval of National Digital Transformation Strategy and development of ICT governance framework


Explanation

The comprehensive integration of parliamentary digitization into the national digital transformation strategy was unexpected, showing remarkable alignment between legislative and executive branches. The parliament’s advanced digital plans, including remote participation and live streaming, demonstrate sophisticated understanding of digital governance possibilities.


Topics

Legal and regulatory | Sociocultural


Pragmatic approach to implementation despite capacity constraints

Speakers

– Nthabiseng Pule
– Kanono Ramasamule

Arguments

Only 14% of population has moderate digital skills while majority have very low skills, requiring massive capacity building efforts


UNDP partnership for setting up training centers in 40 villages with digital champions to train women, youth, and people with disabilities


Digital literacy programs being implemented through universal service fund targeting teachers who will teach children and parents


Explanation

Despite acknowledging severe capacity constraints and limited resources, both speakers demonstrated consensus on proceeding with practical implementation rather than waiting for ideal conditions. This pragmatic approach of ‘starting work then refining’ shows mature understanding of development challenges.


Topics

Development | Sociocultural


Overall assessment

Summary

The discussion revealed strong consensus across all speakers on Lesotho’s digital transformation vision, the critical need for partnerships, the importance of addressing digital skills gaps, and the pragmatic approach to implementation despite resource constraints. There was remarkable alignment between government, parliament, and international development partners on strategic priorities and implementation approaches.


Consensus level

Very high level of consensus with no significant disagreements identified. This strong alignment suggests well-coordinated national digital transformation efforts with clear buy-in from multiple stakeholders. The implications are positive for implementation success, as the unified vision and collaborative approach provide a solid foundation for achieving the 2030 digital transformation goals. The consensus also demonstrates effective multi-stakeholder engagement in practice, which bodes well for sustainable and inclusive digital development in Lesotho.


Differences

Different viewpoints

Approach to cybersecurity and computer crimes legislation

Speakers

– Lekhotsa Mafethe

Arguments

Need to separate cyber security and computer crimes bills to address each on its own merits without overshadowing


Summary

The parliamentary committee, through multi-stakeholder engagement, determined that previously bundled cybersecurity and computer crimes bills should be separated into distinct legislation. This represents a shift from the original government approach of handling these as combined legislation.


Topics

Cybersecurity | Legal and regulatory


Unexpected differences

Overall assessment

Summary

The discussion showed remarkable consensus among speakers with minimal disagreements. The main area of disagreement was procedural, involving the parliamentary approach to cybersecurity legislation. Most differences were complementary rather than contradictory, with speakers offering different perspectives on shared challenges.


Disagreement level

Very low level of disagreement. The speakers demonstrated strong alignment on goals and strategies for digital transformation. The few differences that emerged were primarily about implementation approaches rather than fundamental disagreements about objectives. This high level of consensus suggests strong coordination among government, parliament, and international partners, which bodes well for successful implementation of Lesotho’s digital transformation strategy.


Partial agreements

Partial agreements

Similar viewpoints

All three government representatives shared a unified vision for Lesotho’s digital transformation, emphasizing comprehensive strategic approaches that require collaborative implementation across multiple sectors and stakeholders.

Speakers

– Nthati Moorosi
– Kanono Ramasamule
– Seletar Tselekhwa

Arguments

Building a connected, secure, inclusive and resilient Lesotho by 2030 with focus on digital payments, digital identity, and unlocking opportunities for all citizens


Digital transformation strategy anchored on five pillars: enabling environment, digital government, digital infrastructure, digital population skills, and digital business


Need for multi-stakeholder engagement and partnerships to achieve digital transformation goals rather than working in silos


Topics

Development | Legal and regulatory


Both speakers emphasized the importance of proper policy frameworks and legislative processes for digital transformation, highlighting the need for careful policy development and international learning to ensure effective implementation.

Speakers

– Kanono Ramasamule
– Lekhotsa Mafethe

Arguments

Cabinet approval of National Digital Transformation Strategy and development of ICT governance framework


Need to separate cyber security and computer crimes bills to address each on its own merits without overshadowing


Learning from international parliamentary networks like APNIC and Internet School of Governance to improve policy formulation and implementation


Topics

Legal and regulatory | Development


Both speakers recognized the critical importance of addressing digital skills gaps through targeted programs, particularly focusing on vulnerable populations and using cascading training approaches to maximize impact.

Speakers

– Kanono Ramasamule
– Nthabiseng Pule

Arguments

Digital literacy programs being implemented through universal service fund targeting teachers who will teach children and parents


Significant gender digital skills gap exists with women being less digitally skilled compared to men


UNDP partnership for setting up training centers in 40 villages with digital champions to train women, youth, and people with disabilities


Topics

Development | Human rights


Takeaways

Key takeaways

Lesotho has developed a comprehensive Digital Transformation Strategy with five pillars (enabling environment, digital government, digital infrastructure, digital population skills, and digital business) aimed at building a connected, secure, inclusive and resilient nation by 2030


The country has achieved 100% broadband coverage but faces significant challenges with affordability, device access (less than 5% have laptops), and digital skills gaps, particularly affecting women who are less digitally skilled than men


Multi-stakeholder collaboration is essential for success – government cannot achieve digital transformation working in silos and needs partnerships with international organizations, neighboring countries, and development partners


Parliament is undergoing its own digital transformation with plans for paperless operations, remote participation, live streaming, and digital voting systems


As a landlocked country, Lesotho has successfully addressed connectivity challenges through partnerships with South Africa, shareholding in submarine cables, and satellite services, demonstrating that geographic constraints can be overcome


Significant policy framework development is underway including AI policy, data management policy, cybersecurity legislation, and DPI (Digital Public Infrastructure) implementation with support from international partners like India and Estonia


Resolutions and action items

Government data network upgrade contract to be concluded early next week


EU-supported workshop with Estonia government on digital blueprints to begin in six weeks


Expansion of connectivity to councils and district administrator offices planned for third quarter


Cybersecurity bills to be approved by parliament by end of financial year


Digital agency establishment contract with consultant to be concluded to focus specifically on digital transformation implementation


GIGA mapping and financing model to be completed with Ministry of Education


Cross-border ID verification and data exchange testing to begin with South Africa


National addressing system implementation to start by end of year or early next year instead of waiting until 2026


40 village digital champion training centers to be established through UNDP partnership focusing on women, youth, and people with disabilities


Unresolved issues

How to provide affordable device access, particularly laptops, to youth and general population when less than 5% currently have access


How to ensure affordable connectivity for out-of-school youth (while in-school students receive discounted data vouchers)


Funding constraints limiting digital skills training to only 40 villages instead of nationwide coverage


Need for technical skills development among government implementers and public servants


Separation and individual treatment of cybersecurity and computer crimes bills which are currently bundled together


Scaling up electricity access beyond current 10 mini-grids serving 10,000 households to reach 75-100% population coverage by 2030


How to ensure entire population readiness to receive government services through digital channels


Suggested compromises

Starting digital skills training with 40 villages as a pilot instead of attempting nationwide coverage immediately due to funding constraints


Using teachers as intermediaries for digital literacy – training teachers who will teach children, who will then teach their parents


Targeting councils and schools as platforms for people without devices to access internet and government services


Setting 75% electricity access as official target while keeping 100% as aspirational goal


Beginning cross-border digital initiatives with South Africa first before expanding to other regional countries


Separating cybersecurity and computer crimes bills to allow each to be addressed on its own merits rather than one overshadowing the other


Thought provoking comments

We believe DPI can implement the digital protocol at scale and securely using DPIs… We view as Lesotho that the implementation of this protocol lies solely on implementation of DPI.

Speaker

Kanono Ramasamule


Reason

This comment is insightful because it connects Lesotho’s national digital transformation strategy to continental African integration through the Africa Continental Free Trade Area’s digital trade protocol. It demonstrates strategic thinking about how local digital infrastructure can serve broader regional economic goals.


Impact

This comment elevated the discussion from national-level digitization to regional integration, showing how Lesotho’s DPI approach could facilitate cross-border trade and cooperation. It set the stage for later discussions about cross-border partnerships and collaboration.


It is quite essentially important for us to realize that human intervention and AI, I believe, should coexist for generations to come since neither can operate without the other. It will not be an ideal situation where inter-probability is left out of the equation by both machine and man.

Speaker

Lekhotsa Mafethe


Reason

This philosophical reflection on AI-human coexistence is thought-provoking because it addresses fundamental concerns about technology displacement while advocating for complementary relationships. Coming from a parliamentarian, it shows legislative awareness of AI governance challenges.


Impact

This comment shifted the discussion from purely technical implementation to ethical and philosophical considerations of digital transformation. It introduced the concept of balanced technology adoption and influenced the conversation toward more nuanced thinking about AI integration in governance.


We have tentatively agreed with them that while 75 percent will not be removed from the books the target will be 100 percent on our minds… But if we are talking about an entire population where 14 percent have what we call moderate skills and the majority have very low skills you can understand the magnitude of the problem.

Speaker

Nthabiseng Pule


Reason

This comment is insightful because it reveals the tension between ambitious goals and practical constraints, while providing concrete data about the digital skills gap. It demonstrates honest assessment of challenges while maintaining ambitious vision.


Impact

This comment grounded the discussion in reality by providing specific statistics about digital literacy challenges. It shifted the conversation toward practical implementation challenges and the need for realistic timelines, influencing subsequent discussions about partnership needs and funding requirements.


We need to unbound both bills and let each bill have its own policies and merits on its own side so that we can define with ease each own merits for public consumptions without the other overshadowing the other.

Speaker

Lekhotsa Mafethe


Reason

This comment demonstrates sophisticated understanding of legislative strategy and stakeholder engagement. It shows how complex digital governance issues require careful policy separation to ensure proper public discourse and avoid conflating different concerns.


Impact

This comment introduced the complexity of digital governance legislation and the importance of multi-stakeholder engagement in policy development. It highlighted the parliamentary perspective on balancing cybersecurity, digital rights, and public participation in the legislative process.


And you look at the laptop, you see that it has arrived at its end of life… And they say, even this one is borrowed. I need, definitely to be productive, I need a new laptop. But how I get it, I do not know.

Speaker

Nthabiseng Pule


Reason

This vivid anecdote powerfully illustrates the device access challenge facing youth in Lesotho. It transforms abstract policy discussions into human reality, showing how digital divide affects individual productivity and potential.


Impact

This personal story shifted the discussion from policy frameworks to human impact, making the challenges more tangible and urgent. It prompted deeper consideration of the practical barriers to digital inclusion and influenced the conversation toward finding concrete solutions for device access.


Overall assessment

These key comments shaped the discussion by elevating it from a simple presentation of government initiatives to a nuanced exploration of digital transformation challenges and opportunities. The comments introduced multiple dimensions – regional integration, ethical AI governance, realistic goal-setting, legislative complexity, and human impact – that transformed what could have been a routine policy presentation into a comprehensive dialogue about sustainable digital development. The progression from technical implementation details to philosophical considerations and human stories created a more holistic understanding of Lesotho’s digital transformation journey, while the honest acknowledgment of challenges alongside ambitious goals demonstrated mature policy thinking that likely enhanced the credibility of Lesotho’s call for international partnerships.


Follow-up questions

How can other African countries learn from and emulate Lesotho’s digital transformation initiatives, particularly given their success as a landlocked country?

Speaker

Professor Abdukarim from University of Illinois in Nigeria


Explanation

This question seeks to understand best practices that can be replicated across Africa, particularly for countries facing similar geographical challenges


What are the specific successes and impacts of the Giga project on ordinary people’s lives in Africa?

Speaker

Professor Abdukarim from University of Illinois in Nigeria


Explanation

Many African countries want to participate in the Giga project but need concrete evidence of its effectiveness and real-world impact before committing resources


How can youth out of school gain affordable access to digital connectivity and devices beyond the current programs for students?

Speaker

Nthabiseng Pule (UNDP)


Explanation

While students receive discounted data vouchers, there’s no clear solution for out-of-school youth who also need digital access for productivity and opportunities


What frameworks and approaches are being used to bridge the connectivity gap, particularly for affordable access for youth?

Speaker

Togo Mia from South Africa


Explanation

Understanding specific methodologies could enable cross-border collaboration and knowledge sharing between South Africa and Lesotho


How can civil society organizations and skills institutions collaborate with Lesotho’s digital transformation initiatives?

Speaker

Togo Mia from South Africa


Explanation

There’s interest in establishing partnerships between South African institutions working with women and girls and Lesotho’s similar programs


How can the gender digital skills gap be effectively addressed through the proposed village-based training centers?

Speaker

Nthabiseng Pule (UNDP)


Explanation

The study found significant gender disparities in digital skills, requiring targeted interventions and measurement of effectiveness


How can Lesotho ensure device accessibility for youth and citizens who cannot afford laptops and other digital devices?

Speaker

Nthabiseng Pule (UNDP)


Explanation

With less than 5% of the population having access to laptops, device accessibility remains a critical barrier to digital inclusion


What specific partnerships and support mechanisms are needed to scale up digital transformation initiatives beyond current funding limitations?

Speaker

Multiple speakers (Minister Nthati Moorosi, Kanono Ramasamule, Nthabiseng Pule)


Explanation

Several speakers emphasized the need for partnerships and funding to expand programs from pilot phases to national scale


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Open Forum #19 Strengthening Information Integrity on Climate Change

Open Forum #19 Strengthening Information Integrity on Climate Change

Session at a glance

Summary

This discussion focused on information integrity and climate change, examining how disinformation undermines climate action and democratic processes. The panel was organized by the Forum on Information and Democracy in Oslo, bringing together representatives from Brazil, the UN, UNESCO, and civil society organizations to address the intersection of climate science, information ecosystems, and internet governance.


Brazil’s leadership was highlighted through their Global Initiative for Information Integrity on Climate Change, launched during their G20 presidency with UN and UNESCO partnership. The initiative includes a one million dollar pledge to a global fund and plans for a “call to action” leading up to COP30 in Belém. The UN’s Charlotte Scaddan presented the Global Principles for Information Integrity, emphasizing how climate disinformation serves dual purposes: undermining climate action and destabilizing democratic processes through polarization.


UNESCO’s Guilherme Canela stressed the need for a comprehensive ecosystem approach, moving beyond just training journalists to supporting all information producers including scientists, influencers, and advertisers while ensuring their economic sustainability and safety. Research findings from the International Panel on the Information Environment revealed that fossil fuel companies, politicians, and governments are primary sources of climate disinformation, with strategic skepticism replacing outright climate denialism.


A significant focus was placed on the role of advertising in funding disinformation through the attention economy. Harriet Kingaby from the Conscious Advertising Network explained how the opaque digital advertising ecosystem inadvertently funds climate disinformation while blocking legitimate climate content from monetization. The discussion emphasized that most research on climate disinformation comes from the Global North, creating a critical knowledge gap about information integrity challenges in developing countries.


The panel concluded with calls for multi-stakeholder collaboration, increased funding for research in the Global South, protection of environmental journalists and activists, and meaningful engagement with youth who are most affected by both climate change and information manipulation.


Keypoints

## Major Discussion Points:


– **Global Initiative on Information Integrity and Climate Change**: Brazil, UN, and UNESCO launched a collaborative initiative with a dedicated fund (Brazil pledged $1 million) to address climate disinformation globally, leading up to COP30 in Brazil. This includes a “call to action” open to all stakeholders.


– **Economic Drivers of Climate Disinformation**: The advertising industry inadvertently funds climate disinformation through opaque digital advertising systems where brands unknowingly advertise on misleading climate content. The attention economy incentivizes divisive content over quality journalism.


– **Research Gaps and Evidence Base**: There’s a significant lack of research on climate disinformation impacts, particularly in the Global South. A new IPIE report analyzing 300 studies found that fossil fuel companies, politicians, and “scientists for hire” are key sources of strategic climate skepticism replacing outright denialism.


– **Multi-stakeholder Approach**: The discussion emphasized that no single actor can solve information integrity issues alone – it requires collaboration between governments, civil society, media, tech platforms, advertisers, and international organizations, with particular attention to protecting vulnerable communities and engaging youth.


– **Systemic Solutions Beyond Content Moderation**: Rather than focusing on individual pieces of misinformation, the panelists advocated for addressing underlying systems – improving media literacy, supporting environmental journalists’ safety and sustainability, increasing platform transparency, and reforming advertising incentives.


## Overall Purpose:


The discussion aimed to present a comprehensive framework for addressing climate disinformation as both a climate action issue and a democratic governance challenge. The panelists sought to build momentum for coordinated global action leading up to COP30, emphasizing that information integrity is essential for effective climate response.


## Overall Tone:


The tone was professional and collaborative throughout, with a sense of urgency tempered by cautious optimism. Panelists acknowledged the severity and complexity of the challenges while highlighting concrete initiatives and solutions. The discussion maintained a constructive, solution-oriented approach, with speakers building on each other’s points rather than debating. The tone became more interactive and engaged during the Q&A session, with audience questions bringing in diverse global perspectives and practical concerns from the field.


Speakers

– **Camille Grenier**: Moderator from the Forum on Information and Democracy


– **Eugênio Garcia**: Director of the Department for Science, Technology and Intellectual Property at the Brazil Ministry of Foreign Affairs


– **Charlotte Scaddan**: Senior Advisor on Information Integrity at the United Nations Department of Global Communication


– **Guilherme Canela De Souza Godoi**: Director of the Division for Digital Inclusion and Policies and Digital Transformation at UNESCO


– **Harriet Kingaby**: Co-chair of the Conscious Advertising Network, representing Climate Action Against Disinformation (CAD)


– **Fredrick Ogenga**: Member of the International Panel on the Information Environment (IPIE) and member of the Scientific Panel on Information Integrity about climate sciences


– **Audience**: Multiple audience members who asked questions during the Q&A session


**Additional speakers:**


– **Pavel Antonov**: Blue Link Action Network in Bulgaria and APC board member


– **Lee Cobb-Ottoman**: Ministry of Basic Education, South Africa


– **Agenunga Robert**: Democratic Republic of Congo, works at DRC-Uganda border, member of DRC National Assembly


– **Mbadi**: UNHCR, based in Pretoria


– **Larry Maggett**: CEO of Connect Safely, former journalist with CBS News, New York Times, LA Times, and BBC


– **Mikko Salo**: Finnish NGO Faktabar representative


– **Jasmine Ku**: From Hong Kong, former national youth representative of ALCOI


Full session report

# Information Integrity and Climate Change: A Comprehensive Discussion Report


## Introduction and Context


The Forum on Information and Democracy convened a critical panel discussion in Oslo examining the intersection of information integrity and climate change. The session was moderated by Camille Grenier, who opened by highlighting the Forum’s dedicated work stream on this topic, co-led by Brazil and Armenia. The panel brought together representatives from international organisations, governments, and civil society to address how disinformation undermines both climate action and democratic processes.


Grenier emphasised the urgency of the topic, noting that environmental journalists are being murdered and that the Forum has documented systematic attacks on those investigating climate and environmental issues. The discussion featured speakers from Brazil’s Ministry of Foreign Affairs, the United Nations, UNESCO, and various advocacy organisations, alongside active participation from a diverse global audience.


The panel emerged against the backdrop of Brazil’s G20 presidency and the launch of an unprecedented Global Initiative for Information Integrity on Climate Change, marking the first time information integrity has been prioritised at the G20 level. The timing proved particularly significant as the world prepares for COP30 in Belém, Brazil, and COP17 on biodiversity that Armenia will host, creating momentum for coordinated international action on climate disinformation.


## The Global Initiative: A New Framework for International Cooperation


### Brazil’s Leadership and G20 Integration


Eugênio Garcia, representing Brazil’s Ministry of Foreign Affairs, outlined his country’s groundbreaking leadership in establishing information integrity as a G20 priority. The Global Initiative for Information Integrity on Climate Change represents a trilateral partnership between Brazil, the United Nations, and UNESCO, with Brazil demonstrating concrete commitment through a one million dollar pledge to establish a dedicated global fund.


Garcia emphasised that this initiative marks a historic first in G20 discussions, elevating information integrity from a peripheral concern to a central element of climate governance. He referenced the Maceió declaration from the G20, which formally recognised these concerns. The Brazilian approach recognises that effective climate action requires not only sound policies but also a healthy information environment that enables informed public discourse and democratic decision-making.


Garcia introduced the concept of “mutirão,” a Brazilian cultural practice of collective work where communities come together to accomplish shared goals. He explained how this concept informs Brazil’s approach to the global initiative: “We want to build a mutirão, a collective effort where everyone contributes according to their possibilities and capacities.”


The initiative’s structure includes plans for a comprehensive “call to action” leading up to COP30, which will be open to all stakeholders—governments, civil society, private sector, and international organisations—to submit concrete proposals for addressing climate disinformation. Garcia outlined specific goals for this call to action: creating a repository of best practices, identifying gaps in current approaches, and developing concrete recommendations for policy makers. This inclusive approach reflects recognition that information integrity challenges cannot be solved through governmental action alone.


### UN Global Principles for Information Integrity


Charlotte Scaddan from the UN Department of Global Communication presented the Global Principles for Information Integrity, marking the anniversary of these principles. She explained that the principles provide a comprehensive framework for understanding and addressing information challenges across five key areas: societal trust and resilience, healthy incentives, public empowerment, independent free and pluralistic media, and transparency and research.


Scaddan highlighted a crucial insight from recent research: climate disinformation serves dual purposes, simultaneously undermining climate action and destabilising democratic processes through polarisation and institutional distrust. She noted that “we are in effect guinea pigs in an information experiment in which the resilience of our societies is being put to the test,” capturing the unprecedented nature of current information challenges.


The UN’s approach emphasises that climate disinformation has evolved beyond simple denial to more sophisticated strategies of delay and confusion. Research findings indicate that in countries like the United States, mainstream media continues to be a significant source of climate misinformation, challenging traditional assumptions about information distribution channels.


Scaddan also referenced the Global Digital Compact and the WSIS+20 process, noting how information integrity fits into broader digital governance frameworks. She emphasised that “people are most influenced by trusted local voices like pastors, teachers, and community leaders,” highlighting the need for community-based approaches.


### UNESCO’s Ecosystem Approach


Guilherme Canela De Souza Godoi from UNESCO provided critical perspective on the systemic nature of information integrity challenges, drawing from lessons learned in previous climate communication initiatives. He shared his personal experience coordinating climate change and journalism initiatives 20 years ago, acknowledging that earlier efforts, whilst successful in their immediate goals, failed to anticipate the ecosystem-wide approach that disinformation actors would employ.


“When we were talking with them, or even with the scientists, the logic is the scientists want to know how to give better interviews for the journalists. The scientists were not even thinking that there was an important field of research on the issue of information integrity,” Canela reflected, highlighting how past approaches focused too narrowly on individual actors rather than understanding the broader information ecosystem.


UNESCO’s current approach recognises information as a public good requiring support for both supply-side actors (journalists, scientists, content creators) and demand-side empowerment (citizen education, media literacy). This comprehensive framework addresses not only content quality but also the economic sustainability of reliable information sources and the safety of those producing climate-related content.


Canela announced that UNESCO’s Global Fund will provide funding for research and investigative journalism, with an open call closing July 6th. He also mentioned plans to develop a global toolkit on data governance under the Broadband Commission with UNDP.


## The Economics of Climate Disinformation


### Advertising Industry’s Role and Opportunities


Harriet Kingaby from the Conscious Advertising Network provided illuminating insights into how the digital advertising ecosystem inadvertently funds climate disinformation whilst simultaneously blocking legitimate climate content from monetisation. She explained that of every advertising dollar spent, only about 41 cents actually reaches publishers, with the rest consumed by the complex advertising supply chain.


Kingaby presented specific examples of major advertisers appearing on climate disinformation content, including Money Supermarket and Get Your Guide, demonstrating how brands inadvertently fund harmful content due to the opacity of the advertising supply chain. She noted that the same platforms that demonetise legitimate climate content allow disinformation to flourish and receive advertising revenue.


“Most of this you know, I think there’s been plenty of cleverer people than me talking about that at this conference but the twist that I want you to take away is that this situation does not work for advertisers either and that actually creates opportunities for us to create powerful alliances,” Kingaby observed, reframing advertisers from adversaries to potential allies in addressing information integrity challenges.


The advertising supply chain’s opacity means that brands frequently have no visibility into where their advertisements appear or what content they inadvertently fund. This creates both a problem—where advertising revenue supports climate disinformation—and an opportunity for reform through increased transparency and accountability measures.


Kingaby emphasised that the same systems spreading climate disinformation also contribute to mental health crises among young people, creating additional urgency for addressing the underlying economic incentives that prioritise engagement over accuracy.


## Research Gaps and Evidence Challenges


### Global South Representation Crisis


Fredrick Ogenga, representing the International Panel on the Information Environment, presented sobering findings about the geographic concentration of climate disinformation research. Analysis of over 300 studies revealed that current research is concentrated in a handful of countries, with minimal representation from the Global South, Latin America, and Southeast Asia.


“Out of 300 studies reviewed, only one was from South Africa,” Ogenga noted, illustrating the massive research gap that hampers evidence-based policy development in regions most vulnerable to climate change impacts. This disparity not only limits understanding of how climate disinformation operates in different contexts but also risks imposing solutions developed in the Global North onto regions with different information ecosystems and cultural contexts.


The research findings also revealed that fossil fuel companies, politicians, and “scientists for hire” remain primary sources of climate disinformation, but their tactics have evolved from outright denial to strategic scepticism designed to delay climate action rather than prevent it entirely.


### Evolution of Disinformation Tactics


Ogenga’s research highlighted a crucial shift in climate disinformation strategies: “Strategic scepticism is actually replacing climate denialism.” This evolution means that disinformation actors no longer need to convince people that climate change is false; instead, they focus on creating confusion about solutions, timelines, and the urgency of action.


This tactical shift requires corresponding evolution in response strategies, moving beyond simply providing accurate information about climate science to addressing more sophisticated forms of delay and confusion. The challenge becomes particularly acute when disinformation is designed to exploit legitimate scientific uncertainties or policy debates.


## Audience Engagement and Key Concerns


### Youth Participation and Mental Health


The discussion emphasised the critical importance of engaging young people as equal partners rather than token participants in climate information integrity efforts. Charlotte Scaddan stressed that “young people are deeply concerned about climate change but need meaningful engagement as equal partners rather than add-ons to processes.”


Audience member Jasmine Ku from Hong Kong highlighted a concerning gap: youth representatives at regional climate conferences are not incorporating information integrity into their statements and agendas, partly because overwhelming corporate greenwashing creates the impression that information problems are already being addressed.


Harriet Kingaby drew important connections between the attention economy, climate disinformation, and mental health crises among young people. She noted that the same systems that spread climate disinformation also promote despair rather than hope about climate action, exacerbating mental health challenges for the generation most affected by climate change.


### Cultural Context and Local Perspectives


A particularly thought-provoking intervention came from Lee Cobb-Ottoman from South Africa’s Ministry of Basic Education, who observed that “in the region where I come from, climate change is viewed as a white man’s problem. It is viewed as a matter of whether it is cold or it is hot. But in actual fact, what we see is that it means loss of life, loss of property, loss of assets, and displacement.”


This observation highlighted how climate communication can sometimes fail to resonate with local contexts and experiences, emphasising the need for culturally sensitive approaches that connect global climate challenges with local realities and concerns.


### Data Security and Indigenous Rights


Audience member Agenunga Robert raised critical concerns about data security for carbon credit projects and forest monitoring systems, particularly regarding information collected from indigenous territories. The discussion highlighted tensions between the need for transparent environmental monitoring and the protection of vulnerable communities whose data might be misused if compromised.


These concerns reflect broader questions about data governance in climate action, particularly regarding who controls environmental data, how it is stored and protected, and how benefits from data-driven initiatives like carbon credits are distributed among affected communities.


### Journalism Standards and Self-Regulation


Pavel Antonov raised questions about journalism standards and self-regulation in addressing climate disinformation. The discussion touched on the challenges of maintaining journalistic independence while ensuring accuracy in climate reporting, and the role of professional standards in combating misinformation.


## Protection of Environmental Information Sources


The panel addressed escalating threats faced by environmental journalists and climate defenders, with speakers noting increasing patterns of harassment, physical attacks, and digital surveillance targeting those investigating climate and environmental issues. These threats create a chilling effect that reduces the quantity and quality of environmental reporting precisely when such information is most needed.


The discussion revealed that threats extend beyond individual journalists to include scientists, environmental activists, and community leaders who document environmental degradation or advocate for climate action. This systematic targeting of information sources represents a direct attack on the information ecosystem’s capacity to provide reliable climate information.


## Multi-Stakeholder Solutions and Systemic Approaches


### Beyond Content Moderation


The panel consistently emphasised that addressing climate disinformation requires moving beyond reactive content moderation to proactive systemic reforms. Speakers advocated for addressing underlying incentive structures, economic models, and governance frameworks that enable disinformation to flourish.


This systemic approach recognises that individual pieces of misinformation are symptoms of broader structural problems in information ecosystems. Effective solutions must therefore address root causes rather than merely treating symptoms.


### Coalition Building and Industry Engagement


The discussion highlighted opportunities for building coalitions with unexpected allies, particularly in the advertising industry where economic interests may align with information integrity goals. Speakers emphasised that sustainable solutions require broad-based coalitions that include not only traditional advocacy groups but also business actors with economic incentives for change.


Kingaby committed to engaging advertisers at industry events like Cannes to promote information integrity principles, recognising the advertising industry’s potential role as both problem and solution in climate disinformation challenges.


### Community-Based Implementation


Speakers consistently emphasised the importance of community-based approaches that work through existing trust networks rather than attempting to impose external solutions. This approach recognises that information travels most effectively through established social relationships and trusted local voices.


## Concrete Action Items and Next Steps


### Immediate Initiatives


The panel outlined several concrete action items emerging from the Global Initiative. These include launching a Call to Action for COP30 that will be open to all stakeholders to submit proposals on information integrity and climate change, with the goal of creating a repository of best practices and developing concrete policy recommendations.


The initiative plans high-level side events at COP30 in Belém to showcase information integrity initiatives and maintain momentum for coordinated international action.


### Research and Capacity Building


The initiative includes plans to expand research efforts particularly in the Global South to fill critical evidence gaps identified by Ogenga’s research. UNESCO’s funding call represents one concrete mechanism for supporting this expanded research capacity.


### Timeline and Participation


The call to action process will run through the lead-up to COP30, with multiple opportunities for stakeholder engagement. The inclusive approach invites participation from governments, civil society, private sector, and international organisations, reflecting the multi-stakeholder approach that all speakers emphasised as essential.


## Conclusion


The discussion represented a sophisticated analysis of climate disinformation as both a technical challenge and a fundamental threat to democratic governance and climate action. The panel’s emphasis on systemic solutions, multi-stakeholder approaches, and community-based implementation reflects mature understanding of information integrity challenges that goes beyond simple content moderation to address underlying structural problems.


The Global Initiative for Information Integrity on Climate Change, with its concrete funding commitments and inclusive approach to stakeholder engagement, provides a promising framework for coordinated international action. The Brazilian concept of “mutirão” – collective effort where everyone contributes according to their capacities – captures the collaborative spirit needed to address these complex challenges.


However, the discussion also highlighted significant challenges ahead, particularly regarding research gaps in the Global South, protection of environmental information sources, and the need for culturally sensitive approaches that connect global climate concerns with local realities and experiences.


The panel’s recognition of unexpected allies, particularly in the advertising industry, and its emphasis on youth engagement as equal partners rather than token participants, suggests strategic thinking that may prove crucial for building sustainable coalitions for change. As the world prepares for COP30 in Belém, the initiative provides both a framework for action and a recognition that effective climate response requires not only sound policies but also healthy information ecosystems that enable informed democratic participation in climate governance.


The call to action leading up to COP30 represents a concrete opportunity for stakeholders worldwide to contribute to this collective effort, embodying the mutirão approach that Brazil has championed in bringing information integrity to the forefront of international climate governance.


Session transcript

Camille Grenier: Hello everyone, thanks so much for joining us for this very important discussion on information integrity on climate change, a discussion that is at somehow the crossroads of different issues, information integrity of course, climate change of course, but also, and this is why we’re here today in Oslo, internet governance and how the internet has reshaped our information ecosystems and what we can do about it to preserve access to reliable information on an issue, a crucial issue, that is climate change. Information integrity and climate disinformation, you will see this through the presentation of our panelists today, is a very important topic and it really, and we’ll have some evidence from researchers and colleagues, it really delays our ability to tackle climate change and to us at the Forum on Information and Democracy, it is also a democratic issue. It is a democratic issue when climate disinformation is weaponized for political purposes and political gains. It is a democratic issue when journalists who are investigating climate change, environmental issues, are targeted, are threatened, and in the worst case are murdered. And it is a democratic issue when access to information, to facts, to knowledge is undermined and we can clearly see today that all and a lot of knowledge institutions are being targeted for doing their work. And this is why at the Forum on Information and Democracy we launched a dedicated work stream with the signatory states of the Partnership for Information and Democracy on ensuring information integrity on climate change and other environmental issues. This work stream is co-led by Brazil and Armenia as Brazil will host, as a lot of you may know, COP 30 in November and as Armenia will host COP17 on biodiversity next year. The reason we launched this work stream is because we want to ensure that the answers that will be brought to this specific challenge also respect democratic principles. Democratic principles including transparency of powers, plurality of information, access to reliable information, and something that is really dear to our heart, the political and ideological neutrality of the information and communication space and the entities that structure this information space. Because I have the privilege of being the first one to talk, I will just stress on one specific point that is related to information integrity, the fact that to have information integrity we need people and institutions that provide reliable, independent, trustworthy information. And I would like to stress the importance of environmental journalists and the media that are doing this work of investigating climate change and environmental issues. And to us, we need to ensure that they can do their job freely and safely, that we have access to their facts and to their findings through social networks and throughout the information ecosystems, and that their work is also sustainable and we’re doing a lot of work on media sustainability these days. So thanks a lot. I would like to thank again all our panellists today, we’ll have a very important presentation on different efforts that are being done notably the Global Initiative for Information Integrity on Climate Change that is led by actually three first panellists, Brazil, the UN, and UNESCO. And I would just like to start with you, Eugenio Garcia, the Director of the Department for Science, Technology and Intellectual Property at the Brazil Ministry of Foreign Affairs. If you could start by presenting Brazil’s leadership in this area. the Global Initiative and how the idea emerged and what are the challenges that the initiative aims to address and what is such a priority for Brazil. Thank you.


Eugênio Garcia: Yes, thank you. Thank you very much. Glad to be here. I think for Brazil it’s clear that the climate crisis is real, it’s urgent and not something for tomorrow. And in Brazil we have been severely affected by extreme weather events. For example, in 2023 the drought in the Amazon rainforest was possibly the worst in history. And also the flooding in the south of Brazil last year, 2024, of course was a tragedy with many people displaced. But this is to show that these extreme weather events, they are affecting the life of people in a very direct way, because it’s not something that we think of global warming as something that’s not felt by people on the ground. And we have last year the G20, the Brazilian presidency of the G20, and we thought that would be, in terms of testing the waters, to see if we could include information integrity in the program, in the agenda for the, specifically the digital economy working group. And we had four priorities for this working group. One was universal and meaningful connectivity. The second, infrastructure, digital public infrastructure. Then the third, artificial intelligence. and the governance of AI, and the fourth priority that we presented to the G20 members was precisely information integrity. We didn’t know exactly the reaction or the feedback we would get from this discussion, but in the end we reached a consensus, and it was positive because it was the first time that the G20 addressed the information integrity. And we had a ministerial meeting in the city of Maceió, it’s in the north-east of Brazil, where you find these four priorities including information integrity. For those interested, you can download the Maceió declaration, just go make a search and it’s available in English for you to read later and how this topic was addressed by G20. Then, in the meantime, we have been discussing with the United Nations Secretariat in New York, in particular the Department of Global Communications, and UNESCO in terms of joining forces, and also other stakeholders. But the idea was to launch the global initiative on information integrity and climate change during the summit of the G20 in November 2024 in Rio de Janeiro. And that’s what we did in partnership with the UN and UNESCO, and I think for not only the Brazilian government, but this is a top priority. We pledged for the global fund, I think Guilherme will explain later the details of how the global fund is structured for this global initiative, but we pledged one million dollars that is, let’s say, adding to the G20. to what we say that’s important, but also showing that our commitment is really something that we mean, something extremely important. So I think in terms of coordinating efforts and talking to other stakeholders in different international organizations or different forums, for example G77, now we have COP30 coming in Belém do Pará, it’s next November as we know, and it’s a huge responsibility. So we want to create this global movement so that we have information like there’s a growing awareness of the importance of addressing this issue in terms of how we address climate change. And in terms of bringing this topic everywhere, let’s say the Global Digital Compact also mentioned information integrity. It’s interesting to highlight this because it was adopted by the UN in 2024 and UN member states committed to work together to promote information integrity, tolerance and respect in the digital space. And strengthen international cooperation to address the challenge of mis-slash-disinformation hate speech online. And by the way, also in the elements paper of the co-facilitators for the WSIS plus 20 process, they also mentioned information integrity, highlighting that stakeholders should promote information integrity, tolerance and respect in the digital space. This is dialogue with the GDC and protecting the integrity of democratic processes, strengthening international cooperation, and also trying to mitigate the risks of information manipulation in a manner consistent with international law. with International Law. So I think this is part of a global effort with our partners and in terms of engaging with COP30, I think that’s now we are trying to focus our action to, in terms of reaching November with some concrete initiatives. So you know that the president of COP30, Ambassador André Corrêa do Lago, in his first letter he mentioned the idea of a call to action, that we, it’s a mutirão. Mutirão is a Portuguese word from the indigenous communities in Brazil. The idea of having a collective endeavor, because in the village everybody would help each other to, in the spirit of coming together to deliver results. So the village would, each one would bring something, sometimes bring tools or materials or even skills to reach this collective endeavor, that’s the idea for mutirão. So we are now planning for the global initiative on information integrity on climate change, call to action. Of course other, I think my colleagues will also address some of the initiatives that are ongoing, for example we have a meeting in Bonn, I think Charlotte will talk about this, trying to integrate the global initiative and the action for climate empowerment. But this call to action that we will be launching soon is trying to integrate this agenda. with the COP30 and the goals, I think it would be useful to highlight some of them because these are joint efforts to contribute to this, what we hope this is a global movement with very concrete actions to promote information integrity, such as gathering and sharing data, rigorous research and evidence and knowledge on risks to climate information integrity including disinformation and impacts on climate action in line with the GDC. Also sharing practical tools, methods and materials to strengthen resilience against disinformation and promote information integrity on climate change. Developing communications campaigns, strategies and efforts designed to raise public awareness and foster a global culture of information integrity, including through trusted voices and influencers. These are the goals of this call to action. Fostering media sustainability, including its economic viability to cover environmental and climate change related issues. Supporting the protection of environmental journalists, activists, communicators and scientists. Protecting scientific data and data sets related to climate change. Promoting transparency and accountability in digital advertising to help address financial incentives for climate disinformation and foster climate information integrity. Fostering target media information in digital literacy related to climate change. And also donating financial resources to UNESCO’s Global Fund to help gather evidence and strategic communications including in professional journalism. To conclude, I think this call to action, that is the next step in the global initiative related to COP30, will contribute to including information integrity on climate change in the COP30 process by uniting efforts across borders and sectors and representing a pivotal step towards a global movement, as I said, for promoting information integrity on climate change.


Camille Grenier: So I will stop here, but I think that’s the idea, not what we are trying to achieve. And this call to action will be open to all stakeholders? Open to all stakeholders, so we will soon launch this and have the details and we will have a period where we will be assessing the contributions that we expect to receive and we are optimistic that we have received many proposals. So that’s the first call to action on the road to COP30. And I think that one thing that I find really remarkable with the work that Brazil is doing is bridging communities together, different communities. And the community working in journalism, on internet governance, and now on climate change. And I think bringing this community to our topics of information integrity is also very, very valuable. And I’m sure that the UN is also quite well-placed to do this kind of work. So Charlotte Skadden is Senior Advisor. If you could put on the slides, please, for Charlotte, who is a Senior Advisor on Information Integrity at the United Nations Department of Global Communication. And before you start, happy birthday, Charlotte. Because today is… It’s not my birthday. I mean, it’s not Charlotte’s birthday, sorry. It’s the first anniversary of the Global Principles on Information Integrity that were published exactly one year ago. I thought you were going to tell her age. No, I would not. Sorry. So, Charlotte, can you tell us a little bit more about the UN role in strengthening climate information integrity globally with a focus on policy, public diplomacy and communications, of course, in court in the UN Global Principles for Information Integrity?


Charlotte Scaddan: I will. And thank you, Camille, for mentioning the anniversary, which of course I’m going to touch on. And frankly, to me, it’s actually more important than my own birthday with the amount of effort that we put into developing the global principles. So I’ll just start by giving some context. As you all know, the UN has, I’m sure you’re very familiar with our work on climate change. And it’s a huge priority for us, climate action. But more recently, information integrity has also become a major priority. And of course, there’s a range of initiatives going on around the UN related to information integrity. But I wanted to touch on two today. And one, of course, is the Global Principles for Information Integrity that, as you mentioned, were launched a year ago today by the Secretary General, Antonio Guterres. And the principles, for those of you who aren’t familiar with them, are, I think, a groundbreaking framework for action for a safer, more inclusive information ecosystem. And they put forward, we put forward five of these principles, recommendations for different stakeholders around five principles. And you see them on the screen here. They are societal trust and resilience, healthy incentives, public empowerment, independent, free and pluralistic media, and last but not least, transparency and research. So the principles frame information integrity as an information ecosystem where freedom of expression is fully enjoyed and where accurate, reliable information, free from discrimination and hate, is available to all in an open, inclusive, safe, secure information environment. And this entails a pluralistic information ecosystem. , the global information space that fosters trust, knowledge, and individual choice for all. So, actually, in the past year since we launched, the response has been really overwhelming. We’ve seen government, civil society, media, businesses, and other really harness these principles through activities and efforts around the world. So we go into work on information integrity and to implement the global principles with our eyes wide open. Because the challenge before us is formidable. And the threat landscape is vast. Risks include mis and disinformation. I think everyone is generally familiar with those. As well as hate speech and harassment. But we also see other risks that are more structural or political. So suppression of independent news media. And suppression of academic and civil society work and voices. Denying of access to information. The defunding or removal of public sources of information. And, of course, top of mind now. The threat landscape is vast. And the threat landscape is vast. And the threat landscape is vast. And, of course, top of mind now are risks related to emerging tech. Emerging technologies. Including Gen AI. Because we see tech companies continue to integrate AI into our everyday applications at breakneck speed. They are not slowing down, as we know. People are now increasingly reliant on this tech to shape their understanding of the world and everything that’s happening in it. But while Gen AI tools are proliferating in the public domain, they can’t uniformly be used in the public domain. And that’s a challenge. Because while Gen AI tools are proliferating in the public domain, they can’t uniformly be relied on for accurate information. And we see ongoing tests and studies that show that, you know, these tools frequently do not distinguish between rigorous science on the one hand and dirty data or outright nonsense on the other. And yet, people are accessing this flawed data, but they’re not equipped to assess its veracity or reliability, which can contribute to one of the most devastating consequences of AI. And that’s a challenge. So, this is a major leadership leap in AI and the value that we see from it. But people are now increasingly relying on Gen AI tools and testing them. And yet, we see ongoing tests and studies that show that, you know, these tools frequently do not distinguish between rigorous science on the one hand and dirty data or outright nonsense on the other. And yet, people are accessing this flawed data, but they’re not equipped to assess its veracity or reliability, which can contribute to what we see as a deeply concerning trend, that is, the lack of trust in any information source and in the information ecosystem more broadly. People just don’t know what they’re doing. what’s real, what to believe. We are in effect guinea pigs in an information experiment in which the resilience of our societies is being put to the test. So in short, the spectrum of risks is broad and it goes way beyond just the false information itself. The issues impacted are also wide ranging and it really touches on all areas of the UN’s work and what we need for a sustainable future and functioning democracies. When it comes to climate, the motivations and impacts are twofold. Purveyors of climate disinformation and hate obviously seek to undermine climate action, right? And we’ve seen the fossil fuel industry and others including state actors pour billions into this over decades. But we also see climate change used as a wedge issue to polarize, to disrupt, to destabilize democratic processes, particularly around elections and we always will see a spike around pivotal societal moments. We know enough to be able to make these conclusions but the evidence base ranges and while there’s some strong research from major academic institutions and civil society organizations, including IPIE as we’ll hear shortly, much of this research is concentrated in a handful of countries where the support and funding have been focused up to now. And as many of us here know really well, this support has been under attack and politicized, especially in recent months with researchers, civil society and others targeted. So that’s why our focus on research is really key. From our own limited research efforts, we’ve identified a range of tactics used in attempts to undermine climate-related information. The narratives used as part of these tactics, which you see a list here of a sample of them and I’ll just mention a few. They range from there being no scientific consensus around climate to climate change is a manufactured political tool and a scapegoat for domestic policy issues and failures to globalist elites. using climate issues as a means of justifying totalitarian policies and even to control the weather. And, you know, sometimes when I say that last one about controlling the weather, I get kind of sniggers, but in fact, you know, this is actually coming from leading political figures. And these claims are used to kind of steadily erode trust in academic scientific institutions, in the UN, in COP and the COP process, and also to isolate people to certain information sources, which are often very localized. And what we’ve seen is that underlying a lot of these tactics is what we call the us versus them, the constructed enemy, adversarial stance. It’s painting those who support climate action as elites serving only their own interests. And these behaviors are not in the fringe. They’re in the mainstream of information spaces, and they’re being used by influential figures, both state and non-state actors. So what can we do? Well, as laid out in the global principles, first and foremost, we need multi-stakeholder action. It’s a very UN term, I think, but it’s really a valid one. Obviously, and this brings me to the next example, you know, a really leading example of this is our global initiative that we just heard about, and it’s really a major priority for us at the UN as we approach COP. Our response has to be multifaceted and include prevention and mitigation measures across the information ecosystem. This includes strategic communications and advocacy, of course, political engagement, human rights-based policy, and community engagement. We need to recalibrate our previous thinking about the information ecosystem and the information landscape so that we better understand people’s relationship to information sources today, which is often playing out in very niche spaces and at the community level. What many institutions have long thought, and I include the UN in that, have long thought as mainstream media is no longer mainstream. The mainstream has shifted, and that’s true across many geographies. we need to immediately identify and fill information voids because if we don’t fill them, the disinformation actors will, and they’ll do it quickly and without hesitation. And we need to think longer-term about building trust and how we can keep attention by carefully considering our tone and language around climate issues and going about gauging communities with humility and respect so that we avoid reinforcing that us-versus-them trap. When it comes to structural obstacles, what has become really crystal clear to us is that we need to expand and engage our circle of stakeholders. And that includes the advertisers who fund the digital ecosystem and as such have unique power. They can act quickly and effectively to mitigate harms and influence digital platforms in ways that we cannot. And that’s why just a few days ago, I was with colleagues from organizations represented here on stage at CanLion, which is the biggest and most important annual gathering of the ad industry. And we took our message of information integrity to that key audience. And I’m really happy that Harriet is here to explain the advertising angle in more detail. It’s somewhere where it’s an angle that we’re really going to be focusing on a lot in coming months. So I’ll just end by saying, you know, we don’t have time to spare. I mean, the urgency and the scale of this challenge require active coalitions and collaborations so that we can increase global resilience. And we need to find those entry points for action before it’s too late. So I’ll leave you with that. Thanks.


Camille Grenier: Thank you. Thank you so much Charlotte. I think one thing I take from the presentation is really to have this sort of broad approach gathering all the stakeholders, including advertisers. I’m sure we’ll get back to that in a moment. Guilherme from UNESCO. Guilherme Canela is the Director of the Division for Digital Inclusion and Policies and Digital Transformation. You’ve been also a leader in that space and also working on… who has done a lot on freedom and safety of journalists. We’re really glad to have you on stage and to have you talk a little bit complementing what has already been said on the Global Initiative, maybe more specifically around UNESCO’s approach on this and about the fund that has been mentioned already, and also the work that UNESCO is doing with interconnected issues, notably on safety of journalists, for example.


Guilherme Canela De Souza Godoi: Thank you, Camille. It’s a pleasure to be here with all these fantastic colleagues in the panel. Let me start by telling you a story. Twenty years ago, when I was not an international bureaucrat and I was doing intelligent things in my life, I coordinated an initiative about climate change and journalism in the media. And it’s interesting to look 20 years back, this was for Latin America, on what were our mistakes on that time. Because we didn’t notice, and this was a huge mistake, that we couldn’t face the issue looking into just one of the actors. So we lose what the bad actors already knew at that time, the ecosystem logic of this. So the initiative, that one that I did 20 years ago, was very successful in its goals. Our initiative was to train journalists to speak, to cover better climate change. But when we were talking with them, or even with the scientists, the logic is the scientists want to know how to give better interviews for the journalists. The scientists were not even thinking that there was an important field of research on the issue of information integrity, to use an anachronistic word, because we didn’t have that expression at that time. So what we were betting our horses is that if we train the information producers at the time, the journalists, we will solve the problem. And this was a huge mistake of our part because we were not prepared enough to think the rest of the ecosystem. So the initiative here and what Charlotte was describing in terms of the information integrity concept is very much related to this broader idea that information is a public good. And then in UNESCO we simplified that with three pillars, right? If we consider information as a public good we need to empower the citizens to interact with the ecosystem, right? It’s education, it’s media and information literacy, so on and so forth. But this is a necessary condition but it’s not a necessary and sufficient condition because it’s unfair to say to my uncle or to my grandmother, well don’t circulate this thing you received in the whatsapp or whatever, knowing that on the other side of the fence you have trillion dollar companies, either fossil fuel companies or companies that are relying a lot on this attention economy. So it’s unfair to put only on the shoulders of the society to solve the problem, but it’s necessary. So this is qualifying the demand. The other pillar is that we need to qualify the supply. So 20 years ago when I was doing those intelligent things we thought that qualifying of the supply was to support the journalists, but that was wrong. We need to support the scientists, we need to support the influencers, we need to support the advertisers, all those that are sending inputs to the system and to guarantee that this support is enough for to do this in a reliable and accurate manner, but also they need to survive. Advertisers are doing quite well but the journalists are not, or the scientists. So we need also to deal with the economic issue, but also with the safety issue. 20 years ago the journalists were telling us, well there is an issue. and the other groups, we have a lot of people who are saying we have an issue here and there, but no one was saying I’m being attacked because I’m covering climate change. Now we hear that all the time. And it’s not only attacked online, which is already a big issue. They are being attacked physically. They are being attacked with massive slaps everywhere. So there is also a complexity here in protecting the supply side of the story. So we have to look at the other side of the story. And the other side of the story is what Charlotte was explaining. How we deal with the transmission chain, right? With the ecosystem that includes the social media companies, the governance, now the AI and whatever. So it’s not one thing or the other. It’s one thing and the other. And that’s why it’s so complex. So the initiative wants to look into this, recognizing this complexity, but telling, look, there is one particular element that is missing, right? And that is the lack of governance mechanisms. And particularly in the global south, as I guess you are going to speak about your recent findings, that’s, yes, we have anecdotical evidence that there is disinformation out there. Lots of people are debunking this disinformation. But we don’t really know what’s going on behind the scenes, right? Who is funding this disinformation? What are the systems of distribution? What are the conflicts of interest that are there? And the fund that we launched, there is an open call for those interests, Global Initiative on Information Integrity, open call, you will find it, you can apply until July the 6th, closing the propaganda element. The idea is precisely how we can collect more evidence to support our work, our work in the strategic communications in the U.N. and UNESCO and others, the work of foreign information and democracy and non-devocacy, and the work of governments, on governance, and so on and so forth. So that’s the idea. And basically, we are going to fund research and investigative journalism in these areas. So, to conclude a bit and coming back to my initial story of 20 years ago, when I was training those many journalists in Latin America and discussing climate change, for not a second in any of those hundreds of trainings I did with my team, we were including in the conversation of the journalists the need for them, and today for others, to understand information integrity as part of the problem. They were only looking into the climate change component. And this, it’s not working. We need to look into this connection between those two words. And for me, this is a bit of the beauty of this, and also it’s already a positive message. Because not only us, when I say Brazil, UN and UNESCO, there is, and as Eugênio mentioned, the elements paper of the WSIS is including information integrity, the Global Digital Compact did. So, in that sense, I think there is room for optimism, because there is a concern, and this concern is raising different elements across the board. Thank you.


Camille Grenier: Thank you so much, Guilherme. So, supply, demand and distribution. And as we’ve seen on and on, more access, better access to data, to have a better understanding of what’s actually happening out there, and in these really opaque information ecosystems. Let me now turn to Harriet, and I think we have some slides again, if we can have them. Harriet King-Gabi, co-chair of the Conscious Advertising Network, and also representing today CAD, which is the Climate Action Against Disinformation. And as already mentioned, with you will, and it’s a very interesting presentation, take a deep dive into ad-funded risks to climate information integrity.


Harriet Kingaby: Thank you so much and thank you for the warm welcome today. So for those of you that don’t know, the Conscious Advertising Network is a broad coalition of over 190 brands, advertising agencies and civil society groups. We exist to essentially ensure effective advertising works for everybody. And happily I can say that we are a very practical application of the multi-stakeholder approach that I’ve heard a lot about at this conference. What we essentially do is we know that advertising is causing human rights issues and we know that civil society has the deep knowledge there. We know that advertisers understand the advertising ecosystem extremely well and so we bring those groups together to try and look for solutions. Essentially, as Charlotte said, we’ve just come back from the Cannes Festival of Advertising and although the language that is used is very different, although the way the issues are presented is very different, I can assure you that the issues of information integrity were discussed there incredibly passionately and almost as passionately as I’ve seen them discussed here. I have to say, business is framing them in terms of the business case and I want to unpack that a little bit for you today. But I want to just start out by addressing the elephant in the room, which is that obviously advertisers are on one side of things part of the problem. This is a quote from the IPCC report, Climate Change Impacts Adaption and Vulnerability, that talks about the vested political, organised and financed misinformation and contrarian climate change communication, which is undermining information integrity around climate change. Yes, the advertising industry itself is producing some of that disinformation. Yes, it’s working with clients such as fossil fuel clients that are part of the problem. However, I want to just tell you that this is only a part. of the way that advertising interacts with information integrity and I hope that I can convince you that advertisers can also be a part of the solution today. So this is a hideous graphic but I think it illustrates the attention economy quite well. Advertising is essentially the funding model behind the attention economy and therefore the reason that addiction is designed into the system. Advertising funds the media, it funds the platforms, it funds our more traditional media ecosystem and online the longer we can be kept scrolling, the longer our attention can be kept, the more ads can be served to us and therefore the more profit the platforms can make. And what this has done is this has completely changed the incentive structures behind the production and distribution of content. So quality used to be pretty high up on the agenda, informing citizens, entertaining citizens but in fact now the emphasis on content production is about keeping us hooked, sucking us in and keeping us hooked essentially. And this is creating unhealthy incentives for the production of content that has devastating consequences for information integrity around really important issues such as climate change. Most of this you know, I think there’s been plenty of cleverer people than me talking about that at this conference but the twist that I want you to take away is that this situation does not work for advertisers either and that actually creates opportunities for us to create powerful alliances that can really, really take on some of this system. So let me explain if I can get the clicker to work. Essentially this is too small for you to read maybe but what you need to know is that the global advertising market is enormous. We’re talking one trillion. and Gigi Hadid, thank you so much for joining us. Advertising has grown by over $1 billion as of this year. And it’s growing. And in particular, the digital component of this system is growing. Now, what this means is the problems that we’re talking about today are being amplified. And accelerated. And therefore, they are becoming really business critical for advertisers to understand and to tackle. Much as democracy relies on a sense of shared reality, on trust, advertising also relies on trust. So, where you have a fall in trust in our information ecosystems, advertising also starts to become less effective. So, what this means is businesses are paying more for less return on their advertising spend. And falling trust in information, essentially, is a shared problem that we’re both looking at. Now, at the heart of this problem is the fact that the advertisers are paying more. And they’re paying less. And they’re paying less. And they’re paying less. And they’re paying less. And they’re paying less. And the other part of this problem is the fact that the advertising ecosystem is so incredibly opaque. If you think about how we consume media now, you know, my journey through the media ecosystem will be completely different to Guillermo’s today. It will be different to Charlotte’s. It will be different to all of yours. And the technology required to track me around my personalized media journey and to serve me ads is enormous. There are many actors in this system, many technology companies that do things like this. And they’re all doing it. And they’re all doing it. And they’re all doing it. And there are many actors in this system, many technology companies that do things like collect my data, process that data, work out what ads to serve me, kind of do online auctions to make sure I see the ads that I’m supposed to see. And unlike other corporate supply chains, unlike the supply chain to make this shirt, for example, advertisers have no idea where their advertising is going and what it is funding. There’s no know your customer law. There’s no mapping of those supply chains. And so, what happens is people take advantage. And so, what happens is people take advantage. And so, what happens is people take advantage. And so, what happens is people take advantage. And so, what happens is people take advantage. And so, what happens is people take advantage. And so, what happens is people take advantage. And so, what happens is people take advantage. And so, what happens is people take advantage. And so, what happens is people take advantage. the platforms are taking large amounts of money. And this is coming at the cost of publishers. So we’ve heard about the fall in advertising revenue to publishers, and the impact that’s having on the news system. We heard about news deserts this morning. And there was a report that found that out of every dollar that an advertiser spends and puts into this system, only $0.41 of it actually reaches the publishers. And that used to be much, much more. So the rest of it is swallowed up by organizations like Google, and frankly, organizations we’ve never heard of. And all of this compounds the problem. If you’ve got less reporting on important topics, you’re degrading information integrity, you’ve got less trust. So this is fueling the production and distribution of climate disinformation. And it’s tempting to think that businesses can be opposed to climate action. And sometimes this is true. But there are vast categories of businesses that need us to solve the climate crisis in order to survive. Think about the businesses that make coffee and chocolate or wine. Think about the insurance industry, for example. And essentially, advertisers alone don’t have the tools to solve this problem. They need to work with us, everyone here, in order to make that happen. So I want to show you what I mean, the issues that this system is causing. So writing on this is small for those of you who can’t see it. The advertiser, it’s a screen grab from YouTube. So the advertiser here is called Money Supermarket. They help UK consumers to get quotes from their insurance companies. So they work a lot with insurance companies. And they are advertising on disinformation that suggests that Hurricane Hélène, which went through the US, is somehow linked to the US government controlling the weather. Now, as Charlotte mentioned. This kind of disinformation creates distrust in the institutions that are supposed to help us when extreme weather events happen. It’s also pretty bad for the insurance industry because it slows efforts to sort out the problem. Here is YouTube advertising on its own channels against content spreading disinformation about the clean-up efforts from the Valencian floods that killed hundreds of people. And again, the clean-up efforts were undermined by climate disinformation. This is white percentage points of the Spanish GDP. This situation is not good for business, it is not good for us, but it is not good for them either. And here is Get Your Guide on Oeste, a Brazilian channel with nearly 3 million followers, promoting a narrative which was used during the 2023 Rio Grande disaster as a way to discredit government clean-up efforts. Again, this slowed efforts to help. And, you know, this is still earning money two years later. The reason that I draw your attention to this is this is not content that is organic, this is content that is earning money for its creators. So I’ve singled out YouTube here, but I can find you examples on TikTok and other channels of this, and I can find you examples of creators earning money from this. Nobody wants this system to continue. Not us, not you, not advertisers. Because essentially, not only are they wasting their money, but they’re also putting their brands at reputational risk. So, together, we need to address this. Not one single actor has the power to sort this out. But we do need to start moving conversations on with the platforms away from content, away from individual pieces of content and whack-a-mole, to talking about systems change and the business model. So, UN Global Principles and Information Integrity are a fantastic way of doing that, of looking at that through this systemic lens. So, I’m going to wrap up, I promise. Final slide. The things I want to talk about today are about essentially taking on the information economy. In order to engage advertisers in this situation, we need to be talking about the system which is drawing people in in order to serve them more ads, and therefore prioritizing content which is divisive or untrue. Next, we need to drive transparency through this incredibly opaque system at scale. And I’ll talk more later on about a case study we have of how that can lead to great business outcomes as well as outcomes for society. And also, once they get a handle on their supply chain, they can start to think about investing in pluralistic media and news. So, to summarize, this is good for business and it’s good for us. So, working together in these multi-stakeholder approaches can only bring us better solutions.


Camille Grenier: Thank you so much for this deep dive into the economics of disinformation, much to do indeed with all the different related stakeholders. You mentioned that democracy, we need a sense shared of reality. And that makes a nice transition to our next guest, Frederic Oguenga. Frederic, you are a member of the IPIE, the International Panel on the Information Environment. And you were a member of the Scientific Panel on Information Integrity about climate sciences. You worked for several months, I think nine months, to get to the report that was published last week. And we would like to have this deep dive on what does research tell us about climate disinformation? Because I think that we also need this. This is a global assessment, if we are to come up with appropriate answers. Frederic, the floor is yours. And I think we have to say it again. There you are. Yeah, so, interesting. Thank you so much for the organizers to, first of all, welcome us in Oslo. The IPIE is an entity that looks at the integrity of information environment across the board.


Fredrick Ogenga: So, climate information environment is also one of those things that we look at. And so, what I’m going to talk about today is a report that we compiled over a period of 12 months, looking at about 300 articles and using qualitative and computational methods, as well as quantitative methods to arrive at, which includes also data visualization. But I won’t really go dive into that for the case of time. I want to tell you about really what we found out. We used the linear model of communication by Harold Laswell, which really talks about, because what I’m gathering here in the panel today is there is a crisis in the information environment on climate change. There is clearly a crisis, and that crisis is what made us wonder. And as panelists, which was led by Jensen, Klaus Jensen from the University of Copenhagen, we realized that if you use the communication model, then you realize that there are people who produce those messages, the source, you know, senders of the message, the initiators of the message, and then the channel and the messaging, and then the consequences of their messaging, and then, of course, after that, what then can we do about that kind of messaging? So if you look at that… We are talking about who the actors were, the messages and the channels that they used, their key audiences, and what the consequences of their messaging was, and what solutions we ought to have in that process. So 300 articles were looked at, and then after looking at those articles, we basically inspected the gaps and came up with recommendations. So the first culprits in terms of sources or actors in the datafication of the information environment on climate change is actually fossil fuel companies, corporations, we have politicians, we do have governments, and also some states. We also have legacy and social media, and scientific hands for hire, these are people who are hired to write something about climate change in favor of a particular position. And for example, if I just take a quick example, in countries like the US and North America, we found out that the mainstream media is still led in terms of spreading false information about climate, and this kind of information varied based on the type of media. If it was a conservative media or right-leaning media, you’d see the kind of messaging that come about in those kinds of platforms, and their messages were actually those that disputed the science about climate change, and this is where we find people like the US president, which we saw in the previous slide. So there we also realize that not much is known about the impact of what social media does. in terms of the messaging in social media and how social media therefore impacts on audiences in ways that can be measured and analysed to come up with countermeasures. And so that is one of the gaps that we saw. So this also has been touched around here because we do have a level where there’s no transparency in digital platforms, especially on those who, the people who are owners, the owners of the platforms and those who produce data in those platforms. And so eventually, the messaging, overall messaging that emerged is that strategic scepticism is actually replacing climate denialism. So people are strategically frustrating or obscuring climate science in order to delay the climate response, to cut into the chains of those in the policy line of coming up with measures that are supposed to address climate change and effectively derailing climate interventions. And again, we saw that classic media, of course, is leading in that arena. And I talked about how really it’s still not clear on how social media is impacting directly on audiences. So if this is the scenario, then the kind of messages that we see, the message of obscurity, contrarianism and the climatist cataclysm is something that we see impacting on the information environment on climate change. And these are things that we need to address and deal with. So the key audiences that are supposed to be in the leading front, in the front lines. of climate interventions, who are usually actually policy makers, are derailed and they are targeted. They are targeted by misinformation, and this misinformation also feeds into the policy chain and eventually affects climate intervention. So something needs to be done there. And therefore, effectively, generally, what this does, and panelists have mentioned this, is that it erodes public trust on climate science and also trust in institutions that are responsible for addressing climate change. So it has a double, a dual effect, and if that happens then it becomes challenging to address climate change going forward. So the IPIE in our report, we came up with a few recommendations and about measures that we can take to address the crisis of information integrity on climate science. And one of those measures, some of them have been mentioned, is we need to educate people, we need some level of education, whether it’s on the science of climate change or even the medium that transmits climate change. Because from the model that I began talking about, the linear communication model, we saw that media and channel go hand in hand. So if you educate audiences, our stakeholders, about the media that transmits climate change information and the science of climate change, then we might really go towards getting some level of success in climate intervention. We also have to look at the regulatory and policy environment. and many other stakeholders and government and civil societies coming out and pointing out those people who are responsible for doing what is called greenwashing and to what extent can you litigate on those issues. It’s a grey area, it’s a contentious area and the data is still minimal. So that’s one area that I think it’s an opportunity to explore further in terms of research. And then, I think I’ve touched them all, yes, there’s one that I didn’t talk about which is called counterpublics. Counterpublics are simply, I think, I heard about it from my Brazilian colleague and from my UNESCO colleague here, about how we bring different voices from different spaces to form a coalition of counterpublics who are responsible to counter false discourses and misinformation from these powerful forces that are driving misinformation about climate change. So it’s a coalition of collaboration about defending truth about climate science and also scientists that are responsible for defending climate science and the integrity of information about climate science. I think I’ll leave it there and explore more in the Q&A. Thank you so much.


Camille Grenier: Thank you so much, Frederic. please do take a look at the report. It’s a rather longish report, 127 pages, I think, but there is a very streamlined executive summary that also brings the main conclusion and policy recommendations which are very, very useful in this specific field. We still have a little bit more than half an hour, and we wanted to make sure that we also have an interactive section. We’ve been talking a lot about multi-stakeholderism, so if there are some questions in the room, I think there is already one person and then one there. So maybe we can take the two, and the first two questions, and have…


Audience: Yes, can you please introduce yourself? Thank you very much. My name is Pavel Antonov with Blue Link Action Network in Bulgaria and the APC board. This is seriously becoming my favorite panel since the morning when I heard the one on integrity of journalism. My network supports climate defenders in the Climate Coalition, and as a journalist, it’s extremely interesting to listen to all the solutions you presented. What we have come across is that the climate defenders, as well as other activists around there, seem to… we seem to automatically sort of say they’re the good guys, and we expect always automatically that the problems lie from the industry and from elsewhere, which is true in most cases. But what we realize is that there is a certain lack of norms, lack of standards, that even the climate activists are coming to complain about at some point. They say the information environment has become so volatile, so fractured, we can’t even operate healthily in it. So we thought, what if we come back to the norms of journalism as they used to be 20 years ago? The ones that were saying you need to double-check your sources and offer the opposite point of view to your opponent, even if you disagree, and so on. And we offer this to the broader public, but to the climate defenders, to the civil society as a start, and see how they could abide with that same norms. Will they stop seeking for clicks? Will they stop communicated ungrounded information, would they be willing to take this responsibility? I wanted to hear what do you think about this? Do you think that might actually work as a self-regulatory approach? Thank you. Thank you so much. Hello, my name is Lee Cobb-Ottoman. I come from South Africa. I work for the Ministry of Basic Education. In the parts of the world where I come from, climate change is viewed as a white man’s problem. It is viewed as a matter of whether it is cold or it is hot. But in actual fact, what we see is that it means loss of life, loss of property, loss of assets, and displacement. Now, what are the ethics in information sharing and information dissemination when it comes to climate change and climate action and education for sustainable development? Given that, we don’t want to incite fear, but we’re working with a society that will not do anything unless it responds to a problem. And so you want to then create that picture that this is the problem that you are facing as the African continent and as the world. This is the problem. You’re not inciting fear. Yet, when you find people who are really doing great work about teaching Africans like myself about the impact of climate change in our lives, you’re always going to be seen as somebody who’s inciting fear and anxiety on people. So what are the ethics? What do we need to do? And how far can we go really to ensure that there is that balance of teaching people about this thing as being an issue of sustainability, but also an issue of safety? Thank you so much. Thank you. My name is Agenunga Robert. I come from Democratic Republic of Congo. I work at the border with DRC and Uganda. My submission or question is so much concerned with the information I have noted from different panelists. In the region where I come from, there is massive data collection going on for carbon credit project. Data being collected from indigenous territories, data from community forest. My concern is not about who collect this data, but my worry is where are they keeping this data? What protocols are in place for them to store this data without harming indigenous people or forest dependent community? Because in case this data get leaked or is compromised, it will have a very dangerous consequences on indigenous people and forest dependent community. Because I have seen it in Congo where land is being stolen and forest given away to foreign investors to mine critical minerals needed to power AI and other things. So, that is very painful. But also, when you look at the work of the UN Special Rapporteur on situation of environmental defenders under ARUS Convention, Mr. Michel Faust, he noted recently that environmental activists and human rights defenders who work on issues of climate justice are not only targeted physically, but also emotionally. and the other one is that they have no legal intimidation but also their surveillance and their data is spied by government. In Brazil, in the Democratic Republic of Congo and also in Indonesia, these are three major forest places where activists are at risk. And I don’t know at UNESCO level, a colleague from UNESCO, what international mechanisms are in place to ensure that members’ countries abide by so that data concerning forest, indigenous territories or even security of defenders working on forest and climate justice is protected. Myself, I worked for 12 years as a digital security trainer helping indigenous people in the Congo Basin to communicate and work safely. But beginning from January 2024, I became a member of parliament in the DRC National Assembly but I see myself more as an activist who took himself to the parliament because in government there is a lot of bureaucracy and I continue with the activism helping human rights defenders, environmental activists and indigenous people to protect their land. But now we have the big threat is the project by Brazil which is called, it is called the Tropical Forest Forever Facility. My colleague from Brazil should be aware of this. This is a multi-billion project scheduled to be launched in Brazil at the COP, the next COP. This project, if developed, it means indigenous people or forest-dependent community can receive 4% of contribution being made from the sale of carbon credit generated from their forest. And for this to be quantified, data need to be collected on the carbon potential of forestry. And last month we had a meeting of three forest world basin in Brazzaville in Congo, people from Indonesia, Brazil, and Congo basin and also Rwanda gathered in Brazzaville for a week. And the question here was data integrity, who is using this data, how will a local chief know that his forest is the one providing global environmental benefit, yet he doesn’t have the data. And what make Brazzaville and other partner government to only remit 4% of the total benefit arriving from, you know, carbon credit of arriving from climate benefit being provided by forest countries providing benefit of climate mitigation.


Camille Grenier: Thank you. Thank you so much for your question. I like how we touched upon the vast majority, the vast topics, all the different topics that we can include in the concept of information integrity ranging from journalism and the ethics of journalism to data governance. So, before turning and giving the floor to other questions, I don’t know if some of you would like to react and answer some of the questions, would you like to go first?


Charlotte Scaddan: Sure. Perhaps I could speak to the second question about fear, which I think is, I can’t actually see the gentleman who asked, yeah, there you go. Thank you for asking that question. It’s an excellent one. And I can tell you it’s one that as communicators on climate we have been grappling with for a very long time. And what we have learned over a period of many years is that, you know, yes, fear can be a motivator, but fear is not going to inspire people. And I think there are ways that one can communicate the urgency and gravity of the climate crisis without necessarily resorting to fear. and the other is the real world. People need to understand the real, the situation that we’re facing. There’s no getting around it. But what they also need to know is what they can do about it or what the government can do about it. What actions they can actually take. They need to see specific examples of especially community-led actions. I think that’s what people can really relate to. In terms of the impacts, I think that’s what we need to look at. I think that when we talk about climate change, it becomes this overwhelming topic, right? You just want to shut down. Because you feel that you can’t, you feel that you can’t do anything about it on an individual level. And that’s true, right? We need to look to the fossil fuel industry and governments and others to take action. But I think that there are steps that we can all take, including if we live in democracies, we can vote, right, for candidates who support climate action. But we also need to look at the impact of climate change on people’s lives, right? And we need to look at what are some of the individual impacts, the economic impacts, but also the economic benefits of climate action, right? How does it affect people’s wallet? How does it affect people’s daily lives? And their quality of life. So, I think that, yes, it’s important to stress the reality


Camille Grenier: of the situation. And that might be scary. But equally, it’s important at the same time to offer really solid solutions and the way forward. So, I think that’s the way forward. Thank you very much.


Fredrick Ogenga: So, I would want to combine two questions. One that was asked about journalism and going back to the tenets of journalism to address the climate change problematic. And also with that aspect of use of data and how sure are you that you are using the right data. So, my question, first of all, in this report, we asked ourselves in the report what is the relationship with This is really the measure, the threshold of information integrity about climate science. And we came up with a few issues that are familiar. You know, things like accuracy, you know, transparency, reliability of data. Transparency, accuracy, and reliability of data, and how consistent that data is. Because climate science data has also been inconsistent. If you say global warming is bringing about climate change, then tomorrow you are saying something else. You’re becoming inconsistent. And that’s the standard we see in climate misinformation. So for that, I think we also observed in the study that minimal studies have been done in the global south. In fact, we came about, out of 300 studies, we only uncovered one study from South Africa that touches on the metrics that we are measuring. So global south minimal data, Latin America, and Southeast Asia. Very minimal data. Most of the data was coming from North America, Europe, China, and Russia. So that tells us something about also our interest in wanting to venture into research, to produce homegrown data that can inform us. And to that extent also comes the question of infrastructure. Because when you want to solve climate, when you want climate solutions out of data, then you also need to make that data secure. And therefore the question of infrastructure comes in where we lack data centers that are reliable. And if you have to have magnitude of data on data sets in climate science, then you need to host them elsewhere.


Guilherme Canela De Souza Godoi: And so how sure are you? very briefly a few things. On the question on standards of journalism and ethics, etc., of course this is very important. I don’t know if you were here when I was speaking. It’s not about one thing or the other. It’s one thing and the other. So these standards, these self-regulatory elements are important and necessary, but they won’t solve parts of the problem that are crucial to address the issue. For example, the transparency of the social media companies or the AI companies, or demanding them to have human rights-based content moderation and content curation mechanism. So we just need to be careful to say that let’s invest in this thing and forget the other, because it’s not going to work. It’s a complex puzzle and unfortunately we need all the pieces of this puzzle. But on the standards, we also need to be careful that when we are talking about the standardized criteria for journalism of getting the other side, this is all fantastic. The best that can happen to climate discussion, it’s not activism journalism, it’s independent journalism. But independent journalism also means that you can’t compare reports of the IPCC that shows with 99% of reliability that we have a problem and put this as if it was equal to the other 1% of people saying that we don’t have an issue, right? I’m in favor that the journalists speak with… the others, but also underlining to the reader or to the listener that there is an unbalance here that is not the same, right? So this is super important. On what you, the question from the gentleman from South Africa, when you said this is perceived as a white man problem, this is precisely what we want with this initiative in stimulating more research and more investigative journalism from the Global South. Why this narrative is like that and where this is coming from, what is the impact? Because what you are saying here is something I didn’t knew. It shows that is different the problem in terms of information integrity in where you are, probably is different in Brazil, is different in Indonesia and so on. So this is super important in what we are trying to stimulate. And finally on the data story, we are launching under the Broadband Commission with UNDP and others, a global toolkit on data governance, precisely addressing these kind of issues. And then on your last question in terms of violence and others against the journalists, etc. Last year we dedicated the World Press Freedom Day entirely to this discussion, and then we can talk later, but we produced a roadmap talking with the journalists, the scientists, and how to address also these issues of attacks against those voices, those critical voices that are speaking about climate change or climate disinformation and other environmental issues. Thank you.


Camille Grenier: Thank you Guilherme. Maybe a quick remark from Harriet Kingaby Eugenio or could we go back, no?


Harriet Kingaby: I wanted to make the link back as well to, I think mainly around the first question, but I wanted to give an example of the unhealthy incentives that are linked to some of these issues. So I wanted to take the idea of quality of journalistic standards and how these are being impacted by the incentives we talked about. We did a piece of research about five years ago, and we looked at safe climate content online. So, you know, the most the most robust, entertaining, shared climate content online. And we found that actually something like 70 percent of that content couldn’t monetize through advertising. So that means that there’s no economic incentive for news organizations to produce that content. And it’s why it’s so important that we get advertisers around the table, because what they’re doing is they’re blocking climate content because they think it’s too political or too risky for their brand to appear next to. So what this does is it completely disincentivizes the really great quality journalism around one of the most important issues of our time. We need to get them around the table and reinforce the idea. And we’ve got people that can make the business case that actually their advertising performs well. We also need to reinforce the idea that it’s of the utmost importance that they actually go and advertise there because we need better standards. It’s also really important because then the third thing is that the platforms aren’t incentivizing this content because it’s not, you know, because it doesn’t keep necessarily keep people on the kind of platforms in the way that they want, because, you know, they’re after eyeballs and attention and addiction, right? So it’s really important that we then get the advertisers around the table with the platforms to go, actually, this is incredibly important for us.


Camille Grenier: It’s incredibly important for everyone in this room. And you need to do something about it. And we’ve seen that can create change. So Google introduced their first climate disinformation monetization policy because of that dynamic. So, yeah. Thank you so much. I’m not trying to answer all the questions. I would be happy to discuss some of the specific points that were raised, but talk to the African colleagues in the audience.


Fredrick Ogenga: I think we need more countries from the global south to join this effort because we have the global initiative on information integrity. and Climate Change. Some countries have joined. I remember Chile, Denmark, France, Morocco, Sweden, United Kingdom. Several others also expressed interest in joining, but we need more developing countries as well. And everything that we need is political commitment. We don’t ask anything else, but political commitment to join forces to address climate change and information integrity as a package, as something that we should see this both ways, and how we can move forward in this regard. Absolutely, thanks for raising the point. And to come back to the question, if you could make them short so that we can have some time to respond. Thank you so much. Sure. Good afternoon. My name is Mbadi. I’m from UNHCR,


Audience: based in Pretoria. And as you know, UNHCR, we deal a lot with refugees and asylum seekers. And these are oftentimes groups of people that have very limited, reliable information, especially with climate risks. So in a country like Botswana, for example, where we have the encampment policy, how then do you think we can practically ensure that these groups of people are exposed to reliable information? Because in an encampment space, that I think is a place where misinformation can spread widely. So what are some of the practical ways that we can counter that? Thank you. Thank you so much for the question. Hi, my name is Larry Maggett. I’m CEO of Connect Safely, which is a Silicon Valley-based NGO that educates parents and young people about various aspects of internet online safety. And we partner very closely with Meta, Google, Apple, Amazon, and many of the technology companies that we work with. in and around Silicon Valley. I’m also a former journalist with CBS News, New York Times, LA Times, and BBC and still write a syndicated column and do a national radio show for CBS. And I, we work very closely with young people and the one thing that seems to be at least anecdotally evident is that young people are very concerned about climate change. I will probably pass on before the world is in serious decay, but the young people are going to have to live with it and potentially die with it, which is tragic. And I’m trying to figure out in our work, working on internet safety and working with our youth advisory council, how we can energize those young people and take advantage of their energy and their concern to channel some of their activities in ways that will actually have an impact on decision makers, policy makers, industry, and government. Thank you so much. Thank you. My name is Mikko Salo. I’m representing a Finnish NGO, Faktabar, who has been working on the fact-checking and mostly on the digital information literacy and especially in AI literacy. But very much on a follow-up of the previous person, one of the most promising thing that we did was we were fact-checking Greta Thunberg at the time. And as we know, she was more, I mean, she has had both of the issues very closely, and she was mostly right on everything, but of course became a huge campaigns, very polarizing and all that. But I was wondering what have you learned about the Greta Thunberg case, because she was really empowering the youngsters and perhaps the COVID killed the movement in the moment because people were really, young people were really doing something because now they are becoming vegetables with the technology, addicted to that one. But that was something. There is something very promising, but I wonder what could be done differently, because there is something right. Thank you. Thank you so much. One last question very quickly, please. Hi, this is Jasmine Ku from Hong Kong. So last year I was the national youth representative of ALCOI, the Regional Conference of Youth, endorsed by Yuan Yangguo on climate change. So one thing that I found, it’s a reflection for me in the conference, when we are drafting youth statements for the region, we have not considered and thought about information integrity as a problem on climate change when we are drafting the agenda and things. So for me, my reflection and also a question is, how could we possibly bring this topic into youth statements? Because the thing is, our region, the thing that they did not consider problematic enough is because there are a lot of corporate greenwashing and solutions provided. And the thing is, this kind of information has been overflowing, and then people believe that the problem has been tackled very well. That’s why it’s never on the agenda in a youth statement. So it’s just my question. Thank you very much.


Camille Grenier: Thank you so much for these brilliant questions on the place of young people in these debates, and on how to make sure that reliable information is accessible to all the different communities. There is very quickly one question online that is addressed directly to Professor Genga, and I will try to sum it up. But basically, it argues that if we have such a lack of data, how can we build actual policies? And probably in the global majority world, how can we make sure that with the lack of data, we can build factual and actual policies? We still have eight minutes and five speakers, so maybe we can do a quick round-up. And if you’d like to address, I really like the question related to the youth, access to reliable information and policy development. Who wants to go first?


Guilherme Canela De Souza Godoi: I can start very quickly. Yeah, Guilherme. So, I mean, on the question of refugees and migration, etc., I think the colleagues, but you can add, I guess, Charlotte, in the UNHCR in Geneva, they are doing lots of interesting things on that. But there are interesting lessons learned from the past, using radio in refugee camps to debunk these and to stimulate information sharing, etc. But the essential issue here that I think is very important with this question is that we need to put a lot of focus on groups that are in a situation of vulnerability and marginalization, whoever they are. So, the issues of multilingualism, of the special protections that are needed, are super important in this conversation. Regarding the several questions on youth and Greta, for those who don’t know, I would strongly recommend for you to look into the Guardian project that’s called the 89% project, which basically is very solid research saying that 89% of the people in this planet do believe that we have a climate problem. The question is why they are not taking action. And some of our hypotheses are related to how the disinformation is shifting. It’s not any longer about saying there is not a problem. It’s much more sophisticated than that. But we do need to understand the characteristics of this disinformation to then sophisticate our own actions on that. And then it comes to the last point I wanted to comment, stealing a bit of the thunder. I would like to ask you a question regarding what the panel has produced on the data issue, the lack of evidence to produce evidence-based policy. So I do this call for action to you, apply to the global initiative, because this is precisely what we want to see. We want to see more data being produced, more information being produced, so that we can circulate more information to produce evidence-based policy. And of course, for the donors in the Rumo Online, contribute to the fund, because then we can fund more research and data production.


Camille Grenier: And I think we can thank Brazil for putting the first million in the fund, which is very, very crucial, specifically in the funding landscape that we all operate in.


Charlotte Scaddan: So we still have five minutes. Yeah, just because I want to speak to the excellent question about offline engagement. And I think just to build on what Guilherme was saying about vulnerable and marginalized groups, when we started out on the process of coming up with the global principles, we initially were just going to focus on the digital space, but quickly realized through the course of our global consultations that actually we needed to take a much broader approach because, of course, as we all know, there are many people who have inadequate connectivity or no connectivity at all, who aren’t in a position to engage digitally, who are still impacted by the disinformation and hate that’s spread online. But I think one of the things I touched on in my remarks was what we term community engagement. And this also is really valid, not just in terms of the UN’s work on refugees, but our work in peacekeeping environments and all over the world, that we have to be in communities listening to people. That has to happen face to face. And one thing that’s important for us all to remember is that we talk about news, we talk about major influences, we talk about digital. But most people are most influenced by the people they know. They’re influenced by their pastor or priest, by their teacher, by their local community leader, by their uncle, by their cousin. And that is how we can effectively reach people, by identifying those local community voices who are trusted, local community leaders, and sharing reliable information with them so that they can then amplify in a way that’s going to be engaging. So I would say that’s a really important point about offline. And then just on youth, I would say, and I’m not a youth anymore, sadly, but we need to engage with youth in a meaningful way by actually listening to them on an equal footing. Bring them to the table. Not being an add-on to a process, but having them integrated from the start. Because there’s a lot we can learn from them, they’re digital natives and we’re not.


Camille Grenier: Thank you. Absolutely. So we’ve come to the last point. Eugênio.


Fredrick Ogenga: Real quick, my final remarks. I fully agree with the need to engage young people. And we have this call to action that we expect individuals, organizations to submit concrete proposals on information integrity and climate change. I think we need to build momentum because COP30 is going to be the culmination of these global efforts. And we’re also planning to have high-level side events in Belém to showcase these initiatives. And of course, this is an open invitation to you all, the audience here, also people online to stay tuned because this call to action will be released soon. And we are glad to see that many stakeholders have already expressed interest in sending


Camille Grenier: proposals and will be, of course, available for any follow-up as needed. Thank you so much for opening the doors of COP30 also to this community working on information integrity. It’s really, really important. I just wanted, I felt like if I go without responding to that question from, I think it’s an online audience, yeah, who asked about where there’s lack of data, the global south, then what do we do?


Fredrick Ogenga: Yeah. Well, this I think it’s an online audience, yeah, who asked about where there’s lack of data, the global south, then what do we do? Yeah. This is a challenge that emerges from a trajectory of orality because Africans are oral by nature, but it doesn’t mean that they are not climate experts. And for example, the Ogiek community in Kenya, a community that is known for preserving forests. So the capitalistic way of looking at climate interventions really disregards their local wisdom on how to address the climate challenge using carbon credit programs that are more from global multinationals as opposed to grassroot approaches. So my suggestion would be, what are the grassroot approaches that we can find, you know, through primary research? So as opposed to what we did, because we did a systematic review, that is not going to the field and engaging with the local, the locality, so that you can dive into the local knowledge repository to get data that then can inform a practical, more meaningful practical interventions. And therefore, after you do that, you simply monitor and evaluate, iterate, and develop literature about it so that it can guide your interventions going forward. Luckily, that is what is lacking. So my suggestion would be that we need to look into ways in which we can partner with those who are capable of making us be in a position where we can co-create together and then be able to use locally available resources and infrastructures to come up with data that can bring, for example, greening, you know, just planting trees, it’s not rocket science. But how many are we that are doing tree planting or maybe fruit tree planting at the grassroot level? We take it for granted. So it’s at that level. It’s time we stop taking those things for granted, we document them and we see how those things can help us in finding our own knowledge and data about where we want to move forward with the problem of climate change.


Camille Grenier: Thank you so much. Harriet, one last?


Harriet Kingaby: I will be very quick and I’ll answer the point about young people. So I talk about these issues quite a lot and the last time I talked about these issues I was on stage after a young woman called Adele Zeynep Walton who has just written a book. And she wrote this book because her younger sister unfortunately took her own life after being served content online through exactly the kind of patterns that I’ve described that encouraged her to do so. And there is a crisis in the mental health of young people and that is exacerbated by what is happening to them online. And I got very cross yesterday after a panel, I won’t lie, because someone said we need to wait for regulation around this and we absolutely do, we need regulation of this space. But people are getting hurt in the interim and we need to do everything we can to move very quickly to solve these problems. And that includes helping our young people to feel hopeful about the future, releasing them from the systems that are addicting them and pulling them online and helping them to despair. So I’ll just leave you with that, we solve everything we talked about today and we will also go some way to looking at the mental health crisis in young people today. Thank you so much for these last remarks. There is a word that comes and Eugenio mentioned, mutirao, I hope I’m pronouncing it alright.


Camille Grenier: And really I would like to take this word because information integrity, as you may have understood today, is a big house and everybody is welcome to bring their own contribution to build that. and make sure that we have access to reliable information, that we protect our youth, that we protect access to facts, journalists, activists from around the world. And really, I think that with this global initiative, with the fund, with the call to action, we have a very precious thing that will take forward and that will bring to COP and hopefully beyond COP 30. That will be, of course, crucial. Thank you so much for being with us today. And of course, remain available. Thank you. Thank you. Thank you.


C

Camille Grenier

Speech speed

149 words per minute

Speech length

2023 words

Speech time

810 seconds

Climate disinformation is weaponized for political purposes and undermines democratic processes

Explanation

Climate disinformation becomes a democratic issue when it is deliberately used for political gains and to manipulate democratic processes. This weaponization of false information about climate change threatens the integrity of democratic decision-making.


Evidence

Mentioned that climate disinformation is used for political purposes and political gains, and that it undermines democratic processes particularly around elections


Major discussion point

Climate Disinformation as a Democratic and Governance Issue


Topics

Human rights | Sociocultural


Information integrity on climate change requires respecting democratic principles including transparency, plurality of information, and political neutrality

Explanation

Ensuring reliable climate information must be grounded in democratic values such as transparent governance, diverse information sources, and maintaining political neutrality in information spaces. These principles are essential for maintaining trust in climate-related information.


Evidence

Specifically mentioned democratic principles including transparency of powers, plurality of information, access to reliable information, and political and ideological neutrality of information and communication spaces


Major discussion point

Climate Disinformation as a Democratic and Governance Issue


Topics

Human rights | Legal and regulatory


Environmental journalists face increasing threats, harassment, and physical attacks for investigating climate and environmental issues

Explanation

Journalists covering climate and environmental topics are experiencing escalating dangers including threats, harassment, and in worst cases, murder. This targeting of environmental journalists represents a serious threat to press freedom and access to reliable climate information.


Evidence

Mentioned that journalists investigating climate change and environmental issues are targeted, threatened, and in worst cases murdered


Major discussion point

Protection of Environmental Journalists and Information Sources


Topics

Human rights | Cybersecurity


Agreed with

– Guilherme Canela De Souza Godoi
– Audience

Agreed on

Environmental journalists and information sources face increasing threats and need protection


Media sustainability is crucial to ensure continued coverage of climate issues and access to reliable environmental reporting

Explanation

The economic viability of media organizations covering environmental and climate issues is essential for maintaining public access to reliable climate information. Without sustainable funding models, quality climate journalism cannot survive.


Evidence

Emphasized the importance of media sustainability and mentioned ongoing work on this issue


Major discussion point

Protection of Environmental Journalists and Information Sources


Topics

Economic | Human rights


E

Eugênio Garcia

Speech speed

111 words per minute

Speech length

1159 words

Speech time

621 seconds

Brazil launched the Global Initiative during G20 presidency, making information integrity a priority for the first time in G20 history

Explanation

Under Brazil’s G20 presidency, information integrity was included as one of four priorities for the digital economy working group, marking the first time this issue was addressed at the G20 level. This represented a significant diplomatic achievement in elevating the issue globally.


Evidence

Mentioned that Brazil had four priorities including information integrity, reached consensus, and it was the first time G20 addressed information integrity, with the Maceió declaration available in English


Major discussion point

Global Initiative on Information Integrity and Climate Change


Topics

Legal and regulatory | Development


The initiative is a partnership between Brazil, UN, and UNESCO with Brazil pledging $1 million to demonstrate commitment

Explanation

The Global Initiative represents a concrete collaboration between major international actors, with Brazil providing substantial financial backing to show genuine commitment. This funding demonstrates that the initiative goes beyond rhetoric to actual resource allocation.


Evidence

Specifically mentioned the partnership with UN and UNESCO, and Brazil’s pledge of one million dollars to the global fund


Major discussion point

Global Initiative on Information Integrity and Climate Change


Topics

Development | Economic


C

Charlotte Scaddan

Speech speed

183 words per minute

Speech length

2356 words

Speech time

771 seconds

Climate disinformation creates distrust in institutions responsible for climate action and undermines public trust in climate science

Explanation

False information about climate change systematically erodes confidence in scientific institutions, the UN, and climate processes like COP. This erosion of trust makes it harder to build consensus and take collective action on climate issues.


Evidence

Mentioned that tactics are used to steadily erode trust in academic scientific institutions, the UN, COP and the COP process, and isolate people to certain information sources


Major discussion point

Climate Disinformation as a Democratic and Governance Issue


Topics

Human rights | Sociocultural


The UN’s Global Principles for Information Integrity provide a framework for action across five principles including societal trust, healthy incentives, and transparency

Explanation

The UN has developed comprehensive principles that offer a structured approach to addressing information integrity challenges. These five principles provide actionable recommendations for different stakeholders to create safer, more reliable information environments.


Evidence

Listed the five principles: societal trust and resilience, healthy incentives, public empowerment, independent free and pluralistic media, and transparency and research


Major discussion point

Global Initiative on Information Integrity and Climate Change


Topics

Legal and regulatory | Human rights


Young people are deeply concerned about climate change but need meaningful engagement as equal partners rather than add-ons to processes

Explanation

While youth are highly motivated about climate issues, they are often marginalized in decision-making processes. True engagement requires treating young people as equal stakeholders from the beginning rather than tokenistic inclusion.


Evidence

Emphasized the need to engage youth meaningfully by listening to them on equal footing, bringing them to the table, and integrating them from the start rather than as add-ons


Major discussion point

Youth Engagement and Community-Based Solutions


Topics

Human rights | Development


Agreed with

– Harriet Kingaby

Agreed on

Youth engagement must be meaningful and treat young people as equal partners


Community-based approaches are essential, as people are most influenced by trusted local voices like pastors, teachers, and community leaders

Explanation

Effective information sharing happens through personal relationships and trusted community figures rather than through mainstream media or digital platforms. Local community engagement is crucial for reaching people with reliable climate information.


Evidence

Specifically mentioned that people are most influenced by their pastor or priest, teacher, local community leader, uncle, or cousin, and emphasized the importance of identifying trusted local voices


Major discussion point

Youth Engagement and Community-Based Solutions


Topics

Sociocultural | Development


Solutions must address structural obstacles including platform transparency, advertising accountability, and protection of information sources

Explanation

Addressing information integrity requires systemic changes to how digital platforms operate, how advertising funds content, and how reliable information sources are protected. Individual content moderation is insufficient without addressing underlying structural issues.


Evidence

Mentioned the need to expand stakeholder circles to include advertisers who fund the digital ecosystem and have unique power to influence platforms


Major discussion point

Multi-stakeholder Approach and Systemic Solutions


Topics

Legal and regulatory | Economic


Agreed with

– Guilherme Canela De Souza Godoi
– Harriet Kingaby

Agreed on

Multi-stakeholder approach is essential for addressing climate information integrity


G

Guilherme Canela De Souza Godoi

Speech speed

164 words per minute

Speech length

1911 words

Speech time

697 seconds

UNESCO’s approach recognizes information as a public good requiring support for both supply and demand sides of the information ecosystem

Explanation

Information integrity cannot be achieved by focusing only on educating citizens or only on supporting information producers. A comprehensive approach must simultaneously empower citizens while ensuring reliable information sources have the resources and protection they need to operate effectively.


Evidence

Described three pillars: empowering citizens through education and media literacy, qualifying the supply by supporting journalists, scientists, influencers, and addressing economic and safety issues


Major discussion point

Global Initiative on Information Integrity and Climate Change


Topics

Development | Human rights


Disagreed with

– Audience

Disagreed on

Scope of responsibility for addressing climate disinformation


The problem requires both qualifying the supply of information and empowering citizens to navigate the information ecosystem

Explanation

A balanced approach is needed that doesn’t place the entire burden on citizens to identify misinformation while trillion-dollar companies spread false information. Both citizen education and systemic support for reliable information sources are necessary conditions.


Evidence

Explained that it’s unfair to tell citizens not to share misinformation when they face trillion-dollar companies and attention economy on the other side


Major discussion point

Multi-stakeholder Approach and Systemic Solutions


Topics

Development | Economic


Agreed with

– Charlotte Scaddan
– Harriet Kingaby

Agreed on

Multi-stakeholder approach is essential for addressing climate information integrity


More investigative journalism and research funding is needed to understand the sources, channels, and impacts of climate disinformation

Explanation

Current understanding of climate disinformation lacks depth about funding sources, distribution systems, and conflicts of interest behind false information campaigns. The UNESCO fund aims to support research and investigative journalism to fill these knowledge gaps.


Evidence

Mentioned the open call for the Global Initiative fund to collect evidence about who funds disinformation, distribution systems, and conflicts of interest, with applications due July 6th


Major discussion point

Research Gaps and Evidence Needs


Topics

Development | Human rights


Agreed with

– Fredrick Ogenga

Agreed on

Current research and data on climate disinformation is insufficient, particularly in the Global South


Vulnerable and marginalized groups, including refugees, need special attention and multilingual approaches to access reliable climate information

Explanation

Information integrity efforts must specifically address the needs of vulnerable populations who may have limited access to reliable information sources. This includes providing multilingual content and using appropriate communication channels like radio in refugee camps.


Evidence

Mentioned lessons learned from using radio in refugee camps and emphasized the importance of multilingualism and special protections for vulnerable groups


Major discussion point

Multi-stakeholder Approach and Systemic Solutions


Topics

Human rights | Development


Agreed with

– Camille Grenier
– Audience

Agreed on

Environmental journalists and information sources face increasing threats and need protection


H

Harriet Kingaby

Speech speed

170 words per minute

Speech length

2392 words

Speech time

839 seconds

The attention economy funded by advertising creates unhealthy incentives that prioritize divisive content over quality information

Explanation

The current advertising model rewards platforms for keeping users engaged as long as possible, which incentivizes addictive and divisive content over quality journalism. This fundamental business model creates systemic problems for information integrity.


Evidence

Described how advertising funds the attention economy where longer scrolling means more ads and profit, changing incentive structures from quality and informing citizens to keeping users hooked


Major discussion point

Advertising Industry’s Role in Information Integrity


Topics

Economic | Sociocultural


Advertisers inadvertently fund climate disinformation through opaque digital advertising supply chains

Explanation

Major brands unknowingly support climate disinformation because they cannot track where their advertising dollars go in the complex digital ecosystem. This creates a situation where legitimate businesses financially support harmful content without realizing it.


Evidence

Provided specific examples including Money Supermarket advertising on Hurricane Hélène disinformation, YouTube advertising on Valencian floods disinformation, and Get Your Guide on Brazilian climate disinformation content


Major discussion point

Advertising Industry’s Role in Information Integrity


Topics

Economic | Legal and regulatory


The advertising ecosystem lacks transparency, with advertisers having no visibility into where their ads appear or what content they fund

Explanation

Unlike other corporate supply chains, the digital advertising system is completely opaque to advertisers. They have no way to know what content their money supports, creating opportunities for exploitation and unintended funding of harmful content.


Evidence

Explained that there are many technology companies in the system doing data collection and processing, with no know-your-customer laws or supply chain mapping, and only $0.41 of every advertising dollar reaches publishers


Major discussion point

Advertising Industry’s Role in Information Integrity


Topics

Economic | Legal and regulatory


Advertisers can be part of the solution by demanding transparency and investing in pluralistic media coverage of climate issues

Explanation

Since advertising funds the digital ecosystem, advertisers have unique power to drive positive change by requiring transparency from platforms and actively supporting quality climate journalism. This creates business incentives aligned with information integrity goals.


Evidence

Mentioned research showing 70% of quality climate content couldn’t monetize through advertising, and that Google introduced climate disinformation monetization policy due to advertiser pressure


Major discussion point

Advertising Industry’s Role in Information Integrity


Topics

Economic | Human rights


Agreed with

– Charlotte Scaddan
– Guilherme Canela De Souza Godoi

Agreed on

Multi-stakeholder approach is essential for addressing climate information integrity


Mental health impacts on young people are exacerbated by online systems that promote despair rather than hope about climate action

Explanation

The current online environment contributes to mental health crises among youth by promoting addictive, despairing content rather than hopeful, actionable information about climate solutions. Addressing information integrity can help improve youth mental health outcomes.


Evidence

Shared the story of Adele Zeynep Walton whose sister took her own life after being served harmful content online through the attention economy patterns described


Major discussion point

Youth Engagement and Community-Based Solutions


Topics

Human rights | Cybersecurity


Agreed with

– Charlotte Scaddan

Agreed on

Youth engagement must be meaningful and treat young people as equal partners


F

Fredrick Ogenga

Speech speed

130 words per minute

Speech length

2046 words

Speech time

940 seconds

Strategic skepticism is replacing climate denialism to delay climate response and derail climate interventions

Explanation

Rather than outright denying climate change, bad actors now use more sophisticated tactics to create doubt and confusion about climate science. This strategic approach is more effective at preventing action while appearing more reasonable than complete denial.


Evidence

Mentioned that strategic skepticism is replacing climate denialism, with people strategically frustrating or obscuring climate science to delay climate response and derail interventions


Major discussion point

Climate Disinformation as a Democratic and Governance Issue


Topics

Sociocultural | Human rights


Current research on climate disinformation is concentrated in a handful of countries, with minimal data from the Global South

Explanation

The evidence base for understanding climate disinformation is heavily skewed toward North America, Europe, China, and Russia, with very little research from Africa, Latin America, and Southeast Asia. This creates blind spots in global understanding of the problem.


Evidence

Stated that out of 300 studies reviewed, only one study from South Africa was found, with most data coming from North America, Europe, China, and Russia, and minimal data from Global South, Latin America, and Southeast Asia


Major discussion point

Research Gaps and Evidence Needs


Topics

Development | Sociocultural


Agreed with

– Guilherme Canela De Souza Godoi

Agreed on

Current research and data on climate disinformation is insufficient, particularly in the Global South


The lack of evidence-based research hampers the development of effective policies to counter climate disinformation

Explanation

Without comprehensive data about how climate disinformation operates globally, policymakers cannot develop targeted, effective responses. The research gaps particularly affect the Global South where different dynamics may be at play.


Evidence

Discussed the challenge of developing policies without adequate data, particularly in Global South contexts where infrastructure and data centers may be lacking


Major discussion point

Research Gaps and Evidence Needs


Topics

Legal and regulatory | Development


Local and indigenous knowledge systems need to be documented and integrated into climate information integrity efforts

Explanation

Traditional oral knowledge systems contain valuable climate expertise that is often overlooked by formal research approaches. Grassroots approaches that incorporate local wisdom can provide more meaningful and practical climate interventions than top-down solutions.


Evidence

Mentioned the Ogiek community in Kenya known for forest preservation, and emphasized the need for primary research to engage with local knowledge repositories rather than just systematic reviews


Major discussion point

Research Gaps and Evidence Needs


Topics

Sociocultural | Development


A

Audience

Speech speed

141 words per minute

Speech length

1785 words

Speech time

759 seconds

Scientists and environmental defenders are being targeted both physically and through surveillance of their data and communications

Explanation

Environmental activists and researchers face not only physical threats but also digital surveillance and data compromise. This dual threat environment makes it dangerous for people to work on climate and environmental justice issues.


Evidence

Referenced UN Special Rapporteur Michel Faust’s findings that environmental defenders are targeted physically, emotionally, through legal intimidation, surveillance, and data spying in Brazil, DRC, and Indonesia


Major discussion point

Protection of Environmental Journalists and Information Sources


Topics

Human rights | Cybersecurity


Agreed with

– Camille Grenier
– Guilherme Canela De Souza Godoi

Agreed on

Environmental journalists and information sources face increasing threats and need protection


Disagreed with

– Guilherme Canela De Souza Godoi

Disagreed on

Scope of responsibility for addressing climate disinformation


International mechanisms are needed to protect environmental activists and ensure data security for forest and indigenous territories

Explanation

Current data collection for carbon credit projects in indigenous territories lacks proper protocols for data protection, potentially exposing vulnerable communities to land theft and exploitation. International frameworks are needed to protect both the data and the people it represents.


Evidence

Described massive data collection for carbon credit projects in indigenous territories and concerns about data storage protocols, citing examples of land theft in Congo for critical mineral mining


Major discussion point

Protection of Environmental Journalists and Information Sources


Topics

Human rights | Legal and regulatory


Youth movements like those inspired by Greta Thunberg show promise but need sustained support and integration into policy processes

Explanation

The Greta Thunberg movement demonstrated young people’s potential to drive climate action, but such movements need better integration into formal policy processes and sustained support beyond individual moments of activism.


Evidence

Referenced Greta Thunberg’s impact on empowering young people and noted that COVID disrupted the movement, with concerns about youth becoming passive with technology addiction


Major discussion point

Youth Engagement and Community-Based Solutions


Topics

Human rights | Sociocultural


Agreements

Agreement points

Multi-stakeholder approach is essential for addressing climate information integrity

Speakers

– Charlotte Scaddan
– Guilherme Canela De Souza Godoi
– Harriet Kingaby

Arguments

Solutions must address structural obstacles including platform transparency, advertising accountability, and protection of information sources


The problem requires both qualifying the supply of information and empowering citizens to navigate the information ecosystem


Advertisers can be part of the solution by demanding transparency and investing in pluralistic media coverage of climate issues


Summary

All speakers agree that addressing climate disinformation requires coordinated action across multiple stakeholder groups including governments, platforms, advertisers, civil society, and communities rather than siloed approaches


Topics

Legal and regulatory | Economic | Human rights


Current research and data on climate disinformation is insufficient, particularly in the Global South

Speakers

– Guilherme Canela De Souza Godoi
– Fredrick Ogenga

Arguments

More investigative journalism and research funding is needed to understand the sources, channels, and impacts of climate disinformation


Current research on climate disinformation is concentrated in a handful of countries, with minimal data from the Global South


Summary

Both speakers emphasize the critical need for more comprehensive research and evidence collection, especially in underrepresented regions, to understand and combat climate disinformation effectively


Topics

Development | Sociocultural


Environmental journalists and information sources face increasing threats and need protection

Speakers

– Camille Grenier
– Guilherme Canela De Souza Godoi
– Audience

Arguments

Environmental journalists face increasing threats, harassment, and physical attacks for investigating climate and environmental issues


Vulnerable and marginalized groups, including refugees, need special attention and multilingual approaches to access reliable climate information


Scientists and environmental defenders are being targeted both physically and through surveillance of their data and communications


Summary

There is strong consensus that those producing and disseminating reliable climate information face escalating dangers and require systematic protection measures


Topics

Human rights | Cybersecurity


Youth engagement must be meaningful and treat young people as equal partners

Speakers

– Charlotte Scaddan
– Harriet Kingaby

Arguments

Young people are deeply concerned about climate change but need meaningful engagement as equal partners rather than add-ons to processes


Mental health impacts on young people are exacerbated by online systems that promote despair rather than hope about climate action


Summary

Both speakers agree that youth must be genuinely included as equal stakeholders in climate information integrity efforts, not merely consulted as an afterthought


Topics

Human rights | Sociocultural


Similar viewpoints

All three speakers represent the core partnership behind the Global Initiative and share a commitment to institutionalizing information integrity through international frameworks and concrete funding mechanisms

Speakers

– Eugênio Garcia
– Charlotte Scaddan
– Guilherme Canela De Souza Godoi

Arguments

Brazil launched the Global Initiative during G20 presidency, making information integrity a priority for the first time in G20 history


The UN’s Global Principles for Information Integrity provide a framework for action across five principles including societal trust, healthy incentives, and transparency


UNESCO’s approach recognizes information as a public good requiring support for both supply and demand sides of the information ecosystem


Topics

Legal and regulatory | Development


Both speakers recognize that climate disinformation has evolved from outright denial to more sophisticated tactics aimed at undermining trust in institutions and delaying action

Speakers

– Charlotte Scaddan
– Fredrick Ogenga

Arguments

Climate disinformation creates distrust in institutions responsible for climate action and undermines public trust in climate science


Strategic skepticism is replacing climate denialism to delay climate response and derail climate interventions


Topics

Human rights | Sociocultural


Both speakers emphasize the importance of community-level engagement and the need to reach vulnerable populations through trusted local channels rather than top-down approaches

Speakers

– Charlotte Scaddan
– Guilherme Canela De Souza Godoi

Arguments

Community-based approaches are essential, as people are most influenced by trusted local voices like pastors, teachers, and community leaders


Vulnerable and marginalized groups, including refugees, need special attention and multilingual approaches to access reliable climate information


Topics

Sociocultural | Development


Unexpected consensus

Advertising industry as both problem and solution

Speakers

– Charlotte Scaddan
– Harriet Kingaby

Arguments

Solutions must address structural obstacles including platform transparency, advertising accountability, and protection of information sources


The attention economy funded by advertising creates unhealthy incentives that prioritize divisive content over quality information


Explanation

It’s unexpected that there’s consensus on engaging the advertising industry as a key stakeholder in solving information integrity issues, given that advertising is also identified as part of the problem. This represents a pragmatic approach to working with economic actors who have power to drive change


Topics

Economic | Legal and regulatory


Integration of local and indigenous knowledge systems

Speakers

– Fredrick Ogenga
– Guilherme Canela De Souza Godoi

Arguments

Local and indigenous knowledge systems need to be documented and integrated into climate information integrity efforts


UNESCO’s approach recognizes information as a public good requiring support for both supply and demand sides of the information ecosystem


Explanation

The consensus on valuing traditional knowledge systems alongside formal research approaches is unexpected in a discussion focused on digital information integrity, showing recognition that solutions must be culturally grounded and inclusive


Topics

Sociocultural | Development


Overall assessment

Summary

There is strong consensus among speakers on the need for multi-stakeholder approaches, protection of information sources, meaningful youth engagement, and addressing research gaps particularly in the Global South. The speakers also agree on the evolution of climate disinformation tactics and the importance of community-based solutions.


Consensus level

High level of consensus with complementary rather than conflicting perspectives. The speakers represent different sectors but share aligned goals and approaches, suggesting strong potential for collaborative action. The consensus extends beyond problem identification to specific solutions and implementation strategies, indicating readiness for coordinated global action on climate information integrity.


Differences

Different viewpoints

Scope of responsibility for addressing climate disinformation

Speakers

– Guilherme Canela De Souza Godoi
– Audience

Arguments

UNESCO’s approach recognizes information as a public good requiring support for both supply and demand sides of the information ecosystem


Scientists and environmental defenders are being targeted both physically and through surveillance of their data and communications


Summary

While Guilherme emphasizes a balanced approach between citizen education and systemic support, audience members focused more heavily on protecting vulnerable groups and activists, suggesting different priorities in resource allocation and intervention strategies.


Topics

Human rights | Development


Unexpected differences

Approach to journalistic standards in climate reporting

Speakers

– Audience
– Guilherme Canela De Souza Godoi

Arguments

Youth movements like those inspired by Greta Thunberg show promise but need sustained support and integration into policy processes


UNESCO’s approach recognizes information as a public good requiring support for both supply and demand sides of the information ecosystem


Explanation

An audience member suggested returning to traditional journalism norms of presenting ‘both sides’ equally, while Guilherme cautioned against false balance, arguing that 99% scientific consensus shouldn’t be presented as equal to 1% dissent. This represents a fundamental disagreement about journalistic objectivity versus accuracy in climate reporting.


Topics

Human rights | Sociocultural


Overall assessment

Summary

The discussion showed remarkable consensus on the fundamental problems and need for multi-stakeholder solutions, with disagreements mainly centered on emphasis and approach rather than core objectives. The main areas of difference involved the balance between individual versus systemic interventions, the role of different stakeholders, and approaches to research and evidence-gathering.


Disagreement level

Low to moderate disagreement level. The speakers largely agreed on problem identification and general solution directions, but differed on priorities, implementation strategies, and stakeholder emphasis. This suggests a mature field where practitioners agree on fundamentals but are still working out optimal approaches, which is actually positive for collaborative action and policy development.


Partial agreements

Partial agreements

Similar viewpoints

All three speakers represent the core partnership behind the Global Initiative and share a commitment to institutionalizing information integrity through international frameworks and concrete funding mechanisms

Speakers

– Eugênio Garcia
– Charlotte Scaddan
– Guilherme Canela De Souza Godoi

Arguments

Brazil launched the Global Initiative during G20 presidency, making information integrity a priority for the first time in G20 history


The UN’s Global Principles for Information Integrity provide a framework for action across five principles including societal trust, healthy incentives, and transparency


UNESCO’s approach recognizes information as a public good requiring support for both supply and demand sides of the information ecosystem


Topics

Legal and regulatory | Development


Both speakers recognize that climate disinformation has evolved from outright denial to more sophisticated tactics aimed at undermining trust in institutions and delaying action

Speakers

– Charlotte Scaddan
– Fredrick Ogenga

Arguments

Climate disinformation creates distrust in institutions responsible for climate action and undermines public trust in climate science


Strategic skepticism is replacing climate denialism to delay climate response and derail climate interventions


Topics

Human rights | Sociocultural


Both speakers emphasize the importance of community-level engagement and the need to reach vulnerable populations through trusted local channels rather than top-down approaches

Speakers

– Charlotte Scaddan
– Guilherme Canela De Souza Godoi

Arguments

Community-based approaches are essential, as people are most influenced by trusted local voices like pastors, teachers, and community leaders


Vulnerable and marginalized groups, including refugees, need special attention and multilingual approaches to access reliable climate information


Topics

Sociocultural | Development


Takeaways

Key takeaways

Climate disinformation is fundamentally a democratic issue that undermines trust in institutions and delays climate action through strategic skepticism rather than outright denialism


Information integrity requires a multi-stakeholder ecosystem approach rather than focusing on individual actors, involving supply side (journalists, scientists), demand side (citizens), and distribution channels (platforms, advertisers)


The Global Initiative on Information Integrity and Climate Change represents unprecedented international cooperation, with Brazil pledging $1 million and making it a G20 priority for the first time


The advertising industry inadvertently funds climate disinformation through opaque supply chains but can be part of the solution by demanding transparency and supporting quality climate journalism


Research gaps are severe, particularly in the Global South, hampering evidence-based policy development and requiring urgent investment in local research and investigative journalism


Environmental journalists and climate defenders face increasing physical and digital threats, requiring international protection mechanisms


Youth engagement must be meaningful and equal rather than tokenistic, as young people are most affected by climate change and online mental health impacts


Community-based approaches are essential since people trust local voices more than mainstream media or institutions


Resolutions and action items

Launch a Call to Action for COP30 that will be open to all stakeholders to submit concrete proposals on information integrity and climate change


UNESCO’s Global Fund will provide funding for research and investigative journalism, with an open call closing July 6th


Plan high-level side events at COP30 in Belém to showcase information integrity initiatives


Develop a global toolkit on data governance under the Broadband Commission with UNDP


Engage advertisers at industry events like Cannes to promote information integrity principles


Expand research efforts particularly in the Global South to fill critical evidence gaps


Implement community-based approaches using trusted local voices for climate information dissemination


Unresolved issues

How to balance communicating climate urgency without inciting fear while motivating action, particularly in contexts where climate change is seen as ‘a white man’s problem’


Lack of transparency in digital platforms and AI systems that continue to spread climate disinformation


Protection mechanisms for environmental defenders’ data security, particularly regarding carbon credit projects and indigenous territories


How to effectively reach vulnerable populations like refugees in camps with reliable climate information


Integration of information integrity concerns into youth climate statements and movements


Economic sustainability of quality climate journalism when platforms demonetize climate content


How to incorporate local and indigenous knowledge systems into formal climate information frameworks


Addressing the mental health crisis among young people exacerbated by online climate despair


Suggested compromises

Applying traditional journalism standards (fact-checking, multiple sources) to climate activism while recognizing the scientific consensus is not equivalent to fringe denial views


Balancing the need for urgent climate communication with avoiding fear-based messaging by always pairing problems with specific, actionable solutions


Combining top-down policy approaches with grassroots community engagement to address different information needs and trust levels


Working with rather than against the advertising industry by demonstrating business cases for supporting quality climate information


Integrating both digital and offline approaches to reach all populations regardless of connectivity levels


Thought provoking comments

So we lose what the bad actors already knew at that time, the ecosystem logic of this. So the initiative, that one that I did 20 years ago, was very successful in its goals… But when we were talking with them, or even with the scientists, the logic is the scientists want to know how to give better interviews for the journalists. The scientists were not even thinking that there was an important field of research on the issue of information integrity… And this was a huge mistake of our part because we were not prepared enough to think the rest of the ecosystem.

Speaker

Guilherme Canela De Souza Godoi


Reason

This comment is deeply insightful because it reveals a fundamental strategic error in past approaches to climate communication – focusing on individual actors rather than understanding the systemic nature of information ecosystems. It demonstrates how well-intentioned efforts can fail when they don’t account for the coordinated, ecosystem-wide approach that disinformation actors use.


Impact

This comment fundamentally reframed the discussion from viewing information integrity as a series of individual problems to understanding it as a complex ecosystem challenge. It established the theoretical foundation for why multi-stakeholder approaches are necessary and influenced subsequent speakers to emphasize systemic solutions rather than isolated interventions.


Most of this you know, I think there’s been plenty of cleverer people than me talking about that at this conference but the twist that I want you to take away is that this situation does not work for advertisers either and that actually creates opportunities for us to create powerful alliances that can really, really take on some of this system.

Speaker

Harriet Kingaby


Reason

This comment is thought-provoking because it completely reframes advertisers from being part of the problem to potential allies in the solution. It challenges the typical adversarial framing and introduces a strategic insight about aligned interests that opens new pathways for action.


Impact

This comment shifted the conversation from a defensive posture against harmful actors to a more strategic approach of building coalitions with unexpected allies. It introduced the concept that economic incentives can be realigned to support information integrity, which became a recurring theme in subsequent discussions about sustainable solutions.


So people are strategically frustrating or obscuring climate science in order to delay the climate response, to cut into the chains of those in the policy line of coming up with measures that are supposed to address climate change and effectively derailing climate interventions… strategic scepticism is actually replacing climate denialism.

Speaker

Fredrick Ogenga


Reason

This observation is crucial because it identifies the evolution of climate disinformation tactics from outright denial to more sophisticated delay strategies. This insight reveals why traditional counter-narratives focused on proving climate change exists are no longer sufficient.


Impact

This comment elevated the discussion’s analytical sophistication by showing how disinformation tactics have evolved. It helped other panelists and the audience understand why current approaches may be inadequate and influenced the conversation toward more nuanced response strategies that address delay tactics rather than just denial.


In countries like the US and North America, we found out that the mainstream media is still led in terms of spreading false information about climate… What many institutions have long thought, and I include the UN in that, have long thought as mainstream media is no longer mainstream. The mainstream has shifted, and that’s true across many geographies.

Speaker

Charlotte Scaddan


Reason

This comment is profoundly insightful because it challenges fundamental assumptions about media landscapes and information distribution. It forces a reconsideration of communication strategies based on outdated models of how information flows.


Impact

This observation prompted a significant shift in how participants discussed outreach and communication strategies. It moved the conversation away from traditional media-focused approaches toward community-level engagement and influenced discussions about reaching people through trusted local voices rather than institutional channels.


In the region where I come from, climate change is viewed as a white man’s problem. It is viewed as a matter of whether it is cold or it is hot. But in actual fact, what we see is that it means loss of life, loss of property, loss of assets, and displacement… what are the ethics in information sharing and information dissemination when it comes to climate change?

Speaker

Lee Cobb-Ottoman (audience member)


Reason

This comment is exceptionally thought-provoking because it exposes how climate change communication can be perceived as culturally biased and disconnected from lived realities. It raises profound ethical questions about how to communicate urgency without perpetuating colonial or paternalistic narratives.


Impact

This intervention fundamentally challenged the panel’s framing and forced a deeper examination of cultural and ethical dimensions of climate communication. It led to more nuanced discussions about local knowledge systems, community-based approaches, and the need for research and solutions that emerge from the Global South rather than being imposed from outside.


We are in effect guinea pigs in an information experiment in which the resilience of our societies is being put to the test… People just don’t know what’s real, what to believe.

Speaker

Charlotte Scaddan


Reason

This metaphor is striking because it captures the unprecedented nature of our current information environment and the societal-scale risks we face. It frames the current moment as an uncontrolled experiment with democracy and social cohesion at stake.


Impact

This vivid characterization heightened the urgency of the discussion and helped frame information integrity as a fundamental threat to social stability. It influenced subsequent speakers to emphasize the democratic and societal implications of their work, moving beyond technical solutions to consider broader social resilience.


Overall assessment

These key comments fundamentally transformed the discussion from a relatively straightforward presentation of initiatives into a sophisticated analysis of systemic challenges and strategic opportunities. Guilherme’s ecosystem insight established the theoretical foundation for understanding information integrity as a complex system rather than isolated problems. Harriet’s reframing of advertisers as potential allies introduced strategic thinking about coalition-building and economic incentives. The research findings about evolved disinformation tactics and shifted media landscapes challenged assumptions about effective response strategies. Most importantly, the intervention from the Global South participant forced the entire panel to confront issues of cultural bias and ethical responsibility in climate communication. Together, these comments elevated the discussion from operational details to strategic and ethical considerations, creating a more nuanced understanding of both the challenges and opportunities in addressing climate disinformation. The conversation evolved from presenting solutions to questioning fundamental assumptions about how information systems work and how change can be achieved.


Follow-up questions

How can we better understand the impact of social media on audiences regarding climate disinformation in measurable ways?

Speaker

Fredrick Ogenga


Explanation

There is a significant gap in understanding how social media platforms specifically impact audiences with climate disinformation, which is crucial for developing effective countermeasures


What are the governance mechanisms and transparency requirements needed for digital platforms, especially regarding climate disinformation?

Speaker

Fredrick Ogenga and Charlotte Scaddan


Explanation

The lack of transparency in digital platforms about ownership, data sources, and content moderation makes it difficult to address climate disinformation systematically


How can we expand research on climate disinformation in the Global South, Latin America, and Southeast Asia?

Speaker

Fredrick Ogenga


Explanation

Out of 300 studies reviewed, only one was from South Africa, indicating a massive research gap in understanding climate disinformation in these regions


What are the legal frameworks and litigation possibilities for addressing greenwashing and climate disinformation?

Speaker

Fredrick Ogenga


Explanation

This is described as a ‘grey area’ and ‘contentious area’ where data is still minimal, requiring further legal and policy research


How can we develop secure data infrastructure for climate science data in regions lacking reliable data centers?

Speaker

Fredrick Ogenga and Agenunga Robert


Explanation

The need for secure hosting of climate data sets is crucial, especially when data must be stored elsewhere due to lack of local infrastructure


What protocols are needed to protect data collected for carbon credit projects from indigenous territories?

Speaker

Agenunga Robert


Explanation

There are concerns about data security and potential harm to indigenous communities if carbon credit data is compromised or misused


How can we ensure that only 4% of carbon credit benefits go to forest-dependent communities while protecting their data rights?

Speaker

Agenunga Robert


Explanation

Questions about fair distribution of benefits from the Tropical Forest Forever Facility and data governance for indigenous communities


What self-regulatory standards should climate activists and civil society adopt to maintain information integrity?

Speaker

Pavel Antonov


Explanation

Concerns about whether climate defenders should adopt traditional journalism standards to avoid contributing to the fractured information environment


What are the ethical boundaries for communicating climate urgency without inciting fear, particularly in regions where climate change is viewed as a distant problem?

Speaker

Lee Cobb-Ottoman


Explanation

Balancing the need to communicate climate risks effectively while avoiding fear-mongering, especially in contexts where climate change is not seen as locally relevant


How can reliable climate information be effectively delivered to refugee populations in encampment settings?

Speaker

Mbadi (UNHCR representative)


Explanation

Refugees and asylum seekers have limited access to reliable information and are vulnerable to misinformation in enclosed settings


How can we better engage and channel young people’s climate concerns into effective action that influences decision-makers?

Speaker

Larry Maggett


Explanation

Young people are highly concerned about climate change but need better mechanisms to translate their energy into policy impact


What lessons can be learned from the Greta Thunberg movement and how can similar youth mobilization be sustained?

Speaker

Mikko Salo


Explanation

Understanding what made the Thunberg movement effective and how to prevent similar movements from losing momentum


How can information integrity be integrated into youth climate statements and agendas at regional conferences?

Speaker

Jasmine Ku


Explanation

Youth representatives are not considering information integrity as a climate problem, partly due to overwhelming corporate greenwashing that makes the problem seem solved


How can evidence-based climate policies be developed in regions with insufficient data on climate disinformation?

Speaker

Online audience member


Explanation

The challenge of creating effective policies when there is a lack of research and data, particularly in the Global South


How can grassroots approaches and local knowledge be better documented and integrated into climate intervention strategies?

Speaker

Fredrick Ogenga


Explanation

Need to move beyond systematic reviews to primary research that captures local wisdom and oral traditions in climate solutions


What is the relationship between the attention economy, mental health crises in young people, and climate disinformation?

Speaker

Harriet Kingaby


Explanation

Understanding how the same systems that spread climate disinformation also contribute to mental health problems among youth


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

WS #323 New Data Governance Models for African Nlp Ecosystems

WS #323 New Data Governance Models for African Nlp Ecosystems

Session at a glance

Summary

This panel discussion explored new data governance mechanisms for language data driving natural language processing (NLP) ecosystems in Africa, focusing on licensing frameworks that protect cultural sovereignty while enabling innovation. The session was moderated by Mark Irura from Mozilla Foundation and featured six experts from across Africa discussing the challenges of current open licensing models like CC0 that may inadvertently enable extractive practices with African language data.


Dr. Lilian Wanzare emphasized the need for community-centered approaches to data collection and licensing that balance open sharing with benefit sharing, noting that language embodies cultural identity and community aspirations. Dr. Melissa Omino highlighted the distinction between language communities (who preserve languages) and data communities (who create datasets), advocating for community ownership rather than just consent, and introduced the new Litiyabodo Open Data License as an alternative framework. She stressed that communities should define what benefits they want, which often involves sustainable, community-based returns rather than monetary compensation.


Deshni Govender pointed out that extractive practices occur within countries as well as across borders, suggesting policy protections should build on existing cultural and indigenous rights frameworks. She referenced the Nagoya Protocol from biodiversity as a potential model for linguistic resource sharing. Viola Ochola emphasized the need for robust legal frameworks, meaningful community engagement, and capacity building within African nations to support homegrown AI development.


Samuel Rutunda discussed how government AI strategies can raise awareness, create working frameworks, and foster collaborations, while Eli Sabblah shared Ghana’s experience in developing national AI strategy through inclusive stakeholder consultations. The panelists agreed that effective governance requires collaborative partnerships between communities, governments, funders, and developers, moving beyond extractive models toward equitable benefit-sharing arrangements that respect cultural protocols while advancing technological innovation.


Keypoints

## Major Discussion Points:


– **Data Governance and Community Sovereignty**: The need to shift from treating African language communities as mere data sources to recognizing them as collective data stewards with inherent rights to govern their cultural and linguistic data, moving beyond individual consent to community-centered governance models.


– **Licensing and Benefit-Sharing Mechanisms**: Discussion of new licensing frameworks like the Litiyabodo Open Data License that move beyond traditional open licenses (like CC0) to ensure equitable benefit-sharing with language communities, where benefits are defined by communities themselves rather than imposed externally.


– **Anti-Extractive Practices and Cultural Protection**: Addressing how current AI development practices often extract value from African language communities without providing benefits, and the need for policies that protect cultural sovereignty while still enabling innovation and open collaboration.


– **Government Role and Policy Frameworks**: Exploration of how national AI strategies and procurement systems can support community-led governance, including the challenges of government understanding and funding AI/NLP projects, and the need for capacity building within government institutions.


– **Community Capacity Building and Skills Development**: The necessity of building technical literacy, digital rights awareness, and governance frameworks within language communities so they can effectively participate in and control the development of AI technologies using their languages.


## Overall Purpose:


The discussion aimed to explore practical solutions for creating more equitable data governance mechanisms for African language data used in Natural Language Processing (NLP) systems. The panel sought to address the power imbalances and extractive practices in current AI development while finding ways to protect cultural sovereignty without stifling innovation.


## Overall Tone:


The discussion maintained a collaborative and solution-oriented tone throughout, with participants building on each other’s ideas constructively. While there was acknowledgment of serious challenges around exploitation and power imbalances, the tone remained optimistic and focused on practical pathways forward. The conversation was academic yet accessible, with participants sharing both theoretical frameworks and real-world experiences from their work across different African countries.


Speakers

– **Mark Irura** – Moderator, works with Mozilla Foundation


– **Deshni Govender** – Dynamic force from South Africa working at the intersection of law, technology, and social impact; passionate about democratizing AI ecosystems; advisory board member of the South African AI Association; co-founder of the GIZ diverse women in tech network; working group member on AI strategy recommendations for South Africa; featured on the list of 100 brilliant women in AI ethics


– **Rutunda Samuel** – CTO and principal researcher at Digital Umuganda, a leading AI driven voice technology organization for African languages based in Kigali


– **Lilian Diana Awuor Wanzare** – Dr., lecturer at the Department of Computer Science at Maseno University; research interests in artificial intelligence, machine learning, and natural language processing for low resource languages; holds a PhD in Computational Linguistics and an MSc in Language Science and Technology from Saarland University in Germany


– **Melissa Omino** – Dr., Director of the Center for Intellectual Property and Information Technology (CIPIT) at Strathmore University; intellectual property expert; board member at Creative Commons


– **Elikplim Sabblah** – Technical advisor working for the Fair Forward program, a project within the Digital Transformation Center (DTC) Ghana within GIZ (German technical cooperation); focuses on AI policy advisory, open AI resource accessibility and capacity building


– **Ochola Viola** – Director of Access to Information; advocate of the High Court of Kenya; legal practitioner with experience in administrative law, commercial law, human rights and law reforms; holds an MBA in strategic management; Open Government Leadership Fellow


Additional speakers:


None identified beyond the provided speakers names list.


Full session report

# Data Governance Mechanisms for African Language Technologies: A Panel Discussion Report


## Introduction and Context


This panel discussion, moderated by Mark Irura from Mozilla Foundation, brought together six distinguished experts from across Africa to explore new data governance mechanisms for language data driving natural language processing (NLP) ecosystems on the continent. Irura opened by providing context about Mozilla Common Voice, which has collected over 30,000 hours of voice data in more than 180 languages, highlighting both the scale of community contribution and the need for better governance frameworks.


The panel featured Dr Lilian Diana Awuor Wanzare, a computational linguistics expert from Maseno University; Dr Melissa Omino, Director of the Centre for Intellectual Property and Information Technology at Strathmore University; Deshni Govender, a South African legal and technology expert working on AI democratisation; Rutunda Samuel, CTO of Digital Umuganda focusing on African language voice technology; Elikplim Sabblah, a technical advisor with Ghana’s Digital Transformation Centre; and Ochola Viola, Director of Access to Information and legal practitioner specialising in administrative and human rights law.


The session focused particularly on developing licensing frameworks that protect cultural sovereignty whilst enabling innovation, addressing the critical challenge of how current open licensing models like Creative Commons Zero (CC0) may inadvertently enable extractive practices with African language data.


## Core Challenges in Current Data Governance Models


### The Inadequacy of Traditional Licensing Frameworks


The discussion began with a fundamental critique of existing open licensing models. Dr Melissa Omino articulated a crucial distinction that shaped the conversation: “Ownership and consent are two completely different things. The traditional data sharing regime treats communities as sources rather than partners, and this extracts value whilst leaving these very communities with just the risks and harms.”


This observation highlighted how current frameworks, including widely-used Creative Commons licences like CC0, fail to address the power imbalances inherent in AI development. Dr Lilian Wanzare emphasised the need for “community centeredness” in approaches to data collection and licensing, noting that language embodies cultural identity and community aspirations.


### Extractive Practices: Beyond Simple North-South Dynamics


Deshni Govender introduced a particularly thought-provoking perspective that challenged conventional narratives about data exploitation: “I think it’s important also to point out that when we mention the concept of extractive practices, that it’s not always a foreign versus local context. And it’s not a cross-border issue, because I think that extractive practices often happen within countries and within the continent under the guise of the open collaboration concept.”


This insight reframed the discussion from simplistic North-South dynamics to a more nuanced understanding of power structures that can perpetuate extraction even within African contexts. Govender suggested that policy protections should build upon existing cultural and indigenous rights frameworks, referencing the Nagoya Protocol from biodiversity as a potential model for linguistic resource sharing.


### The Complexity of Oral Traditions


A critical technical challenge emerged through Govender’s analysis of African oral traditions: “The problem with having culture or language that is intended for oral knowledge, it means that it’s also shaped by tone, it’s shaped by cadence, it’s shaped by who is telling the story and what is that meaning that’s attached to it… And so it’s kind of hard to understand the asset that you’re working with if you’re not even sure how to put it into create an asset value or an asset form.”


This observation highlighted the fundamental difficulty of digitising oral knowledge systems without losing their cultural essence, presenting unique challenges for NLP development that go beyond simple text-based approaches.


## Community Ownership and Alternative Frameworks


### Moving Beyond Consent to Ownership


The panellists demonstrated broad agreement on the need to shift from treating African language communities as mere data sources to recognising them as collective data stewards with inherent rights. Dr Omino distinguished between language communities (who preserve languages) and data communities (who create datasets), advocating for community ownership rather than just consent.


Ochola Viola emphasised that “community ownership should be legally entrenched with operationalised mechanisms to reach remote communities,” highlighting the need for robust legal frameworks with stringent data collection rules to protect communities from exploitation. She stressed that local communities should control data from collection through usage with meaningful engagement throughout the process.


### New Licensing Approaches


Dr Omino introduced the Litiyabodo Open Data Licence as an alternative framework, though specific details about its mechanisms were not elaborated in the discussion. She also mentioned that Creative Commons has released preference signaling work that complements existing CC licenses, suggesting ongoing evolution in licensing approaches.


Govender referenced other emerging frameworks, mentioning both the “Noodle license” and “Inkuba license” as examples of alternative approaches being developed, though again without detailed explanation of their specific features.


## Government Role and Policy Challenges


### Potential and Limitations of Government Involvement


The discussion revealed both the potential and limitations of government involvement in language data governance. Rutunda Samuel explained how government AI strategies can raise awareness, create working frameworks, add accountability, and help raise resources for language technology development. He shared Rwanda’s experience with Common Voice, which collected 30,000 hours over six years across more than 10 African languages.


However, Samuel also highlighted practical challenges: “Usually, I don’t know, I was talking to someone and say, government is run by accountants. And accountants, they want facts. They want, oh, what is this going to do? And then it’s still in the early stage of the language technology… So it’s very hard to show the facts.”


### National Strategy Development


Elikplim Sabblah shared Ghana’s experience in developing national AI strategy through inclusive stakeholder consultations, noting that Ghana’s draft strategy includes guidelines for data collectors on collection, storage, and sharing practices. He also mentioned the launch of an AI policy playbook at a UNESCO conference, indicating broader continental efforts at policy development.


However, Ochola Viola pointed out critical implementation gaps, particularly in procurement processes where “the procurement person does not, is not aware of AI, let alone even, you know, any other thing.”


## Capacity Building and Skills Development


A critical theme throughout the discussion was the need for comprehensive capacity building across multiple levels. Dr Wanzare emphasised that communities need understanding of AI model development, governance frameworks, and benefit structures to participate effectively in governance decisions.


Sabblah highlighted the importance of outreach programmes to help communities understand AI’s purpose and overcome fatigue from repeated data collection schemes. He also mentioned research on women-led SMEs that are using AI tools without realising it, pointing to the need for broader digital literacy.


Govender noted her inclusion in the “100 brilliant women in AI ethics” list, highlighting the importance of diverse voices in AI governance discussions.


## Economic Considerations and Investment


The discussion touched on funding and investment strategies for African language technology development. Dr Omino advocated for greater local investment, arguing that “governments need to invest locally in NLP rather than looking externally, and challenge local investors to fund model development.”


Samuel, while agreeing on the need for government support, focused more on changing procurement mindsets and willingness to take risks with emerging technologies. The challenge of demonstrating concrete benefits from early-stage language technology investments emerged as a significant barrier to securing both government and private sector support.


## Technical and Infrastructure Challenges


Beyond governance and legal frameworks, the discussion acknowledged significant technical challenges specific to African language contexts. The predominantly oral nature of many African languages creates unique NLP design challenges that require specialised approaches to preserve cultural nuances and communal knowledge systems.


Infrastructure limitations also pose significant barriers to community participation in governance mechanisms. Viola emphasised that digital infrastructure must be available so remote communities can access benefits and engage with AI technology investors.


## Areas of Agreement and Ongoing Tensions


### Shared Principles


The panellists showed broad agreement on several key principles:


– Communities should have ownership and control over their language data rather than just providing consent


– Current licensing and governance frameworks are inadequate and need reform


– Capacity building is essential for effective community participation in AI governance


– Government procurement systems need updating to handle AI technologies appropriately


### Different Emphases


While not representing fundamental disagreements, speakers emphasised different aspects of the challenges:


– Dr Wanzare focused on community knowledge gaps as a primary barrier


– Viola emphasised government institutional capacity and legal framework inadequacies


– Omino stressed the need for complete shifts away from external funding dependency


– Samuel highlighted the practical challenges of demonstrating early-stage technology benefits


## Conclusion


This discussion revealed both the complexity of challenges facing African language data governance and the emerging consensus among experts on fundamental principles. While significant questions remain about implementation strategies and funding approaches, the shared commitment to community sovereignty and equitable benefit-sharing provides a foundation for future development.


The conversation demonstrated that protecting African language data and communities requires not just new licensing frameworks, but fundamental changes in how AI development is conceptualised, funded, and implemented. The panellists’ references to ongoing work on new licensing models and national AI strategies suggest that practical progress is being made alongside theoretical development.


The path forward demands continued dialogue, experimentation with new models, and commitment to centering community voices and values in all aspects of language technology development. As Irura noted in moderating the discussion, these conversations are part of an ongoing effort to ensure that the benefits of AI development reach the communities whose languages and knowledge make such technologies possible.


Session transcript

Mark Irura: Good evening. Good morning. Hi, everyone. Thank you for joining our session. My name is Mark Irura. I’ll be moderating this session. I think we will start with introductions. I’ll introduce the panel. We have three participants who are online and three who are here on stage. I will start with Deshni Govenderni on my right. Deshni Govenderni is a dynamic force healing from South Africa. Her work intersects law, technology, and social impact. She’s passionate about democratizing AI ecosystems. Her skill sets range from prototyping new open source licenses with local language communities to scaling AI and data science through a boot camp that has been conducted for women that has been conducted across three African countries. She has also co-developed South Africa’s first AI maturity assessment framework. She has passionately worked on birthing the language AI hub for African NLP and co-created an AI policy playbook with Global South policymakers. Deshni Govenderni describes herself as a bridge builder and a co-creator of AI in Africa. She’s an advisory board member of the South African AI Association, a co-founder of the GIZ diverse women in tech network, working group member on AI strategy recommendations for South Africa, and was featured on the list of 100 brilliant women in AI ethics. Later we’ll introduce Samuel Rutunda, who is on my left. Samuel is a CTO and principal researcher at Digital Umuganda, a leading AI driven voice technology organization for African languages. Digital Umuganda is a Kigali based AI and open data organization on a mission to democratize access to information in African languages. Founded in 2018, the company builds large scale voice and text data sets and develops voice AI tools to bridge the language divide and preserve linguistic diversity. With projects spanning 17 African languages, they’ve recorded thousands of hours of speech and digitized countless text samples, fueling models for local and global impact. Founded in Rwanda’s tradition of Umuganda, community uplift through collective efforts, Digital Umuganda unites community contributors, developers, governments and NGOs to build open source language infrastructure by Africans for the world. On the far right, on my far right, I have Dr. Lilian Wanzare. Dr. Lilian is a lecturer at the Department of Computer Science at Maseno University. Her research interests are in artificial intelligence and machine learning, in particular natural language processing, you will hear the term NLP a lot in this panel, and building text processing tools for low resource languages. She has served as the principal investigator for several research projects funded by BMGF, the Lacuna Fund, the Canadian Development Research Agency, IDRC, AI4D, among others. She has pioneered the Kenya Corpus, or Ken Corpus as we know it. which is a Kenyan language corpus for NLP and machine learning research. A project that looks at building data sets for training NLP tools for underserved languages, particularly those spoken in Kenya, with use cases geared towards agriculture, education, and health. She also works on sign language research, particularly Kenyan sign language, researching ways of bridging language barriers using virtual signing avatars. She holds a PhD in Computational Linguistics and an MSc in Language Science and Technology from Saarland University in Germany. Online, I will start with Dr. Melissa Omino. Melissa is the Director of the Center for Intellectual Property and Information Technology CIPIT at Strathmore University, a leading Eastern African AI Policy Hub and Data Governance Policy Center. Her research direction is focused on utilizing an African lens and a human rights lens. Part of the research conducted under Dr. Omino’s leadership involved mapping AI applications in Africa as an initial step in answering the question of what determines African AI and the problems that African AI should aim to solve. Dr. Omino is an intellectual property expert and has served as an advisory board member in several projects that intersect between AI and IP. This also includes driving a national AI strategy process and she has also led IP advisory for a global entity that is funding AI research in Africa. We also have Eli Elikplim Sabla. He is a technical advisor working for the Fair Forward program, a project within the Digital Transformation Center DTC Ghana. Within GIZ, GIZ is a German technical corporation. In this role, ELI focuses on AI policy advisory, open AI resource accessibility and capacity building to foster inclusive and sustainable AI development in Ghana. ELI has worked on the development of Ghana’s national AI strategy, collaborating with the Ministry of Communication, Digital Technology and Innovation through the Agency of the Data Protection Commission. With a strong background in data science, monitoring and evaluation, project management and stakeholder engagement, ELI is working towards enhancing AI accessibility, local innovation and responsible AI adoption in Ghana. And last but definitely not least, we have Miss Viola Ochola. She is the Director of Access to Information. She is an advocate of the High Court of Kenya and a legal practitioner with administrative law, commercial law, human rights and law reforms experience spanning over 15 years. She also holds an MBA in strategic management and has extensive experience both in the public and the private sector. She is the immediate former manager in the Complaints, Investigations and Legal Services Department at the Commission on Administrative Justice, Kenya. Viola is an Open Government Leadership Fellow and a member of the Technical Committee on the Open Government Partnership, the Kenya chapter, in her capacity as cluster lead for the Access to Information commitment. She is passionate about open governance and the empowerment of citizenry to access services and benefit from opportunities offered by government. The reason I’ve gone through the elaborate introduction is for you to understand and know who will be talking to us in this topic this evening and also for you to To look up the panelists and reach out via LinkedIn and and and ask questions and connect and and continue to engage on the topic. Our topic today is Exploring new data governance mechanisms for language data driving NLP ecosystems in Africa the issue of Licensing of language has already come up in various workshops And today we want to have a more practical discussion that looks at Research that is currently going on in this topic Language is culture and culture is identity Yet the digital identity of Africa is skewed, manipulated, misinterpreted or disproportionately commercialized. The language data collection is characterized by a Significant disparity between large-scale publicly accessible resources and numerous smaller isolated projects. The Mozilla Foundation seeks to positively impact the way in which local language data is viewed, collected and stored and utilized. Currently, Mozilla Common Voice is the world’s largest most diverse crowdsourced multilingual Open speech corpus holding more than 30,000 hours and more than 180 different languages and is an example of a successful Community initiative that is also a digital public good It is a self-serving community platform as well as a lab for linguistic inclusion and for traversing data governance issues in NLP but there has been an Awakening and sentiment change amongst the language communities and this is what we will delve into today Speakers who crowdsource data sets and some of the issues that have been raised including inequitable investment locally sensitive community control, and the dynamics around power that are impacting the use of the language to build language technology. So in this session, having introduced and set the background for the problem, we are going to highlight the unintended and intended ripple effects of the CC0 open public license on communities and language data, and we want to look at governance and policy, how do they intersect, and what are some solutions that are being worked on to try and resolve the problem. I will go straight to you Lilian, and I will begin with a question on how can AI training data licenses be adapted to protect cultural sovereignty and ensure equitable benefit, especially for those


Lilian Diana Awuor Wanzare: who have been marginalized. Thank you so much Mark for the introduction and for the question. I think we all know that when it comes to AI systems, data is core, and when it comes to NLP systems, that data is language. But what is language specifically, as Mark has mentioned here, it is really more than a group of words. It embodies really the aspirations of the communities from where they come from, and the cultural identity of the different communities. So if you look at that, and to think about this data, how can it be licensed in a way that still promotes the cultural values from where they come from. I will think about it as one, community centeredness. How do we go about collecting the data from the community themselves? How do we manage the use of this data along the journey as it’s been used in the NLP systems? And if you look at community centeredness, there’s a lot of things that go into it. One of them is consent. As they are going to provide the data, are they properly informed? And this informed consent is a continuous process. An understanding of the journey of their data as it goes around being developed and moves across as it is being used for different NLP systems. Now, how do we balance the issue of within the licensing, the issues of open sharing vis-a-vis benefit sharing? Those things should not be mutually exclusive. We can still have open sharing and still have benefit sharing. How can this be embodied within our existing licensing for us to be able to have both? In such a way that, yes, we still do open sharing to facilitate development of tools, development of systems that promote the language, but still from where the data comes from. It’s no longer extractive. They too have a benefit of whatever tools are going to be developed from them. And how does this move not just from the data community, but from the larger language community? Those who collected data themselves, but the larger community who speak the particular words, languages. And if I think about it in the last bit as a close, in this licensing ecosystem, I know there are different bodies that look about licensing and licensing in general. In this ecosystem, how transparent is it to enable different views and different combinations that support different requirements by different people? You’re able to really pull things together that are aligned to your values. And there’s no one size fits all. There can be different ways of combining the licensing, the ecosystem that still supports the community centeredness with the communities, but still allows for open sharing and development across the journey. That would be my opening remark.


Mark Irura: Thank you so much, Lilian. I will come to you, Melissa. And Lilian has mentioned something to do with the different needs and the different requirements. over the entire, let me call it language or AI language life cycle and she’s talked about values and to it I want to throw in the benefit. I will not just make it economic but when we think about benefit I would like you to help us to unpack that in view of the question like how do we think about sovereignty but we also enable these things based on the work that you’re currently undertaking. Okay, thank you so much Mark. I hope you can hear me. Yes. Excellent.


Melissa Omino: So I think when we think about the benefit I don’t think that we here should be discussing the benefit without referencing the language community because that’s where the benefit should flow. So in part of the work that CIPIT is doing in collaboration with the data science law lab at University of Pretoria is reaching out to these language communities and I’m being very specific about this term language community because there’s also a data community that exists and is made up of African AI data developers who actually collate languages into these particular data sets for natural language processing. We think that that community, the language community should be able to speak for itself and say what type of benefit that they would require for the particular use of that language data set and it has already been mentioned by Dr Lillian that different types of uses might require different types of benefits or might actually require a different thought as to what a benefit would mean. A lot of resistance towards having these language communities speak about quote-unquote a benefit is that it is automatically assumed to be a monetary thing or a royalty based thing but essentially we are saying that it should be given up to the community to decide what that should be. Most of my discussions with various communities, including the Tuluwa community via Masanui University, has been that they want something that is sustainable and that is community-based, meaning something that everyone in this language community can interact with and benefit from. And a monetary or royalty benefit doesn’t quite meet that mark. So, essentially, we need to think about the harmful dynamic that has been created with the current use of language datasets and the fact that these language datasets being commodified in AI systems primarily serves dominant languages and wealthy corporations, while marginalized communities receive no benefits, no matter how you define it, and often leave their cultural protocols, practices, values violated. So, a benefit could actually be the respect of the cultural knowledge that the language carries or even a share or access to this AI tool that has been built using the language data. So, can a licensing framework deliver this? I think that it can. That’s what the new dual license, the new Litiyabodo Open Data License, is meant to do. It’s supposed to provide an avenue where this conversation about what type of benefit would flow to a community would start from. And we are sort of trying to fit it into what currently governs the language dataset regime, which is copyright licensing. So, we came up with an alternative license that has elements of copyright, but also has elements of recognition of cultural knowledge and giving a voice to the community to negotiate about what they would want as a benefit. And here, I also have to signal the Creative Commons. community, where I am a board member, who just yesterday released publicly their work on preference signaling that would work hand in hand with Creative Commons licenses. And this essentially gives data stewards of particular data sets that are being utilized by AI to be able to say what they would prefer that data set to be used for or as. So this is actually signaling that this act of benefit recognition, benefit sharing is something that is being worked on and needs to be worked on. And maybe it’s not for us to determine, because it would just be us imposing our thoughts on these language communities, but bringing language communities to the forefront so they can speak for themselves as to what they would like. Thank you.


Mark Irura: Thanks, Melissa. It’s, I appreciate especially your comment on benefit. And with benefit, obviously, one of the things that is apparent in the continent is the issue of avoiding, you know, the recolonization through language and through AI. And I think this is an important question to ask you, I’ll come to you, back to you and ask about policy. When we think about policy, and we think about policy frameworks, what are broad principles that we can incorporate to think about, you know, equity and anti-extractiveness, so that there is mutual benefit, we do not stifle innovation, as Lillian said, but we’re able to grow and advance because we still need a commons to be able to move forward.


Melissa Omino: Thanks, Mark. I think that in order to have real equity, we need, we are required to think about communities as having ownership. and not just a group that would provide consent. Ownership and consent are two completely different things. The traditional data sharing regime treats communities as sources rather than partners, and this extracts value while leaving these very communities with just the risks and harms. So there has to be a shift where there is community data sovereignty, and I think Lilian has mentioned this or has alluded to this, and you have also alluded to that, Mark, where we legally recognize communities as collective data stewards with inherent rights to govern data about their members, territories, and cultural knowledge, which is where language would fall into. So individual consent is not enough when data affects an entire community. We need graduated consent that requires community consultation before individual agreements. The community gets to weigh in on whether that serves their collective interests. They get to voice what their collective interests are, and this includes verification rather than one-time permission and complete transparency about who is benefiting and how, and also deferring community veto power over harmful applications. So if someone profits from community data, the community must benefit too, and this means a mandatory benefit sharing requirement where communities might get a percentage of profits if that’s what they want, or they might get capacity building investments in infrastructure, education, and priority access to products developed using their community data, and this is not coming from me. This is coming from consultations that I’ve had with specific community members. So in order to prevent exploitation or to make this shift to this new utopia that I’m speaking about, We need strong anti-extractive safeguards, so data should not be sold to third parties without going back to the community for permission. Communities must be able to reclaim their data and take it elsewhere if they feel like, which requires regular audits with results shared publicly in accessible formats to ensure accountability. All these should be backed by penalties for violating community agreements. And I must admit here my bias as a lawyer, I’m really thinking about legal frameworks and structures. So that’s why I’m talking about accountability and I’m talking about enforcement mechanisms. But I think that really works currently in the language data sharing regime because they are using legal agreements being copyright licenses to govern the sharing of this data. So the conversation or rather what I’m trying to highlight here is that it’s ultimately about power and not just viewing data as a tool under a data governance regime, so not just about privacy. It’s really about where is the wealth and power concentrated and how can we then distribute this in an equitable manner. So legal frameworks would be one of the policy considerations that I would think of, but I also think that governments when coming up with their AI strategies and policies, which a plethora of those have happened on the African continent, they need to center culture as one of the main pillars of their strategy. I know the Kenyan strategy does that. It does mention that culture is an important factor. It does mention responsible and ethical AI, which this would be a pillar that this conversation would fall under. And it also talks about model development for problem solving on the continent. And you cannot talk about model development for problem solving if you do not think about language data sets. So I think that this is… essentially how we can get to a balance. It’s not about closing off the data, it’s about ensuring that it’s an equitable exchange between those who want to collect and use the data and the communities that have preserved and curated. And again, I say that there are two communities that exist, the language community that has suffered historical disadvantages in curating and preserving the language, particularly in the context of Africa, but there’s also the data community, which is the African AI developers who put in effort, who’ve used their skills and knowledge in creating these datasets, and who have an interaction with those who fund these activities. So there needs to be a balance for, let’s talk about these three parties in this context, those who would like to use the dataset, those who have curated the languages and preserved them, and those who actually have created the dataset.


Mark Irura: Thanks. Thanks, Melissa. I’m looking at you, Deshni Govender, now. Melissa has taken us to utopia, to canon, but we need to come back now here to what exists now that we could latch on to. And even as Deshni Govender gives her remarks, maybe I’ll ask you, I’ll ask you, Viola, to also be on standby to give us a different perspective, if there’s anything that Deshni Govender will have missed out. So over to you, Deshni Govender.


Deshni Govender: Sure. I think it’s important also to point out that when we mention the concept of extractive practices, that it’s not always a foreign versus local context. And it’s not a cross-border issue, because I think that extractive practices often happen within countries and within the continent under the guise of the open collaboration concept. I do think that policy protections that cover digital work should also actually take their foundational basis from existing protections that are afforded to cultural and indigenous communities which exist in a civil context. So, assuming those foundational building blocks exist, then policy protection can almost come into play in two ways. Sorry. Policy protection can come into play in two ways, which is, A, as a source for human rights, because that’s really important protecting labor rights and gig workers who also often do the unsexy work of labeling data, of training algorithms, but also come in as a counter leverage point in the context of open source and digital public goods. And we’ve heard the speakers mention the concept of quid pro quo. If you take something, give something back. And I’ll just run through very quickly a few points. So, fair sharing is one way. And then my co-panelist Melissa mentioned the Noodle license, but there’s also the Inkuba license that was developed. Another way is if a commercial actor has to cross-subsidize public maintenance for open source AI resources, what would that look like? Does it come with conditions? But the use of open grants or long-term partnerships that actually benefit the community. So, one example was a grant that google.org had given to Ghana NLP, which had really very minimal conditions attached that the community could use as they saw fit. I think the other one for that AI policy could include, which doesn’t often happen and should, is having where there are foreign investors or foreign partners, including local partners as equal collaborators, because oftentimes localized partners come in as just consultants. And when you have an equal collaborator, you have a co-ownership of the data corpora, and that could be often done by MOUs or just general contracts. And I think that policies should make AI developers accountable and that accountability can look like… impact reports or independent audits. I will mention very quickly something that I came across before I hand over to Viola in my research, something that’s called the Nagoya Protocol. And this actually exists in the biodiversity space, which basically requires fair and equitable sharing of benefits in the use of genetic resources. And that’s like plants, animals, microorganisms, etc. And I feel like if we want to learn, we could learn from parallels like this. So establishing something like a linguistic protocol for use of African languages and AI could be a great policy tool for regional principles or codes of conduct. I guess another policy tool could be the AI policy playbook that was recently launched at the UNESCO conference a few weeks ago. But I’ll stop here.


Mark Irura: Over to you, Viola. All right.


Ochola Viola: Thank you. Thank you, Mark. And having speaking after Melissa and Deshni Govenderni, I think most of them have sort of taken out most of the policy requirements. But I would still emphasize on the issue of the data sovereignty and equitable data sharing that both Melissa and Deshni Govenderni talked about. The local African communities should be able to control the data from the point of collection and up to the point of usage of these AI technologies so that they are able to be part of the process. So the whole process has to be inclusive. They should not just be there at the point of information givers or data givers, but they should be involved in the whole process. And Melissa also mentioned having that she’s a lawyer, so she’ll be biased around the legal framework. So I’ll also speak, I’m a lawyer. So definitely the legal framework around the collection of this data. has to be very stringent, has to be very robust, so that the local communities are protected from possible exploitation from the external of the big tech, so to speak, so that even at the point of usage, the benefits, whatever way they may define these benefits, they are able to benefit from that, so that it’s not an issue that they feel that are being exploited. Quickly, the aspect of community ownership should not just be something that is entrenched in the law, but there should be actual mechanisms that have been operationalized within the ecosystem, within the African countries, so that these local communities can be reached, because sometimes you’ll realize that some of these communities are in very remote areas in the African continent, and sometimes even in terms of the digital infrastructure, they cannot even access some of these benefits or some of these issues that the external parties want to develop. So, it would be important for the governments, at least the African governments, to make sure that the infrastructure is available, so that these communities can be able to reach out to these investors who might want to develop these AI technologies using their languages. And with that, like I said, the engagements have to be very meaningful. It shouldn’t just be, like one of my co-panelists said, something that you’re just called to give information or to give data. You have to be aware and to understand what exactly you’re giving out and the possible repercussions. And finally, I’ll speak to another policy as perspective, that one of building the capacity and skills development of the African nations, because we realize sometimes the issue is the lack of skills and the lack of the capacity to do this within the continent. So it’s important for the various policy frameworks to be able to put in place possible training solutions or skills development strategies, so that some of these technologies are homegrown and home-owned also, so that you then now even develop a framework within which you can transfer the knowledge locally beyond just waiting for the external parties to come in. And this, it’s not necessarily to be done within the country. You can also collaborate with the big tech to be able to develop the skills within the continent, then now the skills will be developed from there. So I think I’ll stop there. Thank you.


Mark Irura: Thanks Viola. And also thank you for such a broad response. You covered infrastructure, you covered capacity building, and this speaks to an ecosystem approach. You can’t just develop infrastructure only. You can’t just build capacity. You can’t just develop policy. So I will look at you some now, because when we are coming up with national AI strategies, I know Rwanda has one, the goal is to think about an ecosystem, is to think about where do we want to go and what do we want to achieve. And I will also ask you, Eli, to share experiences from Ghana, since you’ve gone through this cycle. I will start with you, Sam. And it’s an abstract question, but it’s also a simple question. Very simple. Can government support community-led governance? Could government partner? It’s always top-down. It’s always, this is what you need to do. What do you think, like, these strategies can help to support the growth of the AI ecosystem that is coming up? Thank you.


Rutunda Samuel: Thank you, Mark. Yeah, I think, first, the AI strategies or AI policies, they help within these three categories. First, they raise awareness. Usually, once something becomes a strategy or a policy, it makes people to know about it. So AI, once it’s implemented, people are now looking at all the components of AI, of which currently the major one is the language component. Second, it creates a working framework that AI governments and other entities can use as a guideline or as a framework to follow. And then it also adds some accountability, because they have to explain something. And this helps us, where in the absence of that policy, this could not have been a way. And then, in terms of what it creates, it starts creating a discussion. It means now when you go to them, you can have a base of how you can discuss. You have some place from where to start the discussion, and then they can look at it, and they can say, oh, we have actually a plan, we have a policy or a strategy. this is what it says. And then the thing about language is cross-cutting and it touches many aspects of everyday life and then start creating synergies. So for example someone in health can say oh actually we are thinking of using this tool but then they don’t know how to do it but given that there is a policy they have where to ask and then it’s that even us as a community start saying oh how about we work within the health for example medicinal plants is it something we can capture within our languages. So it creates synergies and collaborations and then ultimately the goal is to raise resources. So with these discussions with these collaborations how as a country we start now streamlining how we raise resources because there is a need to raise the resources. Yeah I think that’s what I would say. I’ll invite you Eli to also


Mark Irura: contribute to that point. Bearing also like we have a global audience and we have also ways that we are trying to see and build these ecosystems in a way that others could learn from us. Right thank you very much Mark and so I


Elikplim Sabblah: particularly would say that government should definitely support local communities to take ownership or lead on data governance so far as language data is concerned. Government should actually empower local communities. I mean by thinking about the idea of national strategies AI policies and AI strategies even looking at the way they are developed and drafted it actually includes whichever approach is taken. includes local communities and major stakeholders. And so just by that definition, through stakeholder consultations, ecosystem analysis and research, SWOT analysis, all that process should already include communities that are existing in the space. And so if that is the case, then it is, in the first step, a way of supporting the community to also take ownership of whatever comes out of there, of what data governance is concerning in a particular country. Now, what I’ve learned in the process in Ghana is that currently we have a draft national AI strategy that is undergoing review. And throughout the review processes, we reach out to various groups and trying to understand their specific needs and what they would like to see in the reviewed document. And it has been consistently spoken of how they need to see representation in there or how they would like to be empowered to be able to govern data sets that are generated within the same space. Now, to this, I would say that in the already existing draft, there is a pillar that actually speaks to this, and it’s called the Pillar 5, which says that the strategy seeks to provide data collectors with guidelines and principles for collecting data, storing and sharing it. I think this creates an avenue for the government to empower local communities to be able to take the lead or ownership as far as data governance is concerned. If the strategy would actually pinpoint specific principles and guidelines that these communities need to take, that would eventually influence the level of ownership they would be able to take of the data governance system in the country. So, I think a lot has been said already, and we also need to take a look at the adoption of alternative licensing models like the NUDO that has already been mentioned by Dr. Melissa on the call. in the session. And I think that this, when we want to take this approach, I think it will all go well for the communities involved. Yeah.


Mark Irura: Thanks, Eli. I think this is something also that always comes up with me and this morning in a session I had it. So I wanna put an open question to the panel. So you’ve talked about rules and regulations. I’ve not talked about money. Some before this panel asked me a difficult question about money. And one of the challenges even that came up earlier was the procurement systems is there because procurement provides an opportunity for these communities, developer communities that Melissa mentioned. And even Viola talked about like people who are in remote areas, they cannot benefit because there’s no infrastructure, there’s no connectivity. So to this panel and to anyone who might have a thought on it, the issue about public procurement and the ability to procure innovation, that conversation with government, not just in Africa but globally because I think that’s also an issue. Do you have any reflections on it? We have representation from government but we’ll not put her on the spot. But anyone who has a view, like what could we do in this regard so that even as we talk about governance, procurement becomes an issue and thinking about procuring innovation. Any thoughts?


Rutunda Samuel: Yeah, let me start. Usually, I don’t know, I was talking to someone and say, government is run by accountants. And accountants, they want facts. They want, oh, what is this going to do? And then it’s still in the early stage of the language technology, particularly within our domain, especially for low-resource languages. So it’s very hard to show the facts. It’s something to say, oh, I’m going to take a chance and then I will see. Yeah, but I think there is a need to take a chance. For example, when we worked in the beginning with Common Voice for Rwanda, there was no policy, there was no AI ecosystem, there was nothing. But then there was a leap forward to say, okay, let’s take a chance. And now six years, I think, 30,000 hours have been collected. I think there is, at least last time I checked, there’s around more than 10 African languages that were done. So there is a need to take those chance. But then that requires us to talk to people and to convince and to change mentalities to say, okay, just this is what happened. And then another thing, currently I’m also looking, although we are talking about language, we should look at the settings. So for these technologies to be used, there’s maybe access to the internet or the digital literacy and others. So we’ll have to look globally, but there is a need of changing of the mindset to deploy some use cases and then to learn from it before needing to first having the proofs so that you can deploy.


Deshni Govender: I think I would come in for a bit. So one of the things we know particularly about African language or African NLP or just NLP for indigenous languages is that a lot of the time it’s oral and it’s particularly so for African languages but also for other cultures. And the problem with having culture or language that is intended for oral knowledge, it means that it’s also shaped by tone, it’s shaped by cadence, it’s shaped by who is telling the story and what is that meaning that’s attached to it. And also communal use. And the problem with that is that it creates a little bit of a NLP design flaw. For example, like a design challenge and how do you actually then codify knowledge that is not as easy as taking something that is a book and then making it digital. And so the point I’m trying to make is that when we’re talking about procurement and talking about what it is that we need to do, we need to understand what asset we’re actually working with. And it’s kind of hard to understand the asset that you’re working with if you’re not even sure how to put it into create an asset value or an asset form. You know it’s an asset, but you don’t exactly know how to make this tangible and make it in a form that somebody says, oh, that’s actually interesting. I’m willing to invest in it or I’m willing to do this or I’m willing to do that. And so it’s the difficult part of trying to actually unpack that and then unpack it properly and in a way that you actually shape and preserve and protect the cultures and the nuances that come with trying to take this raw material that is an asset to the people, but then make it a tangible and international value that you could say, cool, as a country, we have this and we have this. Now let’s see how we can use this as a bargaining tool to come in for infrastructure development, to come in for knowledge sharing, but still protect the people.


Melissa Omino: I’m going to ask you a very lawyerly question, which is when you talk about procurement, are we also talking about funding, right? Because when you say money, I think about funding. And if you think about that in the local context, I really think that the challenge is on government to move away from looking to other people to save us. And I’m really stealing that sentiment from Dr. Albert Kahira, who was one of the keynote speakers at COSA, where he said, nobody’s coming to save us. We need to start thinking of ways where we can invest, locally invest in natural language processing so that we can then call the shots or really have the terms, put down the terms of how the language data would be used. And I think this is something that government is very much aware of. A lot of conversation around the Kenya National AI Strategy has been how will it be implemented? The Kenya government made the decision to keep the implementation plan away from public purview, but there is an implementation plan there. There are key performance indicators there, and there are key partners who have been identified to help with the implementation of that AI strategy. Because essentially this conversation that we’re having, we are right at the beginning cycle of natural language processing, and the experts in the room can say that. We are merely talking about data collection when we talk about language data. We need to get into the conversation of building models that will utilize this language data. And that’s why we are up in arms about having that language data open and free for all, because it will minimize the ability for local companies to invest in that language data and build models, because the market will thoroughly thrash them, if you’re talking about market economics, demand, supply, et cetera, which also as a lawyer I might not be very good at. That’s the end of my disclaimers. So I think when we talk about procurement, we need to think about funding. we need to also stop looking outside, we need to think about locally on the African continent, how can we fund? At the Kigali AI Summit earlier this year, there was a conversation about infrastructure, there was a conversation about having data centres, which is very integral to how do we control who can access and use the data, and there was a conversation about starting to have particular data centres in particular regions, and the question was, will it be accessible to African developers, or are we creating data centres for others to use on the continent in order to be compliant with data governance regimes. So I would say for public procurement to make sense, we need to first think about funding. To think about funding, we must challenge local investors to put their money where their mouth is and invest locally, not just in data collection, but in the development of models, because as far as I know, nobody outside is actually funding the development of models in order for us to actually truly have African AI.


Mark Irura: Thank you. Thank you, thank you, Melissa. I’ll come to you Viola, and if you’re online and you have a question that you’d like to pose, please put it in the chat. Over to you Viola.


Ochola Viola: Thank you, Mark. Mine will be quick. Melissa has talked about the funding aspect, because you know, you can’t talk about procurement without the funding bit, but there’s the other aspect of procurement, which is the process, and I believe that was where the challenge you were speaking on was. The question is, does even the procurement officer understand what it is? In government, where I am, there’s always a process. For example, in Kenya, there’s the Public Procurement Act that outlines the process of procurement, and part of the process is you need to give specifications and you say that this is the end product. Now, sometimes the procurement person does not, is not aware of AI, let alone even, you know, any other thing. So it will be difficult for such a person to even appreciate where you’re coming from, if you want to procure this. So maybe as a way forward, and now that Kenya has developed the strategy, it’s very fresh. It was launched in March. And this I will tell you, we may perhaps need to just build the capacity of some of these key offices, for example, the procurement arm of government, so that they’re able to appreciate that this may not necessarily be a tangible item that we are looking at, but it could be something else. So that is number one. And number two, the laws, because now the laws, as we have them now, do not appreciate such things that we may need to review the laws so that they capture these angles. And these laws should not only be reviewed by lawyers, because I mean, Melissa knows, you need to have the technical capacity to be able to put it in the laws in a way that then it will inform what you want to get at the end of the tunnel. So I think I’ll stop there with respect to procurement. Thank you. There’s a friend of mine who says, for government,


Mark Irura: procuring a packet of milk and procuring an AI system is the same. It’s not supposed to be like that. I will come to you, Eli, and I will ask a question. What sort of skills would communities need to build in order to govern their own language technologies effectively? What skills would communities need to build in order to govern their own language technologies effectively?


Lilian Diana Awuor Wanzare: What skills would communities need to build in order to govern their own language technologies effectively? What skills would communities need to build in order to govern their own language technologies effectively? What skills would communities need to build in order to govern their own language technologies effectively? What skills would communities need to build in order to govern their own language technologies effectively? What skills would communities need to build in order to govern their own language technologies effectively? What skills would communities need to build in order to govern their own language technologies effectively? What skills would communities need to build in order to govern their own language technologies effectively? What skills would communities need to build in order to govern their own language technologies effectively? And for them, they really don’t have a governance framework. If somebody wants to use our data, how do they come in? If I want to share my data, how do they come in? If the media wants to share the data, how do they come in? All these data generators, how do they come in? What is their benefit structure? As again, you can see then, it is because they really don’t know what comes together to develop these AI models. There’s a really disjointness in terms of how this comes in, vis-à-vis the model. I want a model that’s able to help me with chemistry in the law. I’m asking, do you even understand what you’re saying? First of all, do you have chemical terms in the law? Before this model can start talking about chemistry in the law, how do we get it there? You see, there’s the utopia of this thing is magical, but there is no understanding of how do we get there, and how do all the stakeholders come into place to make us get there. So that really needs to be put into place. Thank you.


Mark Irura: Thanks. Eli, I’ll ask you maybe almost to wrap it up or to talk a little bit about anything to do with community work, right? Since we are at this place where we’re thinking about governance of products that will be developed for and by these communities and probably in collaboration with them.


Elikplim Sabblah: Thank you very much, Mark. And I don’t know, for the past few responses that have come from the other panelists, one theme is connecting all of it, and you can hear a lot about maybe outreach and community sensitization and all of that. I think that we have to understand that some definite skills have to be built. We need people in the communities who understand digital rights, who understand the importance of data, and who understand linguistics or who have skills in linguistics to be able to maximize the opportunity that this technology brings to their communities. Now, one of the things that I’ve come… to understand is that sometimes there is community fatigue regarding contributing to data collection schemes and so there’s desensitization would make them understand that there is a purpose for this and they may not have immediate should I say benefits in terms of maybe monetary terms or whatever support they may need in the immediate sense or but it goes a long way to contribute to something bigger that can actually benefit them immediately and then also the nation as a whole. So I think that it is important for us to understand the need for outreach programs to reach out to people in communities to let them understand the purpose of artificial intelligence. Recently we did a research trying to understand how women-led SMEs or entrepreneurs are using AI and NLP tools to be able to interact with their customers and partners and all of that and we came to the understanding that most of them are probably even using tools that have AI algorithms working in them but they don’t even know and then also some of them will even express a certain level of fatigue as I already mentioned that they are tired of contributing to data collection schemes and stuff like that but then we actually need people with indigenous knowledge and indigenous experience to contribute to these things. Now one other thing that I wanted to also point out is the need for us to let these models that we’re developing on the African continent to represent African culture and one culture is the shared ownership of resources and when you talk about African culture and oral tradition you’ll notice that proverbs and idiomatic expressions and stories don’t have proprietary ownership, it belongs to the community and so that should be reflected in the models that we build and in our data collection activities so that data and models are openly accessible to all. I think I mashed up a lot of things but yeah basically that’s what I wanted to end with. Thank you. Thanks Eli.


Mark Irura: So I will, a question has come. Wow, and we have just run out of time. Lilian, in 30 seconds, how to bridge the gap between building capacity for local communities in AI beyond collection and increasing usage of AI models within those same communities. 30 seconds, please. It is really about partnership and


Lilian Diana Awuor Wanzare: collaboration. If we think about the whole model, we have the local ecosystem, the government, the funders, internal players. How do we come together collaboratively to be able to make this possible? It cannot be disjointed. It has to be a collaborative effort within all members of the ecosystem.


Mark Irura: Thank you. I don’t want to recap what has been said. I began the panel with an elaborate introduction of everyone. Maybe I didn’t introduce myself properly. I’m Mark I work with the Mozilla Foundation. You can follow us online, each one of us. You can hit the subscribe button and like. No, you just follow us on LinkedIn and feel free. I’m making a pact with each of the panelists. Feel free to reach out and ask them about the work and about this work and about what they’re doing. Thank you so much. Thank you so much and thank you for being part of this panel. We really appreciate it. Thank you.


L

Lilian Diana Awuor Wanzare

Speech speed

151 words per minute

Speech length

836 words

Speech time

332 seconds

Community-centered data collection with informed consent and benefit sharing while maintaining open access

Explanation

Wanzare argues that language data licensing should embody community centeredness through proper informed consent processes and benefit sharing mechanisms. She emphasizes that open sharing and benefit sharing should not be mutually exclusive, allowing for both development of tools and non-extractive practices that benefit the originating communities.


Evidence

She mentions the need for transparency in licensing ecosystems that support different requirements and values, allowing for different combinations that are aligned to community values while still supporting open sharing and development.


Major discussion point

Data Governance and Licensing for African Language Data


Topics

Legal and regulatory | Human rights | Sociocultural


Agreed with

– Melissa Omino
– Deshni Govender

Agreed on

Current licensing and governance frameworks are inadequate and need fundamental reform


Communities need understanding of AI model development, governance frameworks, and benefit structures to effectively participate

Explanation

Wanzare argues that communities lack understanding of how AI models are developed and what governance frameworks should look like. She emphasizes that communities need to understand the entire process from data collection to model development to effectively govern their language technologies.


Evidence

She provides examples of communities asking for AI models for specific purposes without understanding the technical requirements, such as wanting ‘chemistry in the law’ without having chemical terms in the language or understanding the development process.


Major discussion point

Community Capacity Building and Skills Development


Topics

Development | Sociocultural | Legal and regulatory


Agreed with

– Elikplim Sabblah
– Ochola Viola

Agreed on

Communities need capacity building and skills development to effectively participate in AI governance


Disagreed with

– Ochola Viola

Disagreed on

Primary barriers to effective language data governance


Bridging capacity gaps requires collaborative partnerships between local ecosystems, government, funders, and international players

Explanation

Wanzare emphasizes that building capacity for local communities in AI requires collaborative effort from all stakeholders in the ecosystem. She argues that the approach cannot be disjointed but must involve partnership between local communities, government, funders, and international players.


Major discussion point

Community Capacity Building and Skills Development


Topics

Development | Economic | Legal and regulatory


M

Melissa Omino

Speech speed

155 words per minute

Speech length

1892 words

Speech time

729 seconds

Language communities should define their own benefits rather than having monetary benefits imposed on them

Explanation

Omino argues that language communities should be able to speak for themselves and determine what type of benefit they want for the use of their language data. She emphasizes that benefits are often automatically assumed to be monetary, but communities typically want sustainable, community-based benefits that everyone can interact with and benefit from.


Evidence

She cites discussions with the Tuluwa community via Masanui University, where communities expressed wanting sustainable and community-based benefits rather than monetary or royalty-based benefits. She also mentions Creative Commons’ recent work on preference signaling for AI data use.


Major discussion point

Data Governance and Licensing for African Language Data


Topics

Legal and regulatory | Human rights | Sociocultural


Communities should be recognized as collective data stewards with inherent rights, not just sources providing consent

Explanation

Omino argues for a fundamental shift from treating communities as data sources to recognizing them as partners with ownership rights. She emphasizes that individual consent is insufficient when data affects entire communities, requiring graduated consent with community consultation and veto power over harmful applications.


Evidence

She explains that the traditional data sharing regime extracts value while leaving communities with risks and harms, and provides examples of what community stewardship would include: verification rather than one-time permission, transparency about benefits, and community veto power.


Major discussion point

Community Sovereignty and Ownership


Topics

Human rights | Legal and regulatory | Sociocultural


Agreed with

– Ochola Viola
– Elikplim Sabblah

Agreed on

Communities should have ownership and control over their language data rather than just providing consent


Alternative licensing models like the Litiyabodo Open Data License can provide frameworks for community benefit negotiation

Explanation

Omino presents the Litiyabodo Open Data License as a solution that combines copyright elements with recognition of cultural knowledge and gives communities a voice in benefit negotiation. This license aims to address the harmful dynamics of current language dataset commodification that primarily serves dominant languages and wealthy corporations.


Evidence

She mentions that this work is being done in collaboration with the data science law lab at University of Pretoria, and references Creative Commons’ preference signaling work that would complement such licensing frameworks.


Major discussion point

Data Governance and Licensing for African Language Data


Topics

Legal and regulatory | Intellectual property rights | Sociocultural


Agreed with

– Deshni Govender
– Lilian Diana Awuor Wanzare

Agreed on

Current licensing and governance frameworks are inadequate and need fundamental reform


Governments need to invest locally in NLP rather than looking externally, and challenge local investors to fund model development

Explanation

Omino argues that governments must move away from expecting external salvation and instead focus on local investment in natural language processing. She emphasizes that local funding is crucial for building models that utilize language data, as this would allow African developers to set terms for language data use rather than being outcompeted by the market.


Evidence

She references Dr. Albert Kahira’s statement ‘nobody’s coming to save us’ from COSA, mentions discussions at the Kigali AI Summit about data centres and infrastructure, and notes that currently nobody outside Africa is funding model development for truly African AI.


Major discussion point

Government Role and Policy Frameworks


Topics

Economic | Development | Legal and regulatory


Disagreed with

– Rutunda Samuel

Disagreed on

Approach to funding and investment in African NLP development


D

Deshni Govender

Speech speed

169 words per minute

Speech length

867 words

Speech time

307 seconds

Extractive practices occur both within and across borders, requiring policy protections based on existing cultural and indigenous rights

Explanation

Govender argues that extractive practices in language data collection are not only foreign versus local issues but also occur within countries and continents under the guise of open collaboration. She suggests that policy protections for digital work should build upon existing protections for cultural and indigenous communities, serving both as human rights protection and counter-leverage in open source contexts.


Evidence

She mentions examples like Google.org’s grant to Ghana NLP with minimal conditions, the Inkuba license development, and the concept of cross-subsidization by commercial actors for public maintenance of open source AI resources.


Major discussion point

Data Governance and Licensing for African Language Data


Topics

Human rights | Legal and regulatory | Sociocultural


Agreed with

– Melissa Omino
– Lilian Diana Awuor Wanzare

Agreed on

Current licensing and governance frameworks are inadequate and need fundamental reform


African languages being primarily oral creates NLP design challenges in codifying knowledge that isn’t easily digitized

Explanation

Govender explains that African languages are often oral and shaped by tone, cadence, storytelling context, and communal use, which creates significant challenges for NLP development. This makes it difficult to understand and quantify the asset value of language data, as it’s not as straightforward as digitizing written books.


Evidence

She describes how oral knowledge is shaped by who tells the story and the meaning attached to it, and explains the difficulty in creating tangible asset forms that investors can understand and value appropriately.


Major discussion point

Technical and Infrastructure Challenges


Topics

Sociocultural | Infrastructure | Development


O

Ochola Viola

Speech speed

132 words per minute

Speech length

894 words

Speech time

403 seconds

Legal frameworks must be robust with stringent data collection rules to protect communities from exploitation

Explanation

Viola emphasizes the need for very robust legal frameworks around data collection to protect local African communities from possible exploitation by external big tech companies. She argues that these frameworks should ensure communities benefit from AI technologies regardless of how they define those benefits.


Major discussion point

Data Governance and Licensing for African Language Data


Topics

Legal and regulatory | Human rights | Consumer protection


Community ownership should be legally entrenched with operationalized mechanisms to reach remote communities

Explanation

Viola argues that community ownership should not just be legally established but should have actual operational mechanisms within African ecosystems. She emphasizes the need for digital infrastructure to reach remote communities and enable them to access benefits and engage with potential investors in AI technologies.


Evidence

She notes that many communities are in remote areas of the African continent and lack digital infrastructure to access benefits or engage with external parties wanting to develop AI technologies using their languages.


Major discussion point

Community Sovereignty and Ownership


Topics

Infrastructure | Development | Legal and regulatory


Agreed with

– Melissa Omino
– Elikplim Sabblah

Agreed on

Communities should have ownership and control over their language data rather than just providing consent


Local communities should control data from collection through usage with meaningful engagement throughout the process

Explanation

Viola argues for data sovereignty where local African communities control their data from the point of collection to the point of usage of AI technologies. She emphasizes that the entire process must be inclusive, with communities involved beyond just being data providers to participating in the whole development process.


Major discussion point

Community Sovereignty and Ownership


Topics

Human rights | Legal and regulatory | Development


Agreed with

– Lilian Diana Awuor Wanzare
– Elikplim Sabblah

Agreed on

Communities need capacity building and skills development to effectively participate in AI governance


Capacity building for procurement officers and legal framework updates are needed to handle AI procurement effectively

Explanation

Viola identifies a critical gap in government procurement processes where procurement officers lack understanding of AI technologies, making it difficult to appreciate and specify AI-related procurements. She argues for capacity building of key government offices and updating laws to capture AI procurement requirements with technical input beyond just lawyers.


Evidence

She references Kenya’s Public Procurement Act and explains how procurement officers struggle to understand intangible AI products, noting that current laws don’t appreciate such technologies and need review with technical capacity input.


Major discussion point

Government Role and Policy Frameworks


Topics

Legal and regulatory | Development | Economic


Agreed with

– Mark Irura
– Rutunda Samuel

Agreed on

Government procurement systems are inadequate for AI and language technology innovation


Disagreed with

– Lilian Diana Awuor Wanzare

Disagreed on

Primary barriers to effective language data governance


R

Rutunda Samuel

Speech speed

131 words per minute

Speech length

607 words

Speech time

276 seconds

AI strategies raise awareness, create working frameworks, add accountability, and help raise resources for language technology development

Explanation

Samuel argues that AI strategies and policies serve multiple important functions: they raise public awareness about AI components including language, create frameworks for governments and entities to follow, add accountability mechanisms, and facilitate resource mobilization. He emphasizes that these strategies create synergies and collaborations across sectors like health, leading to resource raising opportunities.


Evidence

He provides an example of how health sector professionals might want to use AI tools for medicinal plants and how having a policy framework enables them to know where to ask for help and creates collaboration opportunities.


Major discussion point

Government Role and Policy Frameworks


Topics

Legal and regulatory | Development | Economic


Government procurement requires mindset changes and willingness to take chances on emerging language technologies

Explanation

Samuel argues that government procurement faces challenges because governments are run by accountants who want concrete facts, while language technology for low-resource languages is still in early stages and difficult to prove with hard data. He emphasizes the need for governments to take calculated risks and change mentalities to deploy use cases and learn from them.


Evidence

He cites the example of Common Voice for Rwanda, where despite having no policy or AI ecosystem initially, taking a chance led to collecting 30,000 hours of data and developing more than 10 African languages over six years.


Major discussion point

Technical and Infrastructure Challenges


Topics

Economic | Development | Legal and regulatory


Agreed with

– Mark Irura
– Ochola Viola

Agreed on

Government procurement systems are inadequate for AI and language technology innovation


Disagreed with

– Melissa Omino

Disagreed on

Approach to funding and investment in African NLP development


E

Elikplim Sabblah

Speech speed

157 words per minute

Speech length

855 words

Speech time

325 seconds

Government should empower local communities to take ownership of data governance through inclusive strategy development

Explanation

Sabblah argues that governments should support and empower local communities to lead data governance, particularly for language data. He emphasizes that the development of national AI strategies should include communities through stakeholder consultations, ecosystem analysis, and research, which inherently gives communities ownership of the resulting governance frameworks.


Evidence

He describes Ghana’s draft national AI strategy development process, which includes reaching out to various groups to understand their needs, and mentions Pillar 5 of the strategy that provides guidelines for data collectors on collecting, storing, and sharing data.


Major discussion point

Community Sovereignty and Ownership


Topics

Legal and regulatory | Development | Sociocultural


Agreed with

– Melissa Omino
– Ochola Viola

Agreed on

Communities should have ownership and control over their language data rather than just providing consent


Communities need people with digital rights knowledge, data importance understanding, and linguistics skills

Explanation

Sabblah argues that communities need specific skill sets to effectively govern their language technologies, including understanding of digital rights, data importance, and linguistics. He emphasizes the need for people with indigenous knowledge and experience to contribute to AI development while understanding the broader purpose and benefits.


Evidence

He mentions research on women-led SMEs using AI and NLP tools, finding that many use AI-powered tools without knowing it, and notes community fatigue regarding data collection schemes due to lack of understanding of the purpose and benefits.


Major discussion point

Community Capacity Building and Skills Development


Topics

Development | Human rights | Sociocultural


Agreed with

– Lilian Diana Awuor Wanzare
– Ochola Viola

Agreed on

Communities need capacity building and skills development to effectively participate in AI governance


Outreach programs are needed to help communities understand AI’s purpose and overcome fatigue from data collection schemes

Explanation

Sabblah identifies community fatigue and desensitization regarding data collection as a major challenge that requires targeted outreach programs. He argues that communities need to understand that while they may not see immediate monetary benefits, their contributions serve a larger purpose that can benefit them and the nation as a whole.


Evidence

He references research showing that women entrepreneurs are tired of contributing to data collection schemes, and emphasizes that African culture of shared ownership of resources like proverbs and stories should be reflected in the models and data collection activities.


Major discussion point

Community Capacity Building and Skills Development


Topics

Development | Sociocultural | Human rights


M

Mark Irura

Speech speed

125 words per minute

Speech length

2384 words

Speech time

1139 seconds

Language is culture and identity, yet Africa’s digital identity is skewed, manipulated, and disproportionately commercialized

Explanation

Irura argues that while language represents culture and cultural identity, the digital representation of Africa through language data is being distorted and exploited commercially. He emphasizes that this creates a fundamental problem where African digital identity is not authentically represented.


Evidence

He notes that language data collection is characterized by significant disparity between large-scale publicly accessible resources and numerous smaller isolated projects, and mentions the awakening sentiment change amongst language communities regarding data governance issues.


Major discussion point

Data Governance and Licensing for African Language Data


Topics

Sociocultural | Human rights | Legal and regulatory


There is an awakening and sentiment change among language communities regarding inequitable investment and power dynamics in language technology

Explanation

Irura identifies a growing awareness among African language communities about issues of inequitable investment, lack of locally sensitive community control, and problematic power dynamics affecting language technology development. This represents a shift in how communities view their participation in language data initiatives.


Evidence

He mentions issues raised by speakers who crowdsource datasets including inequitable investment locally, sensitive community control, and dynamics around power impacting the use of language to build language technology.


Major discussion point

Community Sovereignty and Ownership


Topics

Human rights | Sociocultural | Economic


Government procurement systems present challenges for innovation, particularly for AI and language technology development

Explanation

Irura highlights that procurement systems create barriers for communities and developer communities to benefit from government investment in AI technologies. He suggests that procurement could provide opportunities but current systems are not designed to handle innovative technologies effectively.


Evidence

He mentions that procurement provides an opportunity for developer communities and notes that people in remote areas cannot benefit due to lack of infrastructure and connectivity, making procurement a critical issue for accessing innovation.


Major discussion point

Government Role and Policy Frameworks


Topics

Economic | Legal and regulatory | Development


For government, procuring traditional goods and AI systems is treated the same way, which creates fundamental procurement challenges

Explanation

Irura points out a critical flaw in government procurement processes where complex AI systems are treated with the same procedures as simple commodities. This approach fails to account for the unique requirements, specifications, and evaluation criteria needed for AI and language technology procurement.


Evidence

He references a friend’s observation that ‘for government, procuring a packet of milk and procuring an AI system is the same’ and states ‘It’s not supposed to be like that.’


Major discussion point

Government Role and Policy Frameworks


Topics

Legal and regulatory | Economic | Development


Agreed with

– Ochola Viola
– Rutunda Samuel

Agreed on

Government procurement systems are inadequate for AI and language technology innovation


Agreements

Agreement points

Communities should have ownership and control over their language data rather than just providing consent

Speakers

– Melissa Omino
– Ochola Viola
– Elikplim Sabblah

Arguments

Communities should be recognized as collective data stewards with inherent rights, not just sources providing consent


Community ownership should be legally entrenched with operationalized mechanisms to reach remote communities


Government should empower local communities to take ownership of data governance through inclusive strategy development


Summary

All three speakers strongly advocate for moving beyond traditional consent models to recognize communities as having inherent ownership rights over their language data, with legal frameworks and government support to operationalize this ownership.


Topics

Human rights | Legal and regulatory | Sociocultural


Communities need capacity building and skills development to effectively participate in AI governance

Speakers

– Lilian Diana Awuor Wanzare
– Elikplim Sabblah
– Ochola Viola

Arguments

Communities need understanding of AI model development, governance frameworks, and benefit structures to effectively participate


Communities need people with digital rights knowledge, data importance understanding, and linguistics skills


Local communities should control data from collection through usage with meaningful engagement throughout the process


Summary

There is strong consensus that communities currently lack the necessary knowledge and skills to effectively govern their language technologies, requiring targeted capacity building in technical understanding, digital rights, and governance frameworks.


Topics

Development | Sociocultural | Human rights


Current licensing and governance frameworks are inadequate and need fundamental reform

Speakers

– Melissa Omino
– Deshni Govender
– Lilian Diana Awuor Wanzare

Arguments

Alternative licensing models like the Litiyabodo Open Data License can provide frameworks for community benefit negotiation


Extractive practices occur both within and across borders, requiring policy protections based on existing cultural and indigenous rights


Community-centered data collection with informed consent and benefit sharing while maintaining open access


Summary

All speakers agree that existing licensing frameworks like CC0 are insufficient and that new models are needed that can balance open access with community rights and benefit sharing.


Topics

Legal and regulatory | Intellectual property rights | Human rights


Government procurement systems are inadequate for AI and language technology innovation

Speakers

– Mark Irura
– Ochola Viola
– Rutunda Samuel

Arguments

For government, procuring traditional goods and AI systems is treated the same way, which creates fundamental procurement challenges


Capacity building for procurement officers and legal framework updates are needed to handle AI procurement effectively


Government procurement requires mindset changes and willingness to take chances on emerging language technologies


Summary

There is clear agreement that current government procurement processes are not designed to handle AI technologies effectively, treating complex AI systems the same as simple commodities, requiring both capacity building and procedural reforms.


Topics

Legal and regulatory | Economic | Development


Similar viewpoints

Both speakers emphasize that communities should determine their own definition of benefits from language data use, rejecting the assumption that benefits must be monetary and advocating for community-defined, sustainable benefits.

Speakers

– Melissa Omino
– Lilian Diana Awuor Wanzare

Arguments

Language communities should define their own benefits rather than having monetary benefits imposed on them


Community-centered data collection with informed consent and benefit sharing while maintaining open access


Topics

Human rights | Sociocultural | Legal and regulatory


Both speakers advocate for strong local investment and robust legal protections to prevent exploitation by external actors, emphasizing the need for African-controlled AI development.

Speakers

– Melissa Omino
– Ochola Viola

Arguments

Governments need to invest locally in NLP rather than looking externally, and challenge local investors to fund model development


Legal frameworks must be robust with stringent data collection rules to protect communities from exploitation


Topics

Economic | Legal and regulatory | Development


Both speakers see national AI strategies as crucial tools for creating frameworks, raising awareness, and enabling community participation in governance, though they approach from different angles of implementation.

Speakers

– Rutunda Samuel
– Elikplim Sabblah

Arguments

AI strategies raise awareness, create working frameworks, add accountability, and help raise resources for language technology development


Government should empower local communities to take ownership of data governance through inclusive strategy development


Topics

Legal and regulatory | Development | Sociocultural


Unexpected consensus

The need for collaborative partnerships rather than top-down approaches

Speakers

– Lilian Diana Awuor Wanzare
– Elikplim Sabblah
– Melissa Omino

Arguments

Bridging capacity gaps requires collaborative partnerships between local ecosystems, government, funders, and international players


Government should empower local communities to take ownership of data governance through inclusive strategy development


Communities should be recognized as collective data stewards with inherent rights, not just sources providing consent


Explanation

Despite coming from different professional backgrounds (academic researcher, government advisor, and legal expert), there is unexpected consensus on rejecting traditional top-down approaches in favor of genuine partnership models that recognize community agency and expertise.


Topics

Development | Human rights | Legal and regulatory


The complexity of oral African languages creates unique technical challenges for AI development

Speakers

– Deshni Govender
– Lilian Diana Awuor Wanzare

Arguments

African languages being primarily oral creates NLP design challenges in codifying knowledge that isn’t easily digitized


Community-centered data collection with informed consent and benefit sharing while maintaining open access


Explanation

There is unexpected technical consensus between a policy expert and an academic researcher about the fundamental challenges that oral traditions pose for AI development, recognizing that African languages require different approaches than text-based systems.


Topics

Sociocultural | Infrastructure | Development


Overall assessment

Summary

The speakers demonstrate remarkable consensus across multiple critical areas: the inadequacy of current licensing frameworks, the need for community ownership and control over language data, the importance of capacity building, and the failure of existing government procurement systems to handle AI innovation effectively.


Consensus level

High level of consensus with strong implications for policy reform. The agreement spans technical, legal, and social dimensions, suggesting a mature understanding of the interconnected challenges facing African language data governance. This consensus provides a solid foundation for developing comprehensive solutions that address community rights, technical requirements, and governance frameworks simultaneously.


Differences

Different viewpoints

Approach to funding and investment in African NLP development

Speakers

– Melissa Omino
– Rutunda Samuel

Arguments

Governments need to invest locally in NLP rather than looking externally, and challenge local investors to fund model development


Government procurement requires mindset changes and willingness to take chances on emerging language technologies


Summary

Omino advocates for a complete shift away from external funding and emphasizes local investment as the solution, while Samuel focuses on government willingness to take risks and change procurement mindsets to support emerging technologies, regardless of funding source


Topics

Economic | Development | Legal and regulatory


Primary barriers to effective language data governance

Speakers

– Lilian Diana Awuor Wanzare
– Ochola Viola

Arguments

Communities need understanding of AI model development, governance frameworks, and benefit structures to effectively participate


Capacity building for procurement officers and legal framework updates are needed to handle AI procurement effectively


Summary

Wanzare identifies community knowledge gaps as the primary barrier, while Viola focuses on government institutional capacity and legal framework inadequacies as the main obstacles


Topics

Development | Legal and regulatory | Sociocultural


Unexpected differences

Role of external versus internal capacity building

Speakers

– Melissa Omino
– Elikplim Sabblah

Arguments

Governments need to invest locally in NLP rather than looking externally, and challenge local investors to fund model development


Communities need people with digital rights knowledge, data importance understanding, and linguistics skills


Explanation

While both speakers advocate for local empowerment, Omino strongly rejects external involvement and emphasizes complete local self-reliance, while Sabblah appears more open to external collaboration for capacity building. This disagreement is unexpected given their shared goal of community empowerment


Topics

Development | Economic | Human rights


Overall assessment

Summary

The speakers show remarkable consensus on fundamental goals – community sovereignty, equitable benefit sharing, and the need for better governance frameworks. However, they disagree significantly on implementation strategies, funding approaches, and the role of external actors


Disagreement level

Low to moderate disagreement level with high strategic implications. While speakers agree on problems and desired outcomes, their different approaches to solutions could lead to fragmented or competing initiatives. The disagreements reflect different professional backgrounds and regional experiences, suggesting need for integrated approaches that combine legal, technical, policy, and community perspectives


Partial agreements

Partial agreements

Similar viewpoints

Both speakers emphasize that communities should determine their own definition of benefits from language data use, rejecting the assumption that benefits must be monetary and advocating for community-defined, sustainable benefits.

Speakers

– Melissa Omino
– Lilian Diana Awuor Wanzare

Arguments

Language communities should define their own benefits rather than having monetary benefits imposed on them


Community-centered data collection with informed consent and benefit sharing while maintaining open access


Topics

Human rights | Sociocultural | Legal and regulatory


Both speakers advocate for strong local investment and robust legal protections to prevent exploitation by external actors, emphasizing the need for African-controlled AI development.

Speakers

– Melissa Omino
– Ochola Viola

Arguments

Governments need to invest locally in NLP rather than looking externally, and challenge local investors to fund model development


Legal frameworks must be robust with stringent data collection rules to protect communities from exploitation


Topics

Economic | Legal and regulatory | Development


Both speakers see national AI strategies as crucial tools for creating frameworks, raising awareness, and enabling community participation in governance, though they approach from different angles of implementation.

Speakers

– Rutunda Samuel
– Elikplim Sabblah

Arguments

AI strategies raise awareness, create working frameworks, add accountability, and help raise resources for language technology development


Government should empower local communities to take ownership of data governance through inclusive strategy development


Topics

Legal and regulatory | Development | Sociocultural


Takeaways

Key takeaways

Language data governance requires a shift from treating communities as data sources to recognizing them as collective data stewards with inherent ownership rights


Community-centered approaches must balance open sharing with equitable benefit distribution, allowing communities to define what benefits mean to them beyond just monetary compensation


Alternative licensing frameworks like the Litiyabodo Open Data License can provide mechanisms for community benefit negotiation while respecting cultural protocols


Government AI strategies should center culture as a main pillar and include communities as equal partners throughout the entire AI development lifecycle, not just at data collection stage


Local investment and funding in NLP model development is crucial for African countries to control their language technology destiny rather than relying on external actors


Capacity building is needed across multiple levels – from procurement officers understanding AI to communities understanding digital rights and data governance


The oral nature of African languages creates unique technical challenges for NLP that require specialized approaches to preserve cultural nuances and communal knowledge systems


Successful language technology governance requires collaborative partnerships between local ecosystems, governments, funders, and international players rather than siloed approaches


Resolutions and action items

Panelists committed to being available for follow-up engagement via LinkedIn for continued discussion on language data governance topics


Reference made to Creative Commons releasing preference signaling tools that work with CC licenses to allow data stewards to specify preferred uses


Ghana’s draft national AI strategy includes Pillar 5 providing guidelines for data collectors on collection, storage and sharing practices


Kenya’s AI strategy implementation plan exists with identified key partners and performance indicators, though kept from public view


Unresolved issues

How to effectively operationalize community data sovereignty mechanisms, especially for reaching remote communities with limited digital infrastructure


Specific implementation details for alternative licensing frameworks and how they would work in practice across different African contexts


How to reform government procurement processes to effectively handle AI and language technology acquisitions


Bridging the gap between data collection activities and actual model development/deployment that benefits local communities


How to address community fatigue from repeated data collection schemes while building sustainable engagement


Balancing the need for open data commons to drive innovation with community ownership and benefit-sharing requirements


How to quantify and preserve the intangible cultural assets embedded in oral language traditions within digital frameworks


Suggested compromises

Graduated consent models that require both individual consent and community consultation before data agreements


Dual licensing approaches that allow for both open sharing and community benefit requirements


Cross-subsidization models where commercial actors support public maintenance of open source AI resources


Equal collaboration partnerships with co-ownership structures rather than consultant relationships between foreign and local partners


Mandatory benefit sharing with flexible definitions allowing communities to choose between monetary compensation, capacity building, infrastructure investment, or priority access to developed products


Learning from existing frameworks like the Nagoya Protocol in biodiversity to create linguistic protocols for African language use in AI


Combining legal frameworks with cultural recognition elements to respect both copyright and indigenous knowledge systems


Thought provoking comments

I think it’s important also to point out that when we mention the concept of extractive practices, that it’s not always a foreign versus local context. And it’s not a cross-border issue, because I think that extractive practices often happen within countries and within the continent under the guise of the open collaboration concept.

Speaker

Deshni Govender


Reason

This comment challenged the common narrative that data exploitation is primarily a North-South or foreign-domestic issue. It introduced the uncomfortable reality that extractive practices can occur within African countries themselves, even when framed as collaborative efforts. This reframed the entire discussion from an ‘us vs. them’ mentality to a more nuanced understanding of power dynamics.


Impact

This shifted the conversation away from simplistic colonial framings and forced participants to consider more complex internal dynamics. It elevated the discussion to examine power structures more critically, regardless of geographic origin, and influenced subsequent speakers to focus on governance mechanisms rather than just external protection.


Ownership and consent are two completely different things. The traditional data sharing regime treats communities as sources rather than partners, and this extracts value while leaving these very communities with just the risks and harms.

Speaker

Melissa Omino


Reason

This distinction fundamentally challenged the prevailing approach to data governance in AI. Most frameworks focus on obtaining consent, but Melissa highlighted that consent without ownership still perpetuates extractive relationships. This was a paradigm-shifting observation that questioned the adequacy of current ethical AI practices.


Impact

This comment became a cornerstone for the rest of the discussion. Multiple speakers referenced this ownership vs. consent framework, and it directly influenced the conversation toward community data sovereignty and benefit-sharing mechanisms. It provided a theoretical foundation that other panelists built upon throughout the session.


The problem with having culture or language that is intended for oral knowledge, it means that it’s also shaped by tone, it’s shaped by cadence, it’s shaped by who is telling the story and what is that meaning that’s attached to it… And so it’s kind of hard to understand the asset that you’re working with if you’re not even sure how to put it into create an asset value or an asset form.

Speaker

Deshni Govender


Reason

This comment introduced a critical technical and cultural complexity that hadn’t been adequately addressed. It highlighted the fundamental challenge of digitizing oral traditions without losing their essence, and the difficulty of creating economic value from intangible cultural assets. This bridged technical NLP challenges with cultural preservation concerns.


Impact

This deepened the technical discussion and helped explain why traditional licensing and governance models are inadequate for African language data. It influenced the conversation toward more nuanced approaches that consider the unique characteristics of oral traditions, and helped other participants understand why simple digitization approaches fail.


I really think that the challenge is on government to move away from looking to other people to save us… We need to start thinking of ways where we can invest, locally invest in natural language processing so that we can then call the shots or really have the terms.

Speaker

Melissa Omino


Reason

This was a provocative call for self-reliance that challenged the dependency mindset often present in development discussions. It shifted focus from seeking external validation and funding to building internal capacity and control. The comment was particularly powerful because it connected funding, sovereignty, and strategic control.


Impact

This comment redirected the conversation from governance mechanisms to fundamental questions about economic independence and strategic autonomy. It influenced subsequent discussions about procurement, funding, and capacity building, with other speakers building on this theme of local ownership and investment.


Usually, I don’t know, I was talking to someone and say, government is run by accountants. And accountants, they want facts. They want, oh, what is this going to do? And then it’s still in the early stage of the language technology… So it’s very hard to show the facts. It’s something to say, oh, I’m going to take a chance and then I will see.

Speaker

Rutunda Samuel


Reason

This comment provided a refreshingly honest and practical perspective on the bureaucratic challenges of innovation in government. It humanized the procurement discussion by acknowledging the risk-averse nature of public administration and the inherent uncertainty in emerging technologies. The informal tone made complex policy issues more accessible.


Impact

This comment grounded the theoretical policy discussions in practical reality. It helped explain why good intentions often fail in implementation and influenced other speakers to consider the human and institutional barriers to their proposed solutions. It also led to Viola’s important points about capacity building for procurement officers.


For example, in Kenya, there’s the Public Procurement Act that outlines the process of procurement… Now, sometimes the procurement person does not, is not aware of AI, let alone even, you know, any other thing. So it will be difficult for such a person to even appreciate where you’re coming from, if you want to procure this.

Speaker

Ochola Viola


Reason

This comment identified a critical but often overlooked implementation gap – the disconnect between policy aspirations and administrative capacity. It highlighted how existing legal frameworks and human capacity constraints can undermine even well-intentioned AI strategies. This was a practical insight that connected legal, technical, and human resource challenges.


Impact

This comment brought the discussion full circle from high-level policy to ground-level implementation challenges. It influenced the conversation toward practical capacity building needs and helped other participants understand why technical solutions alone are insufficient without corresponding institutional development.


Overall assessment

These key comments fundamentally shaped the discussion by challenging simplistic narratives and introducing crucial complexities. Deshni’s point about internal extractive practices prevented the conversation from falling into colonial binaries, while Melissa’s ownership vs. consent distinction provided a theoretical framework that anchored much of the subsequent discussion. The technical insights about oral traditions added necessary depth to understanding why standard approaches fail, while the practical observations about government bureaucracy and procurement grounded theoretical discussions in implementation reality. Together, these comments elevated the conversation from abstract policy discussions to a nuanced examination of power, culture, technology, and practical governance challenges. They created a more sophisticated understanding of the ecosystem needed to support equitable language technology development in Africa, moving beyond simple solutions to acknowledge the interconnected nature of technical, cultural, legal, and institutional challenges.


Follow-up questions

How can AI training data licenses be adapted to protect cultural sovereignty and ensure equitable benefit for marginalized communities?

Speaker

Mark Irura


Explanation

This is a fundamental question about developing new licensing frameworks that balance open sharing with community protection and benefit-sharing


How do we balance the issue of open sharing vis-à-vis benefit sharing within licensing frameworks?

Speaker

Lilian Diana Awuor Wanzare


Explanation

This addresses the core tension between making data openly available for development while ensuring communities benefit from their contributions


How transparent is the licensing ecosystem to enable different views and different combinations that support different requirements by different people?

Speaker

Lilian Diana Awuor Wanzare


Explanation

This explores the need for flexible, transparent licensing systems that can accommodate diverse community needs and values


What type of benefit should flow to language communities for the use of their language datasets?

Speaker

Melissa Omino


Explanation

This requires direct consultation with language communities to understand their preferences for benefits, which may not be monetary


Can a licensing framework deliver community-defined benefits and respect for cultural protocols?

Speaker

Melissa Omino


Explanation

This examines whether legal frameworks like the Litiyabodo Open Data License can effectively address community needs and cultural values


How can governments support community-led governance rather than top-down approaches?

Speaker

Mark Irura


Explanation

This explores mechanisms for governments to partner with and empower communities in governing their own language technologies


How can public procurement systems be adapted to support innovation in language technology for local communities?

Speaker

Mark Irura


Explanation

This addresses the challenge of government procurement processes that don’t understand or accommodate AI and language technology innovations


What skills would communities need to build in order to govern their own language technologies effectively?

Speaker

Mark Irura


Explanation

This identifies the capacity building needs for communities to meaningfully participate in governing their language data and technologies


How to bridge the gap between building capacity for local communities in AI beyond collection and increasing usage of AI models within those same communities?

Speaker

Audience member (via chat)


Explanation

This addresses the challenge of moving communities from data contributors to active users and beneficiaries of AI technologies


How do we establish something like a linguistic protocol for use of African languages in AI, similar to the Nagoya Protocol for biodiversity?

Speaker

Deshni Govender


Explanation

This explores applying existing international frameworks for genetic resources to language resources, requiring further research into legal parallels


How do we codify and create asset value from oral knowledge that is shaped by tone, cadence, and communal use?

Speaker

Deshni Govender


Explanation

This addresses the technical and conceptual challenge of preserving the full cultural context of oral languages in digital formats


How can local African investors be challenged to fund not just data collection but model development?

Speaker

Melissa Omino


Explanation

This explores the need for local investment in the full AI development pipeline to maintain control over African language technologies


Will African data centres be accessible to African developers or primarily serve external users?

Speaker

Melissa Omino


Explanation

This examines whether infrastructure development will truly benefit local developers or primarily serve compliance needs for external actors


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.