Open Forum #23 Protecting Refugees Digital Resilience Info Integrity
Session at a glance
Summary
This discussion focused on protecting refugees through digital resilience and information integrity, examining how misinformation and hate speech online directly impact forcibly displaced populations. Katie Drew from UNHCR moderated a panel exploring solutions to strengthen digital protection, improve access to reliable information, and foster social cohesion through multi-stakeholder partnerships.
The conversation centered on a case study from South Africa, where panelists described rising xenophobia and anti-foreigner sentiment amplified through social media platforms and online groups like “Operation Dudula.” Mbali Mushathama explained how misinformation targeting foreign nationals creates real-world violence, particularly affecting refugee children in schools who face xenophobic bullying. Likho Bottoman from South Africa’s Department of Basic Education highlighted how anti-foreigner narratives spread beyond classrooms into communities, making curriculum-based solutions insufficient.
The panel presented an innovative “pre-bunking” approach through a board game called “Mzanzi Life,” designed to counter anti-foreigner sentiment before children are exposed to harmful narratives online. Michael Power described how this gamified intervention, combined with facilitated discussions, achieved remarkable results: student agreement with a statement about online manipulation rose from 43% to 86% after just three hours of engagement.
Participants discussed significant barriers to reporting hate speech and digital violence, including language barriers, fear of retaliation, and inadequate platform reporting mechanisms. Oluwaseun Adepoju emphasized the need for localized, anonymous reporting systems and partnerships between tech platforms and trusted local organizations. The discussion concluded with calls for continued multi-stakeholder collaboration, emphasizing that addressing information integrity challenges requires sustained partnerships across government, private sector, and humanitarian organizations to create comprehensive solutions rather than single technological fixes.
Key points
## Major Discussion Points:
– **Digital resilience challenges for forcibly displaced communities**: The discussion explored barriers to safe information access including vulnerability to misinformation, xenophobia, surveillance, censorship, lack of network access, and trust issues with digital platforms and reporting mechanisms.
– **Information risks and xenophobic narratives in South Africa**: Panelists examined how anti-foreigner sentiment spreads through online platforms like “Put South Africa First” and “Operation Dudula” movements, leading to real-world violence and affecting refugee children in schools through xenophobic bullying.
– **Pre-bunking strategies and the “Mzanzi Life” board game**: The team presented their innovative approach using a board game (rather than digital tools due to connectivity issues) to proactively counter false narratives before they take hold, showing significant success in changing student perceptions about anti-foreigner sentiment.
– **Multi-stakeholder partnerships and collaboration**: The discussion emphasized the importance of bringing together humanitarian organizations, government departments, private sector partners, and tech platforms to address the complex “wicked problem” of information integrity for displaced populations.
– **Reporting mechanisms and platform accountability**: Participants discussed the inadequacies of current hate speech reporting systems on social media platforms, highlighting issues with localization, trust, fear of retaliation, and the need for anonymous reporting options supported by local civil society organizations.
## Overall Purpose:
The discussion aimed to examine how multi-stakeholder partnerships can strengthen digital protection and information integrity for forcibly displaced people, focusing on practical solutions like pre-bunking strategies, improved access to reliable information, and fostering social cohesion while addressing xenophobia and misinformation.
## Overall Tone:
The discussion maintained a professional, collaborative, and solution-oriented tone throughout. While acknowledging serious challenges like xenophobia and digital violence, the conversation remained constructive and forward-looking, emphasizing practical innovations and partnerships. The tone was particularly encouraging when discussing the success of their board game intervention and the potential for scaling solutions across different contexts.
Speakers
**Speakers from the provided list:**
– **Katie Drew** – Works for UNHCR (UN Refugee Agency), specifically for UNHCR’s digital service on a work stream called information integrity, focusing on strengthening information integrity to mitigate information risks online that impact forcibly displaced and stateless people
– **Therese Marie Uppstrom Pankratov** – Head of the Humanitarian Innovation Program at Innovation Norway, previously worked with the Permanent Mission of Norway to the UN in Geneva, the Norwegian Refugee Council, Save the Children and UNHCR
– **Mbali Mushathama** – UNHCR protection associate working for UNHCR’s multi-country office, based in Pretoria, supports social cohesion in South Africa and advocates for refugee rights with a community-based approach
– **Michael Power** – Public interest lawyer, managing director and co-founder of ALT Advisory and Power Associates (South African office of Power Law Africa), serves as chairperson of the Power Law Africa Alliance, specializes in technology law, information rights, and digital governance
– **Likho Bottoman** – Senior official within the South African government department of basic education, holds the position of director of social cohesion and equity in education
– **Oluwaseun Adepoju** – Technology and innovation leader, managing partner at Co-Creation Hub, oversees Co-Creation Hub’s design lab and supports technology and society work streams, has a master’s in public policy with focus on technology policy and is a PhD researcher in creative technologies
**Additional speakers:**
– **Audience** – Multiple audience members who asked questions during the Q&A session, including Olivia (from The London Story, working in the India context), Pumzele (who works on disinformation in South Africa), and others
Full session report
# Protecting Refugees Through Digital Resilience and Information Integrity: A Multi-Stakeholder Approach
## Executive Summary
This comprehensive discussion examined the critical intersection of digital protection and refugee safety, focusing on how misinformation, hate speech, and inadequate digital infrastructure create significant risks for forcibly displaced populations. Held as an IGF (Internet Governance Forum) workshop, the session was moderated by Katie Drew from UNHCR’s information integrity work stream and brought together humanitarian practitioners, government officials, technology experts, and legal professionals to explore innovative solutions for strengthening information integrity and fostering social cohesion through collaborative partnerships.
The conversation began with an interactive Mentimeter session where audience members contributed key terms including “access,” “protection,” “vulnerability,” “xenophobia,” “misinformation,” and “digital literacy,” setting the stage for the discussion. The panel then focused on a compelling case study from South Africa, where rising xenophobia amplified through social media platforms has created tangible threats to refugee communities, particularly affecting children in educational settings. The panel presented an innovative board game intervention that achieved significant success in countering anti-foreigner sentiment before harmful narratives take hold.
## Key Participants and Perspectives
The discussion featured diverse expertise across sectors. **Katie Drew** from UNHCR’s information integrity work stream provided the humanitarian perspective on digital protection challenges. **Therese Marie Uppstrom Pankratov**, Head of the Humanitarian Innovation Programme at Innovation Norway, brought insights on multi-stakeholder partnerships and innovation processes, explaining how UNHCR responded to Innovation Norway’s annual call for proposals. **Mbali Mushathama**, a UNHCR protection associate based in Pretoria, offered ground-level experience working with refugee communities in South Africa’s complex social environment.
**Michael Power**, a public interest lawyer and technology governance expert, contributed legal and policy analysis alongside practical experience developing digital protection interventions. **Likho Bottoman** from South Africa’s Department of Basic Education provided the government perspective on addressing xenophobia in educational settings. **Oluwaseun Adepoju**, a technology and innovation leader from Co-Creation Hub, shared expertise on platform engagement and community-based reporting mechanisms.
## Digital Resilience Challenges for Displaced Communities
Katie Drew defined digital resilience as creating robust information ecosystems that allow displaced communities secure access to information and freedom of expression. As Therese Marie Uppstrom Pankratov articulated, “Safeguarding information integrity is one of the key challenges of our times… in humanitarian operations, we tend to talk about information being protection… when we have access to quality information, it helps keep us safe. And when we don’t, it causes a significant risk.”
The panel identified multiple barriers preventing refugees from achieving digital security. These challenges include vulnerability to misinformation campaigns, exposure to xenophobic narratives, surveillance concerns, censorship, inadequate network access, and fundamental trust issues with digital platforms and reporting mechanisms.
Mbali Mushathama emphasized that refugees seek safe spaces to share their stories and access reliable information in languages they understand. She stressed that digital literacy initiatives must be context-specific and utilize real-life examples from the community rather than generic approaches. This localization requirement emerged as a recurring theme throughout the discussion.
## Information Risks and Xenophobic Narratives in South Africa
The South African context provided a stark illustration of how online hate speech translates into offline violence. Mbali Mushathama described the rise of misinformation and hate speech targeting foreign nationals, particularly during election periods, with groups using coded language across South Africa’s 11 official languages to evade platform moderation systems.
She explained how movements like “Put South Africa First” and “Operation Dudula” (which means “to push out or to push away”) have leveraged social media to spread anti-foreigner sentiment, creating direct correlations between online incitement and physical violence in host communities. This digital-to-physical violence pipeline particularly affects refugee children in schools, who face xenophobic bullying that extends beyond educational settings into broader community tensions.
Likho Bottoman provided crucial context about South Africa’s multicultural complexity, noting that the country’s innate diversity management challenges create vulnerabilities that anti-foreigner narratives exploit. He observed that foreign nationals become scapegoats for socioeconomic problems, particularly given South Africa’s unemployment rate of just over 32% and limited public resources.
## Innovative Pre-bunking Strategies: The Mzanzi Life Board Game
One of the discussion’s most compelling elements was the presentation of an innovative “pre-bunking” approach through a board game called “Mzanzi Life,” similar to snakes and ladders. Michael Power explained that pre-bunking involves addressing harmful narratives before they take hold, rather than attempting to debunk false information after it has spread.
The team chose a board game format rather than digital tools due to connectivity issues and the need to reach populations without reliable internet access. This decision proved highly successful. As Michael Power reported, when students were asked whether they agreed with a statement about online manipulation, agreement increased from 43% to 86% after just three hours of engagement combining gamification with facilitated learning.
The board game approach addressed several critical challenges simultaneously. It provided an offline solution to online problems, created safe spaces for discussion about sensitive topics, and allowed for nuanced conversations about information manipulation that might be difficult to achieve through digital platforms alone.
Therese Marie Uppstrom Pankratov contextualized this success within broader innovation principles, explaining that “we call it a wicked problem, not because the problem is evil, but because it is really complex.” She noted that innovation processes require iterations, multiple testing, and redevelopments rather than linear solutions. The board game project exemplified this iterative approach, moving into a second phase of testing scheduled to conclude by August, followed by printing and distribution with digital facilitation guides.
## Multi-Stakeholder Partnerships and Collaboration
A central theme throughout the discussion was the necessity of multi-stakeholder partnerships for addressing complex information integrity challenges. Therese Marie Uppstrom Pankratov argued that solving these challenges requires partnerships across sectors with different expertise in context, technology, and behavioral sciences.
The panel demonstrated this collaborative approach in practice, bringing together humanitarian organizations, government departments, private sector partners, and civil society organizations. Each stakeholder contributed distinct capabilities: humanitarian organizations provided community access and protection expertise, government offered policy frameworks and educational infrastructure, private sector contributed technological solutions and innovation capacity, and civil society organizations served as trusted intermediaries.
The success of the South Africa project was attributed partly to this collaborative approach, with each partner contributing essential elements that no single organization could have provided independently. This model suggested potential for scaling across different contexts while maintaining local adaptation.
## Platform Engagement and Reporting Mechanisms
The discussion revealed significant inadequacies in current social media platform reporting mechanisms for vulnerable populations. Oluwaseun Adepoju provided particularly critical analysis, noting that “80% of the reporting platforms from the big techs are afterthoughts, after building the technology… The pressure was mounted by civil society organizations and technology activists to be able to do that. So it’s always challenging to even create the awareness about some of these tools for people using their platforms.”
These reporting systems suffer from multiple deficiencies: lack of localization, limited awareness among vulnerable populations, trust issues stemming from past failures to act on reports, and fear of retaliation among potential reporters. Oluwaseun Adepoju described cases where organizations had to work with multiple intermediaries to escalate situations to platforms because individuals refused to report directly due to past experiences where reports were made but no action was taken.
Despite these challenges, the panel identified some positive developments in platform engagement. Mbali Mushathama described productive engagement with platforms like TikTok and Meta to understand moderation systems and create opportunities for refugees to develop counter-narratives. However, she emphasized that meaningful engagement requires platforms to work with local civil society organizations as trusted intermediaries rather than expecting direct reporting from vulnerable populations.
## Policy and Regulatory Challenges
The panel identified significant gaps in current policy frameworks for protecting displaced populations from digital harm. Michael Power provided a stark assessment of existing systems: “The practice in supporting vulnerable groups who are subjected to hate speech, mis- and disinformation is wholly inadequate. There’s often re-victimization… from policing stations… The practice is simply not to involve people or to re-victimize or victim-blame through a series of processes.”
This re-victimization problem extends beyond platform reporting to institutional responses across law enforcement, legal systems, and government services. The panel noted that South Africa lacks specific policy addressing refugees and displaced people regarding digital protection, with scattered regulatory approaches across different government departments and jurisdictions.
An audience member raised the critical question of “how does UNHCR respond when harmful narratives are either generated or tolerated by state actors?” This highlighted the complex operational challenges facing humanitarian organizations in non-cooperative or hostile government environments.
Likho Bottoman emphasized that protecting refugee rights requires global conversation rather than just action by host countries due to international influences, suggesting the need for coordinated international responses rather than purely national solutions.
## Community Participation and Voice
A fundamental principle that emerged throughout the discussion was the necessity of meaningful refugee participation in developing solutions that affect them. Mbali Mushathama articulated this principle clearly: “We cannot make decisions for them and about them without them… unless you’re a refugee and you have that lived experience, we can’t really dictate what works and what doesn’t work.”
She emphasized that refugees want to be included in policy drafting conversations and need safe spaces for open dialogue and to report violations without re-traumatization. The discussion highlighted how community-based approaches and local civil society organizations serve as trusted intermediaries between vulnerable populations and formal systems.
This participatory approach influenced the methodology of the South Africa project, which prioritized refugee voices in identifying problems, developing solutions, and evaluating effectiveness. However, the panel also acknowledged challenges in ensuring authentic participation rather than tokenistic consultation.
## Audience Engagement and Key Questions
The session included significant audience participation, with several important questions raised. One audience member provided detailed context about India’s situation, describing how refugees face challenges accessing basic services and how misinformation spreads through WhatsApp groups. Another critical question addressed whether digital resilience initiatives should be prioritized when basic needs like food, water, and shelter aren’t met.
Mbali Mushathama responded that different contexts require different approaches, noting that “South Africa’s progressive legislation allows focus on xenophobia rather than basic service access.” Context, she emphasized, determines the appropriate intervention priorities: where basic services are already guaranteed by law, xenophobia rather than access to basic needs becomes the primary challenge.
## Global and Contextual Considerations
The discussion grappled with balancing global coordination with local contextualization. While refugee protection requires international cooperation, effective interventions must address specific local dynamics and cultural contexts.
The panel recognized that digital resilience initiatives must not overshadow urgent basic needs but should address context-specific challenges. The success of non-digital solutions in addressing digital problems challenged assumptions about the need for purely digital solutions to digital challenges.
## Implementation and Next Steps
The discussion concluded with several concrete action items. The Mzanzi Life board game project is moving into its second phase of testing, with plans for printing and distribution accompanied by digital facilitation guides.
Continued engagement with tech platforms aims to improve reporting mechanisms and empower refugee communities to create counter-narratives. The panel emphasized the need for developing anonymous reporting policies and accountability measures for those responsible for protecting vulnerable populations.
Implementation of combination approaches using both digital and non-digital methods emerged as a priority for reaching all population segments, recognizing that purely digital solutions exclude those without reliable internet access.
## Conclusion
This comprehensive discussion revealed both the complexity of protecting refugees in digital environments and the potential for innovative, collaborative solutions. The success of the South Africa project demonstrates that effective interventions are possible when humanitarian expertise, government support, private sector innovation, and community participation are combined strategically.
However, the discussion also highlighted significant systemic challenges that require sustained attention and resources. Current platform reporting mechanisms, policy frameworks, and institutional responses are inadequate for protecting vulnerable populations from digital harm.
The path forward requires continued multi-stakeholder collaboration, sustained investment in community-centered approaches, and recognition that digital protection is as essential as physical safety for displaced populations. The innovative pre-bunking strategies and participatory methodologies presented offer promising models for scaling across different contexts while maintaining local relevance.
Most importantly, the discussion reinforced that effective refugee protection in digital environments cannot be achieved through technological solutions alone but requires comprehensive approaches that address social, political, and economic dimensions of displacement and discrimination. The principle that refugees must be meaningfully included in decisions that affect them must guide future efforts to ensure that digital resilience initiatives genuinely serve the communities they aim to protect.
Session transcript
Katie Drew: Hi, good afternoon, everyone. Welcome back from lunch. I hope you had a good break and a great lunch. This is the session on protecting refugees, digital resilience and information integrity. So I hope everyone is on Workshop 2 channel and everyone can hear. There’s some very reassuring nods. So that’s great. Thanks. So hello, everyone. Thank you so much for your time today. I’m really excited to have a great panel with me and hopefully a very interesting panel discussion coming up. So today we’re going to examine information risks through the lens of forced displacement. So my name is Katie. I work for UNHCR, which is the UN Refugee Agency. I work for UNHCR’s digital service and on a work stream called information integrity, which is looking at how we can strengthen information integrity to mitigate against the challenges of information risks online that directly impact the lives of forcibly displaced and stateless people. So this includes refugees, asylum seekers, people who’ve been internally displaced within their own countries and people without a citizenship, so people who are stateless. We also address the impact that information integrity risks have to humanitarian operations and obviously this is a very challenging space. So hopefully today we can actually focus on some of the positives and some of the solutions that we’ve been working on when we look at addressing information risks. So today we are going to look at how we can strengthen digital protection, how we can improve access to reliable information, how we can uphold the freedom of expression and how importantly we can foster social cohesion and inclusion. So our key question is how can we do this collectively as well and I think that this really speaks to the purpose of the IGF. 
How can we really strengthen multilateral partnerships, how can we really engage with different community members, different groups within societies to address this challenge and hopefully this is a sort of exciting panel where we can talk about some of these solutions as well. But just to get started and I’m going to ask my colleague Ondine online to help us with this process. It’s a little bit of enforced audience participation and there might be some resistance to this after a heavy lunch but please do give us your views and ideas. We’re going to do a little bit of an online Menti. So hopefully you can see coming up on the screen a QR code. This might be a familiar process to you. If not, please go to menti.com and enter your code. So if you’re joining online and hello to participants online. If you’re joining online and you can’t you know take a photo of your own phone, please go to menti.com and enter the code and we should start having coming up a couple of questions. I’ll just wait for everyone to take a photo of the code. Please wave your hands if you’re still taking photos of the code. If not, we’ll move to the first Mentimeter question. There are no wrong answers, don’t worry. Just in your own word or words, can you tell me what you think we mean when we talk about digital resilience for forcibly displaced communities? What does this mean in relation to information integrity? And hopefully we’ll have a lovely word cloud up there when we when we think about digital resilience. We’ll give a few minutes for those online and those off. I can see at the bottom that it says zero out of 29 participants. Okay, thank you. Whoever was the first person to have the courage to hit send. Brilliant. We’ll wait for a few more answers to come through. I see access is coming across quite strongly there. That’s very interesting. Protection, regulation. We’re going to hopefully touch on a number of these topics today. Freedom of expression, safety, safe care. 
Sorry, it keeps jumping around. Access to information, I think we’ve already said. So I think that’s brilliant. Financial security, safety online, rights and duties. That’s also something that we’ll talk to today. Authenticity. So brilliant. You can see that when we talk about digital resilience, we’re really talking about the ability for forcibly displaced communities to have access to an information ecosystem that is robust. It meets their needs. It allows them to express their concerns, their stories. Tell them, you know, have a voice in a place where they feel that they have access to information securely and safely. And that’s what we’re talking about when we come to digital resilience. Now that sounds great, but we’ll move to the next question, Ondine. Obviously, there are some challenges when we talk about digital resilience for forcibly displaced people. And so can we think of some of the, what might be some of the barriers when we look at that sort of safety, security, freedom of expression and access to information ecosystem that we were considering. So we’ll just spend a little time thinking about the barriers. Oh, that’s interesting. So we’ve got vulnerability coming up. Capitalism. I’m not sure we’re going to be able to sort of knock that one on its head today. But xenophobia, that’s definitely something we will be able to touch on. Lack of network. Yeah. Some of the digital divide challenges as well. Fear and censorship and surveillance. I think that’s also coming across quite strongly here. Knowledge. Yeah. Untrusted trust. Trust and safety and access and reliability as well. Okay. So not wanting to focus too heavily on the challenges and hopefully moving quite quickly to sort of the solutions, I wanted to introduce Therese for our opening remarks. So Therese Marie Uppstrom Pankratov is the head of the Humanitarian Innovation Program at Innovation Norway. 
And Innovation Norway are our key donor that we’ve been working with on one of the case studies or on the case study that we’re going to present today. So Innovation Norway supports humanitarian organizations to enter into innovative partnerships with the private sector. And so we also have our private sector partner on the panel today, which I’m quite excited to talk about the collaboration that we’ve had in South Africa. Therese is a strong believer in partnerships across sectors being worth the effort. And I think, again, that speaks to the spirit of the IGF. She’s previously worked with the Permanent Mission of Norway, to the UN in Geneva, the Norwegian Refugee Council, Save the Children and UNHCR, the refugee agency. So quite a familiar topic when it comes to some of the challenges that forcibly displaced and stateless persons experience. And Therese, it’d be great to hear from you sort of some of the reasons why you are interested to support multi-stakeholder
Therese Marie Uppstrom Pankratov: partnerships. Thank you. Sure. Great. Thank you. I’m really looking forward to this session. I think already this week, I’ve attended quite a lot of sessions where information integrity has been the topic. And so I think this conversation will fall well within that discourse and help us focusing on one of the key topics I think we need to look at, which is people that are forcibly displaced. I think we all know that vulnerable populations are particularly affected by mis-, dis- and malinformation. And so this focus is really important. And I think we’ve also established throughout this week that safeguarding information integrity is one of the key challenges of our times. It’s augmented by technological development that has made it easier to develop and spread misinformation. And in humanitarian operations, we tend to talk about information being protection, and it sounds a bit humanitarian language-wise, but when we think about it, it makes a lot of sense in that we know that when we have access to quality information, it helps keep us safe. And when we don’t, it causes a significant risk. So as Katie said, I work for the Humanitarian Innovation Program. It’s a program that is fully financed by the Norwegian Ministry of Foreign Affairs, and it is managed by Innovation Norway. And we’re set up to encourage, support, and de-risk innovation partnerships between humanitarian organizations and the private sector that would like to design and develop solutions to humanitarian challenges. And we have an annual call for proposals. So about two years ago, UNHCR responded to that call, and they said that they had identified a lack of solutions in combating misinformation, disinformation and hate speech, targeting or affecting forcibly displaced and stateless people. 
And so they wanted to design an innovation process and find partners from other sectors to test the use of pre-bunking strategies that could proactively counter false and potentially harmful narratives before they take hold. And safeguarding information integrity in crisis has become one of those areas of work that is referred to as a wicked problem. I think it was illustrated really well now with the word cloud and all the challenges that you listed. So we call it a wicked problem, not because the problem is evil, but because it is really complex and it has a lot of interdependent factors and there is not one clear-cut fix that can help us address it. So when we want to solve key challenges like this, we need innovation and we need partnerships. We cannot go it alone. Solving wicked problems requires a deep understanding of the stakeholders involved. It requires a deep insight into various technologies or other possible solutions. It requires an innovative approach characterized by dialogue with actors from various sectors and expertise. And it requires trust-based partnerships and collaboration along a process shaped by design thinking. It is really complex. And in this case, the partnership was needed between those who have a deep insight into the context and needs of people affected by crisis and that are forcibly displaced, deep insights into social media, artificial intelligence and other technologies that are used to spread misinformation, and deep insight into behavioral sciences that can help us understand how people establish trust in information that they seek or receive. So one actor alone will not be able to master all these skills. And so a good way forward then is to design a multi-sectorial innovation partnership to address the challenge in an appropriate way. And I think traditionally often we’ve thought of innovation as a fairly linear process. 
So you identify a challenge, you develop a solution, you implement the solution, and the problem is solved. And I think this view of innovation has caused a lot of disappointment because it normally doesn’t look like this. I’ve seen a few in my work on humanitarian innovation, but normally an innovation process is a lot messier than that. They go in loops and circles, they require iterations, multiple testing, redevelopments, and so on. And when it comes to wicked challenges, the solution is also most likely not one shiny new thing. It is often multiple processes, partnerships, and technologies that when they come together help us address the challenge at hand. So this means that in addition to developing various solutions, we also need to develop an ecosystem of partnerships and solutions that can come together. And this is not an easy task. What I think is particularly inspiring about the initiative that we’ll hear more about today is how these various partnerships have come together around a common challenge, bringing their various expertise and asking how can we strengthen the digital resilience of people affected by crisis, keeping people safe and preventing harm. In the panel we’ll hear from government, private sector, and the humanitarian sector, and together their insights create a unique basis for an innovation process that can help us develop solutions. I think the panel discussion today will help us both understand how we can support the digital resilience of people affected by crisis and how we can shape innovation partnerships to solve wicked challenges. So I very much look forward to hearing from the panelists.
Katie Drew: Thanks so much, Therese. And so without much further ado, we’re going to move to the panel. I’ll do my best to introduce quite a formidable bunch of speakers today, so I’m very excited. So first I’ll start with my colleague Mbali. Mbali Mushathama is a UNHCR colleague. She is a protection associate. She works for UNHCR’s multi-country office, which covers a number of countries, and she’s based in Pretoria. So in her role, she really supports social cohesion in South Africa. She’s a strong advocate for the rights of refugees and helps to address protection challenges that they face, really ensuring a community-based and community-led approach. And so hopefully Mbali is going to touch on some of the ways in which we really try to bring in the community voice into the project. On her immediate left is Michael Power. So Michael is a public interest lawyer. He’s a managing director and co-founder of ALT Advisory and Power Associates, which is the South African office of Power Law Africa. He serves as the chairperson of the Power Law Africa Alliance. He specializes in litigation, legal advisory, and policy development, including a focus on technology law, information rights, and digital governance. And he works to advance constitutional rights and good governance in the digital age. And then at the end of the table, to his left, we have Likho Bottoman. Likho is a senior official within the South African government department of basic education. He holds the position of director of social cohesion and equity in education. And he’s dedicated to advancing inclusive, equitable, and socially cohesive schooling in South Africa. And then to my right, I have Oluwaseun Adepoju. Sorry, I had been practicing all day and I knew I wasn’t going to be able to get through that, so my apologies. He is a technology and innovation leader. He currently is serving as a managing partner at Co-Creation Hub.
And he oversees Co-Creation Hub’s design lab and supports several of Co-Creation Hub’s work streams, which include the technology and society work stream. He has a master’s in public policy, with a focus on technology policy, from the Korea Development Institute (KDI) School of Public Policy and Management, and is a PhD researcher in creative technologies at Auckland University of Technology. So, to start our conversation today, I’m going to pass to Mbali to provide us with an understanding of some of the information risks and digital protection risks that we’re talking about, specifically in South Africa, as we highlight to begin with that case study. So, would you like to give us an overview of those challenges?
Mbali Mushathama: Yeah, sure. Thank you so much and good afternoon to everyone once again. In my experience working with refugees in South Africa, we have observed over the years a rise in misinformation and hate speech, particularly targeted at foreign nationals. And we see a rise in this especially towards election periods. But we also have to look at the context of South Africa. South Africa is 31 years into its democracy. And during the early stages of its democracy, a number of commitments were made around equality, inclusion and access to resources. While South Africa has made significant strides in achieving this, we must also recognize that there are still significant gaps. An example of this is the high unemployment rate in the country, I think currently sitting at just over 32%, as well as the limited public resources. And so we find that where there are limited public resources, it can create a sense of competition. And that can also result in a lot of social tension, which is what we have observed within the host communities where refugees particularly reside. And so foreign nationals, including forcibly displaced persons, are oftentimes used as scapegoats for socioeconomic problems in South Africa. And we have seen a rise in many online groups such as Put South Africa First, as well as Operation Dudula. So Dudula is a vernacular term that basically means to push out or to push away. So these are groups that will use trendy words and trendy phrases to gain momentum and gain traction. And they have used a lot of these online platforms to incite violence and looting in the host communities. Just to highlight the seriousness of the hate speech that these groups perpetuate, Operation Dudula was recently taken to court by civil society organizations so that they can be held accountable for some of the actions that they are perpetuating.
So we’ve seen a direct correlation of online incitement to violence manifesting itself in the host communities as well. And this, unfortunately, also trickles down to young people. And young people are especially sensitive to this type of rhetoric, both in the online space as well as in their communities. So you will find that refugee students, for example, have reported cases of xenophobic-motivated bullying as well as targeting in schools. So these are refugee children who were born in South Africa or perhaps fled to South Africa at a very young age. They identify as South Africans, even. They speak the language. However, when they get to school, they find that they’re being bullied simply because they’re considered as not from South Africa. We have a lot of research that has been done on this, but also we have a lot of real-life examples. Thanks, Katie.

Katie Drew: Maybe I’ll pass now to Likho to talk a little bit around the social cohesion challenges in schools, and maybe the role of digital information risks in that environment. Thanks, Likho.
Likho Bottoman: I do want to start by saying that some of the issues that we find in South Africa relate to the fact that South Africa in itself is a very multicultural, multiracial and multilingual country on its own. And that on its own brings a set of innate challenges to diversity management in the country. And so when you then add foreign nationals into that whole compound of diversity elements in the country, you find that there are already existing complications, and the anti-foreigner narratives simply find themselves at the center of all of those complications. The second thing that we find is that even though as a country we have agreed that we will use schools as centers of life where young minds are being molded and being prepared for an inclusive society, school alone and curriculum alone is not going to solve the problem, because curriculum is only delivered in the classroom during school hours. And this child then goes back home, where these narratives are perpetuated. But they also then go into cultural and religious spaces in the community, where it is advocated for very strongly that anti-foreigner narratives are actually religiously correct. And so our curriculum is not able to help us shift the mindset, not just on issues of people of foreign nationality, but even on other issues related to HIV prevention or prevention of early unintended pregnancies and all of those things. And this particular issue is not immune to those issues that we find in South Africa. And so we’ve got a greater task as the basic education sector to begin to think about education beyond the classroom and understand ourselves as playing a role to educate not just the child in the classroom in front of the teacher, but to educate even the country, because we are a basic education department.
And that is going to take a while, because on the one hand it requires creative thinking around positioning education as a public education entity. But on the other hand, there is this thing that says it’s not your role to guide the value systems of the country, it’s not your role to guide the belief systems of the country, and so you need to start and end with the core business of education, which is literacy and numeracy and other school subjects. And so we find ourselves in a tug of war quite a lot, because now we’ve got to play this role of helping the country move forward, but at the same time understanding how far we can go and what our limits are as a sector.
Katie Drew: Thanks, Likho. Maybe I can just follow up on that a little bit with the point around digital resilience. So when we came to you and said we wanted to work on the concept of strengthening digital resilience, how did you think that was valuable, and how did you think it might support the national action plan? And maybe, for the purposes of the audience, just outline that a little bit for us. Thank you.
Likho Bottoman: Well, there had been a belief that in South Africa we’ve got a digital divide, we’ve got inequality, this and that. But actually, people who have undertaken research about access to technology in our country are coming up with some very interesting data that says that even the most rural people have got access to technology, in spaces where we never thought there was access to technology. And so for a very long time as a government, we didn’t think that we needed to address technology-facilitated discrimination of any kind. But what is happening now is that our population is growing ahead of us, because they’ve got access to technology and they are already absorbing the misinformation and disinformation. So it is up to us as government to rethink how we see ourselves and how we see our country, and begin to intentionally work on the disinformation and misinformation that exists in digital spaces. Because children are already there. South Africans are already there. And if we don’t, by the time we get to them, we will have lost them to the misinformation and disinformation.
Katie Drew: Great. Thank you. Thanks, Likho. Michael, I’m going to pass to you now. And just for the technicians at the back, I think we’re going to have a couple of slides on. We’ve been talking about this project in South Africa, and I think probably we need to outline a little bit what we mean. So if you could walk us through some of the approaches we’ve been testing together. Thanks, Michael.
Michael Power: Sure. So thank you, Katie, and to the entire UNHCR team, as well as Innovation Norway, for hosting us on this panel. And maybe thank you to the technical team at the back. This is my first ever silent seminar, and you’re doing a wonderful job. So thank you for that. You know, flowing from what Likho said, we’ve been working now with UNHCR for about 18 months, and we were simply asked: how do you change children’s perspectives? This was ultimately the macro question that we were asked, within the purview of anti-foreigner sentiment in South African schools. And we really went to the drawing board. For the parents in the room, for those who work with learners, changing a perception is not easy, particularly when someone is in an echo chamber, and the beliefs that are held are being reinforced within their communities, by their parents, potentially by their educators as well. And, you know, as Mbali said, we have a long history of xenophobia in South Africa for multiple socioeconomic reasons, and the fact that xenophobia is inherent in our community, I don’t think, can be dispelled at this stage. So we went to work. And, you know, talking about Internet penetration, I think context is always really important to understanding any situation, and our Internet penetration rates in South Africa, while they are increasing, remain one of the biggest challenges we have in schools. But let me first explain how pre-bunking traditionally works. You start with, for example, a brief explainer video. You explain what the problem is. You warn about certain types of narratives that are occurring in online spaces. You then have a preemptive refutation: you explain why those narratives are incorrect. And then, lastly, you microdose, to try to explain how you counterbalance that narrative, but the microdose is meant to be just that: it’s not meant to further perpetuate the harm. So you’re really and simply trying to get ahead of the story. You’re trying to get ahead of the narrative, in terms of pre-bunking.
So when we turned to the South African context, we had hoped to start digital. You know, a lot of this program is around digital. But we found in consultations with learners that our Internet penetration rates were too low and access to technology just simply wasn’t viable at that stage. So we got thinking, and we ultimately came up with the concept of a board game, right? So I’m not certain what slide is on the screen, but we can flip the slide. And our game is called Nzanzi Life. So what it is, for those who have played snakes and ladders before, it’s a very similar type of game. But you start the game by taking a character card. You become a character. And throughout the course of this game, we gently microdose against anti-foreigner content. So as this character, you go through life in the game. You have the ups. You have the downs. And you are rewarded for good behavior: you go up the game, you move towards your future. And for problematic behavior, you move backwards. And really, the difficulty and challenge with pre-bunking, I think the key to pre-bunking, is getting the narratives right. If you’re too blunt about the situation, you often lose your audience. It’s about nuance. It’s about subtlety. And it’s really about ensuring that, before children are exposed to these types of narratives, they already have some type of information to countermand it. You know, we learned today that Norway has a critical thinking day, where critical thinking is promoted in schools throughout the country. We don’t have that day, and it’s something for our department, most certainly, to think about. But this type of educational material, which is the board game, coupled with a facilitation guide, is the approach that we’ve looked into for pre-bunking this anti-foreigner sentiment that we’re seeing online. If you go to the last slide. And for us, what has been most telling is, you know, adopting a multi-stakeholder approach, ensuring extensive consultation.
The early results of testing this game have somewhat exceeded our expectations. You know, when we initially started piloting, we thought it would be just another game, something that would be thrown in the cupboard and not used. But when we started rolling it out in our test groups, we really started to see fantastic results. The most important result is the one at the bottom, and I’ll work back from there. On the question of perception, learners were surveyed after playing the game and going through a facilitated discussion on anti-foreigner sentiment. For the statement, “some people online are trying to influence me by using emotional or shocking messages that spread quickly,” after a three-hour engagement with learners, agreement changed from 43% to 86%. I can’t remember, personally, when my perception was changed by almost 50% in the course of a three-hour conversation. So the combination of gamifying a concept with facilitated learning seems to be a magic ingredient, at least in our context. The traditional notions of pre-bunking, which is watch a 15-second TikTok video or watch this explainer video for a minute, are traditional mechanisms that are still often adopted. We’ve seen a more substantive approach as yielding slightly more significant results. Katie, I hope that gives a bit of an overview of the pilot project, and I look forward to continuing the conversation.
Katie Drew: Excellent. And of course, if you’ve got any questions, we’ll be able to come to them at the end. Or please do grab Michael, Likho, Mbali, and myself to talk a little bit more about this case study. I have one of the old iterations of the game with me, so if you want to see some of the play cards and things like that, I think it would be worth looking into if you would like a little bit more information. I’d like to bring us back to looking at the engagement that we can have with different stakeholders. And Mbali, Michael mentioned the Jigsaw approach, Google’s Jigsaw, but could you tell us a little bit more about some of the engagement that we’ve had with the tech platforms on this project?
Mbali Mushathama: Sure, thank you. It’s a bit weird when I hear myself. I think another thing we need to take into consideration is the fact that refugees are on these various platforms. We can’t run away from that. The one thing we wanted to ensure was that they feel empowered to use these platforms, and that they feel safe enough to use these platforms as well. But also, since there is an existing narrative, we wanted them to put out their own stories, their own lived experiences. And so we’ve been closely engaging TikTok, who recently hosted a webinar dedicated to helping us learn more about how users can stay safe while using the platform. So the session focused on equipping participants with the tools and knowledge to navigate the platform safely, as well as to understand how TikTok’s moderation system works. The webinar also served as a platform for open dialogue. So we were afforded the opportunity to directly ask the TikTok team about some of the challenges that we’re seeing on the ground, because there are certain trends that we’re seeing that TikTok might not be aware of. There is certain subtle misinformation or hate speech that is being spread using coded language. So they might not use a particular word; they might change a word to another language. South Africa has about 11 official languages, so you have about 11 languages to play with to basically perpetuate hate speech. And the TikTok team was very helpful in offering insights into their community guidelines and their different reporting mechanisms. And for us, what was really encouraging was their openness to continued collaboration. We’re also looking at empowering the refugee community on how they can create their own content as well. So how can they then start making content to counteract what is already on the ground?
How can they also do this safely whereby they feel like they can express themselves? I saw that in the Mentimeter. I think one of the key things that kept coming up was freedom of expression. How can refugees… They should be afforded the right to a voice, right? They are contributing members of society. They have their own lived experiences. And so for us, really engaging platforms such as TikTok, engaging platforms such as Meta, where we’re able to say we have this marginalized group of people where oftentimes they are left behind. How do we bring them to the table so they can tell their own stories where they can also access safe information in a manner that does not endanger them?
Katie Drew: Thanks, Mbali. And maybe zooming out a little bit from the South Africa context and speaking a little bit more broadly, Oluwaseun, I’d love to hear a little bit more around some of these reporting mechanisms that we know the platforms have, if it does come to someone saying that they have been directly a recipient of hate online. Do you feel that displaced communities use these reporting platforms? And what could be some of the barriers that would stand in their way when it comes to reporting?
Oluwaseun Adepoju: Thanks, Katie. I think I’ll start by saying 80% of the reporting mechanisms from the big techs are afterthoughts, added after building the technology. For most of the social media platforms, for example, the reporting mechanisms for hate speech or digital violence were afterthoughts; the pressure was mounted by civil society organizations and technology activists to get them built. So it’s always challenging to even create awareness about some of these tools for people using the platforms. But also, as technology creators, sometimes we tend to create one-size-fits-all types of technology for people, which is not really helping vulnerable people use these platforms effectively. And when we talk about displaced people, we can have classifications: internally displaced people, forcibly displaced people, and refugees as well. And some of the reporting mechanisms are not really fitted to the different classifications that we might have. And in the work that we do every day, we’ve seen increasingly that some of these people are not even aware of some of these reporting mechanisms. But more important is familiarity, first of all, with what you call hate speech or an offensive opinion. We’ve worked with people who are so emotionally battered that when hate speech is used on them, they are not even aware; they are not emotionally sensitive to some of these things. And this brings the complexity of helping and supporting internally displaced people or refugees, first of all at the emotional level of classification. Number two is the language and localisation of some of these reporting platforms. And that is why we begin to see organisations creating different help desks offline where people can come to make reports, and then those organisations escalate to the platforms where some of these things have been perpetrated.
And I think it speaks to what you were saying around the fact that foreigners, internally displaced people or refugees coming into a new country sometimes don’t even understand the slang and the languages being used in hate speech against them. And we’ve also seen, in our work, fear of retaliation. We’ve seen a lot of people who have experienced hate speech or digital violence who don’t want to report because of fear of retaliation, or the power play involving those who have used these words against them. And a particular situation that we’ve seen, and this is a co-created story with some organisations we worked with recently: in a particular IDP camp in Nigeria, a particular person in charge, should we call them the warders or the people in charge, was making sexually violent comments towards a particular young lady. And it comes with intimidation; they don’t want them to speak out. So she took another IDP along to report to law enforcement, and the first thing the law enforcement did was to ask if the lady had been suggestive in her behavior. And that, you know, just discourages a lot of young ladies facing this kind of situation from reporting. So it also comes down to the responsibility of those we have appointed as being in charge of the refugees or IDPs in the first place. So there’s a wide range of, I would say, challenges facing refugees and internally displaced people. But I think more important is the localization of the platforms. For example, we addressed a situation where it was violent revenge porn.
That’s the word, right? Image-based digital violence, on a particular platform. And we had to work with two other organizations to escalate the situation to the platform, because this particular person didn’t want to go on the platform to use it, because of a lot of historical issues: we’ve seen people report in the past and nothing was done about it. So it shouldn’t be a case of afterthoughts. I think, in the development of these platforms now, it has to be done with integrity. It must build trust in people, but it must also be contextual in the way that people use it. In the kind of work that we do, we have a command center where people can come to us, those who trust us to escalate, and then we escalate to the platforms. And for those who need psychological support as well, we provide that.
Katie Drew: Great, thank you. I was going to ask you, as my follow-up question, about some of the practical steps to improve these reporting mechanisms, but you already got there. So along with the localization, I heard you talk about some of the partnerships that you were working with as well. Is there anything else? How can the tech sector help do this better? Is it by engaging local actors?
Oluwaseun Adepoju: I think the big techs need to engage the local actors, because the truth is that when there’s a breakdown in trust, people don’t want to use these platforms to report directly. And if people have a structure they already trust, either in the community or with other civil society organizations that they are comfortable speaking with, I think big tech should be able to come down and work with those organizations to address these issues better. There’s a lot of under-reporting happening because of these trust issues, and also because, in different contexts, our law enforcement has addressed some of these issues with levity. So there should be a community approach, a report-the-offender kind of situation in those communities. We organize people on WhatsApp; people can also reach out via other platforms as well. But the truth is that in every local government area you can have at least 25 civil society organizations or local actors who are genuinely interested in addressing some of these issues, and they are a great gateway for the big techs to get some of this reporting. And we should not limit this to what is happening on platforms online. There are offline situations as well that can be escalated via independent platforms outside of social media, and I think we should build more platforms outside of social media that encourage people to come out and speak about these things. Because on social media,
it’s either you resort to cancelling people or you keep quiet. So I think we need independent platforms for offline situations. There are even more offline situations, physical words, violent words or actions spoken to internally displaced people and refugees, that should be escalated. Because when we go the route of digital literacy, you might say some of these people are not on Facebook, or they are not on X, or they are not on any social media. But the offline harm is even greater, and that is what we see every day.
Katie Drew: So we’ve talked about reporting mechanisms, and Michael, I’m going to put you on the spot with quite a long question now. If we bring it back to the conversation around digital resilience, and how we can create an environment that supports the digital resilience of refugees and asylum seekers: how up to the challenge is the policy and regulation environment? When we look at that question, are they currently protected when it comes to policy and regulation?
Michael Power: Thanks, Katie. I mean, it’s a complex question, and I would really welcome a conversation with colleagues in the room who work on this and have different perspectives. I mean, you know, my view, at least from the context we’re in: I think the practice is really the challenge, and I’ll start there. I mean, the practice in supporting vulnerable groups who are subjected to hate speech, mis- and disinformation is wholly inadequate. There’s often re-victimization. You know, we’ve heard a series of lessons learned, and it’s not only the platforms here that are the culprits; from police stations too, the practice is simply not to involve people, or to re-victimize or victim-blame through a series of processes. So, given that the platforms have increased their dominance over an extended period of time, the state response practically has been wholly insufficient, and I think that is informed by the regulatory landscape. Regulating hate speech is difficult, right? We’re seeing, as we speak, that Twitter is challenging new laws, such as New York’s Stop Hiding Hate Act. You know, in our jurisdiction, South Africa, we do have legislation, and it is somewhat enforced. But I think that secondary vulnerability that a refugee has, the ability and the enabling space to come forward and report in the first instance, is something that we haven’t got past at this stage. So, on policy, at least from the South African perspective, there’s no specific policy that looks to refugees or internally displaced people on these particular questions. The broader framework is emergent, but it’s very much whack-a-mole and scattershot, right? We’re dealing with cybercrimes here.
We’re dealing with non-consensual image distribution here. We’re trying to look into platform power through competition policy. There’s nothing harmonized that is really there to create the supportive environment, and for me, I think the state needs to play a far bigger role. You know, colleagues have referenced independent mechanisms that are being used for reporting. One of the partners on our project, Media Monitoring Africa, runs a platform called Real411, which is an independent platform that you can report to, and that then pursues complaints with the platforms themselves. And I think there have been varying degrees of success. But again, it’s a question of scale, and it’s that scale that I think is the problem, both for the platforms when it comes to content moderation, or the erstwhile concept of fact-checking, which is a big problem we have at the moment, and for these independent platforms, which just don’t have the capacity to get through the volume. So, from a legal standpoint, at least in the South African context, there are safeguards in place. Those safeguards are severely impeded by the willingness of those in power to support people in vulnerable positions to actually pursue their rights through them.
Katie Drew: And I’m just going to turn back to you, because I know that you also work on policy and regulation and governance actions. Are there any recommendations that you would make to try to address some of these challenges? I know we spoke a little bit about the challenge of under-reporting and re-victimization, but at the policy and governance level, do you see any positive steps that could be taken to build that resilience?
Oluwaseun Adepoju: Recently, we’ve had invitations from a number of international organizations, but also some local actors, on how we can effectively make policies around anonymous reporting that actually work. There’s a lot of fear, depending on the level of violence or how deep the situation is. And I like what you said about re-victimization; we’ve seen that over and over again. Maybe the process started with somebody making a rape threat, you know, a statement against a particular person, and they didn’t report it; it continued, the rape eventually happened, and then they eventually reported, or somebody supported and encouraged them to report. And on the law enforcement side, it started with derogatory comments right from the police station; this person was shamed right from the police station. What do you expect to happen next? So most of the cases we’ve been involved in are around how we can make anonymous reporting effective, and whether we can also introduce policies around accountability for the people who are in charge of protecting vulnerable people, which in most parts of the world, not just in Africa, is very contextual and subjective as well. We’ve also seen situations where the people who are supposed to address these issues have their own independent way of seeing them, because we don’t have much of a policy framework to help address some of these issues. So you judge it based on what you feel: well, that is not rape, or that is not violence, I don’t believe it; a lot of personal opinion. So how do we introduce policies that take lessons from some of these practical issues? Number one, make anonymous reporting very easy and effective; number two, accountability for the people in charge of addressing some of these issues in government and in law enforcement.
And number three, the localization of the platforms. We’ve seen reporting platforms that are only in English, but somebody may only speak a particular language in a part of Nigeria; how do they report? And independent organizations also need to offer different mediums of reporting. I think policy in these three areas can be a fruitful place to get started. Thank you.
Katie Drew: So, Mbali, I’ll just pass back to you, because we’ve heard a little around the practical case study through the Mzansi Life game. We’ve also heard a little around the potential role of reporting mechanisms and regulatory policy, but you engage directly with refugees themselves. What practical ideas have they given you to strengthen their own digital resilience?
Mbali Mushathama: Thanks, Katie. Working with refugees on a daily basis has really shown me how resilient refugees are. Yes, we agree they are very traumatized because of the different things they’ve had to go through, from their country of origin, traveling to find a country of asylum where they can be accepted and safe. And I can’t agree more with my colleagues that we try to avoid re-traumatizing them. We don’t want refugees to keep telling the story over and over and over again, because those are their lived experiences, and no one wants to recount a traumatic event. However, one thing I have come to recognize and appreciate is that they want safe spaces to tell their stories. They want safe spaces to have an open dialogue. They want safe spaces to report any activities or incidents where they feel their human rights may have been violated. So what I’ve heard over and over again is that it’s great that we meet on these platforms to discuss these issues, but more than ever, we cannot make decisions for them and about them without them. They want to be included in these conversations when policies are being drafted. They want to be brought in, because unless you’re a refugee with that lived experience, we can’t really dictate what works and what doesn’t. So in my conversations with refugees, the number one thing is: create a safe space where they can bring in their ideas, because they’ve got a lot to share. They have a lot to say. To also echo my colleague: accessing reliable information in a language that they understand. So many times we’ve seen policy changes communicated that refugees don’t understand, and so they find themselves on the wrong side of the law sometimes simply because they genuinely did not understand what was being communicated to them.
And so I think for me also, how can they access reliable information in a language that they understand? Lastly, I think ongoing digital literacy that is context specific and uses real life examples from their own community. The context of South Africa is very different from the context of Nigeria, very different from the context of Ghana. How can they protect themselves in the context of South Africa? How do they stay safe in their host communities in the context of South Africa? So these are some of the few examples I would give. Thanks, Katie.
Katie Drew: Thanks, Mbali. Maybe we can pass one final question back to Likho. Likho, we’ve heard about some of the challenges, different ways of working in partnership, and some of the approaches you’ve seen, and I know you’ve played the game as well. If you were to give advice or guidance to government counterparts, both in South Africa and in other countries, thinking about whether this is something they could adopt to build digital resilience, what advice would you give, or what practical caution would you advise?
Likho Bottoman: I would rather say that when it comes to making use of digital platforms versus a game like the Mzansi Life board game, we need to understand, when we say yes to a fully digital approach, what we are saying no to. When we say yes to digital platforms, we’re actually saying no to the ability to reach those Michael is talking about, who still don’t have access to digital platforms. Therefore, if you want to drive a pre-bunking agenda, you need to use a combination approach. The one does not replace the other; they should be complementary to one another. That’s the first thing. The second thing is that the conversation about protecting the rights of refugees is probably not a conversation that should be had only by the country where they are, because there are other international influences that need to be taken into consideration. We need to have a global conversation as a global community about it. The third and last thing is that, yes, the fourth industrial revolution has put pressure on us to get onto technological and digital platforms, but we also have the responsibility to ensure that when we push children into those spaces, where they now need to access information about misinformation, disinformation, pre-bunking and so forth, we also have another, hidden responsibility: to protect them when they are online. Thanks, Katie.
Katie Drew: Thank you so much, and thanks to the panellists. We have a few moments now for some questions. I think this is the microphone people come to for questions, so don’t be shy, there’s one there. First of all, while people are hopefully making their way to the microphone, Ondine, were there any questions coming in the chat? Thanks, Ondine. No questions so far, so participants online, feel free to add your question and I’ll read it out for you. Yeah, thanks, Ondine. We’ll come back to you.
Audience: Hello, my name is Olivia and I’m coming from The London Story, and we work in the context of India. It’s a little bit different, so I don’t know if you can answer my questions, but I would also like to hear your experience on that. We work a lot on refugee protection and document cases. India is not a party to the Refugee Convention, and there are a lot of different groups of refugees which are also treated differently, especially in terms of hate speech online. We document that refugees in India are systematically targeted by disinformation and are labelled as terrorists, criminals, illegal infiltrators, et cetera, and these narratives are often pushed by the government, by the media, by state actors, and also by non-state actors. They are accused of all sorts of things, for example that they want to grab land. This also results in arbitrary detention, expulsion, all sorts of violence, and also communal violence. So I would like to ask whether UNHCR has experience and knowledge here: has UNHCR taken specific steps to target this disinformation online by state and non-state actors in India? And more broadly, how does UNHCR respond when harmful narratives are either generated or tolerated by state actors? I would also like to know what you do in contexts where the state did not ratify the convention and UNHCR has a limited mandate, like in India. Thank you.
Katie Drew: Do we have any more questions coming through, Ondine, if there are any on the screen? No, nothing. Any further questions? Yeah, please do ask questions afterwards. Thank you for your questions. So, personally, I’ve been working on the South Africa project, so obviously I’m not able to speak to or comment on the India example on this occasion. I would say that, when it comes to working in countries where we don’t have such an enabling environment as South Africa, for example, a lot comes down to the importance of being seen to be a trusted entity, and being able to have access and engagement. That comes from making sure we are really able to operate and have information available on channels about what services are available. A lot of what we’ve been doing at a global level is around making sure people understand what a refugee is, and how we tell these narratives around solidarity. But I think the sort of examples you gave speak to some of the work we needed to do in South Africa around identifying what these narratives are, and what the behavioural science behind them is. What fear is being exploited here? And so, for example, and Michael, I’ll pass back to you, some of the narratives we identified in South Africa are really deep-seated, and I’m sure in many other contexts there are hooks or narratives that are very easy for people to manipulate, because that’s where the fear is.
And I think my advice would be to bring it back to a behavioural science approach and identify what these grand narratives are, what fears and what levers people are pulling, and not to run after the debunking piece, but actually look at how you can allow people to recognise that maybe their fear is being manipulated. This is why pre-bunking has that warning piece at the very beginning. The moment you say to someone, “warning”, psychologically (and it’s been tested in a number of different languages, not just English) they are more receptive to the next piece of information. So: warning, your fear might be being manipulated. Then you can start to have a conversation that maybe opens up, so that now you can start to address the issue that, in this context, refugees are always aligned with criminalisation. I don’t know whether you want to build a little more on that point.
Micheal Power: Yeah, I don’t want to speak to a context I don’t know enough about, but when looking at these challenges, and we’re seeing it in the US, there’s often political alignment with a lack of safeguards, should we say, on the platforms. Then I think you need to look for mechanisms within the state that may be supportive. So, where there’s executive support for what’s going on: for example, in the Indian context I know the competition authority has recently given a relatively landmark ruling on Android TV, and you may need to look at somewhat radical strategies to test these types of questions. In the South African context, our competition authority is working on this, and you’ve got people in the Department of Education who value the need for this, but you do have other state departments which simply are not interested in these regulatory questions. So it’s really about trying to find those loose-knit partners at the right time, particularly where there’s a recalcitrant state. I think pre-bunking plays a really important role, as does civil society. To the specifics I can’t speak, but that broader alignment between strategic litigation, policy reform, and activism matters.
Katie Drew: I’m just letting the online colleagues know that that was a comment from the floor, but it wasn’t in the microphone; I could see Ondine looking at me as if she’d lost connection, so apologies, online participants. Thank you. Were there any further… There’s one more question, and then we can maybe move to Therese for closing remarks. No rush, no rush. There’s a question in the chat also, Katie. Okay. Ondine, do you want to read your question in the chat, and then we’ll come to the question on the mic? So, this is a question from Beric Serbisa, and I’m reading it out: how do we ensure that digital resilience initiatives for refugees and IDPs in Africa do not overshadow their urgent needs, like access to food, water, and shelter? When many displaced communities still lack basic necessities, is investing in digital tools a luxury or a necessity, and how can we do both without trade-offs? Great question, and then we’ll come to the question on the mic. Should I ask? Yes, please.
Audience: Hi, everyone. A very informative discussion. I personally worked for one of the big tech companies before, and before that I worked in an NGO in China providing educational assistance to refugee children. I found that, in addition to disinformation and misinformation, which are inaccurate information, some of the information about refugees may actually be true. For example, some negative news, some crime or violence that happened; it could happen to anyone, but that information spreads easily because of the algorithms, and also because people have cognitive biases, so the negative image just gets reinforced. It’s neither misinformation nor disinformation, and not really hate speech; it’s just that these facts may spread faster than the opposite. I’m curious, because I think that also does huge damage to the public’s perception of this vulnerable group. Have the people on the stage worked on, or thought about, approaches that could address this issue?
Katie Drew: Great. Thanks. I think we have one last question, and then we can come back to the panel. Thanks.
Audience: Hi. My name is Pumzile, and I do a lot of work around disinformation in South Africa. I’ve worked on a couple of projects around foreign influence operations, and xenophobia in South Africa has been a big concern of mine for a very long time. So I’m glad to see that this kind of work is taking place. But what does it look like in future, in the next couple of months? Is it going to continue? Right now the online space is not as busy as it can be, but heading into a local government election, it’s going to start. And the thing is, this doesn’t remain online like other kinds of disinformation campaigns; it spreads offline and results in violence and death. So what does it look like going forward? Thanks.
Katie Drew: Okay, I’m going to very quickly give the panellists one minute each to answer the questions. Mbali, can I come to you for prioritisation? Micheal, what does this look like next, what are the next steps? And then, do you want to take the piece around amplification and the algorithms that run away with the bad content as opposed to the more positive content? So, one, two, three, and then we’ll pass to Therese. Thank you. Sorry, can you please remind me of the prioritization question? Prioritization: why are we focusing on digital protection when we also have to make sure basic needs are met when it comes to refugee protection? Yeah. No, thank you very much. I think
Mbali Mushathama: for me, well, in the context of South Africa, we are fortunate in that our legislation is very progressive: refugees are afforded the right to work, they have the right to education regardless of their documentation status, and they have access to basic services, healthcare, and social grants as well. So I think the number one problem we’re seeing in South Africa is really just xenophobia, whereby, as much as there is access, there are limited resources, as I previously said, and because of this we have a lot of the host communities saying: we don’t have jobs because foreigners are here taking our jobs; our children don’t have spaces in school because foreigners are here taking all the spaces. So for us, the main priority is not necessarily access to basic services for refugees in South Africa, but rather how we ensure that in the country of asylum they are protected in various ways. We have a huge problem with access to documentation. A lot of times refugees will try to get themselves documented to access such services and legalize their stay in the country; however, there are a lot of systemic issues within our Ministry of Home Affairs, so this further perpetuates the narrative that we have a lot of foreigners who are undocumented and don’t care to get documented, which further incites violence. So I think this is why this is a priority for us in South Africa: xenophobia is truly, as Pumzile has said, a really huge problem. Michael, one minute. Yeah, sure. What next?
Micheal Power: So, Pumzile, thank you. There are a few things going on. Speaking briefly to our project, we’re now moving into our second phase, which we hope to conclude by August. The second phase is the last round of testing, and then we’re actually going to start printing and distributing this game, coupled with digital facilitation guides and potentially a digital game; we’re still testing to see if we can pull that off in time. So from the social pre-bunking approach, we’re hoping to move relatively quickly. Just for interest’s sake, in the South African context there are two broader developments. Our Competition Commission, in its provisional findings and some of its recent reporting, is likely to recommend an amendment to ECTA to create a degree of platform liability for the amplification of hate speech. A lot of people are not supportive of that amendment, and it’s quite contested in the South African space, but it’s likely to be on the agenda. And I know our National Human Rights Institution is looking into some of these questions as well, and they’ll probably be making announcements in due course. But there’s a lot afoot, both social and regulatory. So we are trying to move cognizant of the deep concerns, but equally, with deference to Likho, rolling this out in South African schools is also a process. I think we’re live to the urgency of it, undoubtedly. Thank you.
Katie Drew: I realize we’re being told that we’re strictly out of time. So we’re going to have to skip over the last question, but maybe we can find you afterwards to discuss the points around the algorithms that amplify narratives that spin out and drown out positive content. So I’ll ask you to stay behind, Oluwaseun. Therese, sorry, I think you probably have minus minutes, but it would be great to hear a wrap-up summary. I think we’ve got a couple of minutes to hear from you. Thank you.
Therese Marie Uppstrom Pankratov: Okay, great. Thank you so much for an enlightening and inspiring conversation. It’s really great to see the trust-based relationship that has been created amongst you as partners; I think that’s really key and essential to an innovation process with impact. It’s equally inspiring to hear the fundamental understanding of the need that you’re designing the innovation process around, the deep insight into the challenges around information integrity in the context of displacement. We’ve heard about the rise in online misinformation and hate speech, and the wide range of challenges faced by people who are forcibly displaced, and you’ve all emphasized the importance of community-based approaches and multi-stakeholder engagement. We’ve heard about the participatory process you’ve had in South Africa with the youth, how you’ve listened to them and iterated your solutions, and the importance of localization of digital platforms, and we’ve also heard about the potential of pre-bunking, which was new to most of the participants, so that’s really encouraging to hear. Now, I said at the beginning that an innovation process to solve wicked challenges rarely leads to a shiny new thing, and I think what we’ve heard about today is exactly this: multiple partnerships and multiple smaller solutions that come together and create impact. But we have also heard about a game, and shiny new things are always fun, and the significant impact that game seems to have already. I look forward to seeing how it is rolled out further. We’ve also heard about the need for safe spaces for people who are forcibly displaced to have their voices heard and share their stories, and I hope that’s something we take with us as we move forward. I hope you all leave inspired to engage in this process, and that we’ll see all of you and have a future opportunity to collaborate. So, thank you.
Katie Drew: Therese, thank you for summing up. I always think that’s one of the hardest tasks on a panel, so I think that was excellent. I’d like to say a huge thank you to Likho, Michael, Mbali, Oluwaseun, and Therese for their participation today, and thank you everyone for attending. It was great, and sorry we didn’t have more time.
Katie Drew
Speech speed
146 words per minute
Speech length
3578 words
Speech time
1464 seconds
Digital resilience means creating robust information ecosystems that allow displaced communities secure access to information and freedom of expression
Explanation
Katie Drew defines digital resilience as the ability for forcibly displaced communities to have access to an information ecosystem that is robust, meets their needs, and allows them to express their concerns and stories while having secure and safe access to information.
Evidence
Referenced the Mentimeter word cloud exercise where participants identified key concepts like access, protection, safety, freedom of expression, and authenticity as components of digital resilience
Major discussion point
Digital Resilience and Information Integrity for Displaced Communities
Topics
Human rights | Development | Sociocultural
Agreed with
– Therese Marie Uppstrom Pankratov
– Micheal Power
Agreed on
Multi-stakeholder partnerships are essential for addressing complex information integrity challenges
Therese Marie Uppstrom Pankratov
Speech speed
163 words per minute
Speech length
1195 words
Speech time
437 seconds
Vulnerable populations are particularly affected by misinformation and disinformation, making information protection crucial for safety
Explanation
Therese argues that vulnerable populations, including forcibly displaced people, are disproportionately impacted by false information. She emphasizes that in humanitarian operations, information is considered protection because quality information helps keep people safe while lack of it creates significant risks.
Evidence
Referenced humanitarian language concept that ‘information being protection’ and noted that technological development has made it easier to develop and spread misinformation
Major discussion point
Digital Resilience and Information Integrity for Displaced Communities
Topics
Human rights | Cybersecurity | Sociocultural
Solving ‘wicked problems’ like information integrity requires partnerships across sectors with different expertise in context, technology, and behavioral sciences
Explanation
Therese explains that complex challenges like safeguarding information integrity cannot be solved by single actors alone. These problems require deep understanding of stakeholders, technology insights, and behavioral sciences, necessitating multi-sectoral partnerships with trust-based collaboration.
Evidence
Described UNHCR’s partnership proposal to combat misinformation targeting displaced people, requiring expertise in crisis context, social media/AI technologies, and behavioral sciences
Major discussion point
Multi-Stakeholder Innovation Partnerships
Topics
Development | Legal and regulatory | Sociocultural
Agreed with
– Katie Drew
– Micheal Power
Agreed on
Multi-stakeholder partnerships are essential for addressing complex information integrity challenges
Innovation processes are messy and require iterations, multiple testing, and redevelopments rather than linear solutions
Explanation
Therese challenges the traditional linear view of innovation (identify challenge → develop solution → implement → problem solved) as often disappointing. She argues that real innovation processes involve loops, circles, iterations, and multiple testing phases, especially for wicked challenges.
Evidence
Contrasted traditional linear innovation thinking with the reality of messy, iterative processes and noted that solutions are often multiple processes, partnerships, and technologies working together
Major discussion point
Multi-Stakeholder Innovation Partnerships
Topics
Development | Economic | Sociocultural
Multi-sectoral partnerships bring together humanitarian organizations, private sector, and government to address complex challenges
Explanation
Therese highlights how the Innovation Norway program facilitates partnerships between humanitarian organizations and private sector to design solutions for humanitarian challenges. She emphasizes that different sectors bring unique expertise that creates a comprehensive basis for innovation.
Evidence
Described the Innovation Norway program’s annual call for proposals and how UNHCR responded with a partnership proposal, bringing together government, private sector, and humanitarian sector expertise
Major discussion point
Multi-Stakeholder Innovation Partnerships
Topics
Development | Economic | Legal and regulatory
Mbali Mushathama
Speech speed
154 words per minute
Speech length
1703 words
Speech time
661 seconds
Rise of misinformation and hate speech targeting foreign nationals, especially during election periods, with groups using coded language across 11 official languages
Explanation
Mbali describes how South Africa experiences increased misinformation and hate speech against foreign nationals, particularly during elections. She explains that perpetrators use coded language and switch between South Africa’s 11 official languages to evade detection by content moderation systems.
Evidence
Mentioned specific groups like ‘Put South Africa First’ and ‘Operation Dudula’ (meaning ‘to push out’), and noted that Operation Dudula was recently taken to court by civil society organizations for their actions
Major discussion point
Information Risks and Xenophobia in South Africa
Topics
Human rights | Sociocultural | Legal and regulatory
Online incitement directly correlates with physical violence in host communities, affecting refugee children in schools through xenophobic bullying
Explanation
Mbali establishes a clear connection between online hate speech and real-world violence, explaining how digital incitement manifests in physical communities. She particularly highlights how this affects refugee children who face xenophobic bullying in schools, even those born in South Africa or who identify as South African.
Evidence
Described how online groups use platforms to incite violence and looting in host communities, and provided examples of refugee students reporting xenophobic-motivated bullying despite speaking the language and identifying as South African
Major discussion point
Information Risks and Xenophobia in South Africa
Topics
Human rights | Cybersecurity | Sociocultural
Foreign nationals are scapegoated for socioeconomic problems due to high unemployment and limited public resources
Explanation
Mbali explains the root causes of xenophobia in South Africa, noting that despite 31 years of democracy and commitments to equality, significant gaps remain including high unemployment (over 32%) and limited public resources. This creates competition and social tensions where foreign nationals become scapegoats.
Evidence
Cited specific unemployment rate of ‘just over 32%’ and explained how limited public resources create a sense of competition leading to social tensions in host communities where refugees reside
Major discussion point
Information Risks and Xenophobia in South Africa
Topics
Development | Economic | Sociocultural
Refugees want safe spaces to tell their stories and access reliable information in languages they understand
Explanation
Based on her direct work with refugees, Mbali emphasizes that displaced people desire safe spaces for dialogue and storytelling rather than repeatedly recounting trauma. They also need access to reliable information in languages they can understand to avoid inadvertently violating laws due to miscommunication.
Evidence
Described how refugees often don’t understand policy changes and find themselves on the wrong side of the law simply because information wasn’t communicated in a language they understood
Major discussion point
Community Participation and Voice
Topics
Human rights | Sociocultural | Development
Agreed with
– Katie Drew
– Oluwaseun Adepoju
Agreed on
Community participation and inclusion of displaced people in decision-making is crucial
Digital literacy must be context-specific and use real-life examples from the community
Explanation
Mbali argues that digital literacy programs cannot be generic but must be tailored to specific contexts and use examples relevant to the community. She emphasizes that the context of South Africa differs from other countries, requiring localized approaches to help refugees protect themselves and stay safe.
Evidence
Contrasted contexts of South Africa, Nigeria, and Ghana, emphasizing how refugees need to understand how to protect themselves specifically in South African host communities
Major discussion point
Digital Resilience and Information Integrity for Displaced Communities
Topics
Development | Sociocultural | Human rights
Agreed with
– Oluwaseun Adepoju
– Likho Bottoman
Agreed on
Localization and context-specific approaches are necessary for effective interventions
Engagement with platforms like TikTok helps understand moderation systems and provides opportunities for refugees to create counter-narratives
Explanation
Mbali describes UNHCR’s collaboration with TikTok to help refugees navigate the platform safely and understand moderation systems. The engagement also focuses on empowering refugees to create their own content to counter existing negative narratives while expressing themselves safely.
Evidence
Mentioned TikTok hosting a dedicated webinar on platform safety, community guidelines, and reporting mechanisms, with opportunities for direct dialogue about challenges like coded language and hate speech in multiple South African languages
Major discussion point
Platform Engagement and Reporting Mechanisms
Topics
Human rights | Sociocultural | Legal and regulatory
Decisions cannot be made for refugees without including them in policy conversations and solution development
Explanation
Mbali strongly advocates for refugee participation in decision-making processes, emphasizing that policies and solutions cannot be developed about refugees without their direct involvement. She stresses that only those with lived experience as refugees can truly understand what works and what doesn’t.
Evidence
Emphasized the principle ‘we cannot make decisions for them and about them without them’ and noted that refugees have valuable insights to share in policy drafting conversations
Major discussion point
Community Participation and Voice
Topics
Human rights | Development | Legal and regulatory
Agreed with
– Katie Drew
– Oluwaseun Adepoju
Agreed on
Community participation and inclusion of displaced people in decision-making is crucial
Refugees need safe spaces for open dialogue and to report violations without re-traumatization
Explanation
Mbali highlights the importance of creating environments where refugees can engage in dialogue and report human rights violations without being forced to repeatedly recount traumatic experiences. She emphasizes avoiding re-traumatization while still providing avenues for refugees to seek help and justice.
Evidence
Noted that refugees don’t want to keep telling their trauma stories over and over again, but they do want safe spaces to report incidents where they feel their human rights may have been violated
Major discussion point
Community Participation and Voice
Topics
Human rights | Sociocultural | Legal and regulatory
Different contexts require different approaches – South Africa’s progressive legislation allows focus on xenophobia rather than basic service access
Explanation
Mbali explains that South Africa’s progressive legislation grants refugees rights to work, education, healthcare, and social grants regardless of documentation status. This context allows focus on xenophobia as the primary challenge rather than basic service access, though documentation remains a systematic problem.
Evidence
Detailed South Africa’s progressive refugee legislation providing access to work, education, healthcare, and social grants, while noting systematic issues with the Ministry of Home Affairs regarding documentation
Major discussion point
Global and Contextual Considerations
Topics
Human rights | Legal and regulatory | Development
Disagreed with
– Audience
Disagreed on
Prioritization of digital resilience versus basic needs for displaced populations
Likho Bottoman
Speech speed
127 words per minute
Speech length
910 words
Speech time
427 seconds
South Africa’s multicultural complexity creates innate diversity management challenges that anti-foreigner narratives exploit
Explanation
Likho explains that South Africa’s inherent multicultural, multiracial, and multilingual nature creates existing diversity management challenges. When foreign nationals are added to this complex diversity landscape, anti-foreigner narratives find themselves at the center of these pre-existing complications.
Evidence
Described South Africa as ‘very multicultural and multiracial or even multilingual country on its own’ with ‘innate challenges to diversity management’
Major discussion point
Information Risks and Xenophobia in South Africa
Topics
Sociocultural | Human rights | Development
Education must extend beyond classrooms since children return to environments where anti-foreigner narratives are perpetuated
Explanation
Likho argues that schools and curriculum alone cannot solve xenophobia problems because education only occurs during school hours. Children return home and to cultural/religious spaces where anti-foreigner narratives are strongly advocated as religiously or culturally correct, limiting curriculum effectiveness.
Evidence
Explained how curriculum is only delivered in classrooms during school hours, but children go back to homes and community spaces where anti-foreigner narratives are perpetuated and even considered ‘religiously correct’
Major discussion point
Pre-bunking Strategies and Educational Approaches
Topics
Sociocultural | Human rights | Development
Government must rethink its approach to technology-facilitated discrimination as populations already have access to technology
Explanation
Likho challenges the assumption about digital divide in South Africa, citing research showing even rural populations have technology access. He argues government must intentionally address digital misinformation and disinformation because people are already absorbing false information faster than government can respond.
Evidence
Referenced research showing ‘even the most rural people have got access to technology and in spaces where we never thought that there is access to technology’
Major discussion point
Policy and Regulatory Challenges
Topics
Development | Legal and regulatory | Infrastructure
Combination approaches using both digital and non-digital methods are necessary to reach all populations
Explanation
Likho advocates for complementary rather than replacement approaches, arguing that choosing fully digital methods means excluding those without digital access. He emphasizes that digital platforms and non-digital methods like board games should work together to drive pre-bunking agendas effectively.
Evidence
Explained the trade-offs of digital versus non-digital approaches and emphasized that ‘they should be complementary to one another’ rather than one replacing the other
Major discussion point
Pre-bunking Strategies and Educational Approaches
Topics
Development | Infrastructure | Sociocultural
Agreed with
– Mbali Mushathama
– Oluwaseun Adepoju
Agreed on
Localization and context-specific approaches are necessary for effective interventions
Protecting refugee rights requires global conversation rather than just action by host countries due to international influences
Explanation
Likho argues that refugee protection cannot be addressed solely by individual host countries because international influences must be considered. He advocates for global community dialogue rather than leaving the conversation entirely to countries where refugees are located.
Major discussion point
Global and Contextual Considerations
Topics
Human rights | Legal and regulatory | Development
Michael Power
Speech speed
162 words per minute
Speech length
1920 words
Speech time
709 seconds
Pre-bunking involves warning about narratives, explaining problems, providing preemptive refutation, and microdosing counternarratives before harmful content takes hold
Explanation
Michael explains the four-step pre-bunking methodology: providing a brief explainer video with a warning, explaining the problem, offering preemptive refutation of why narratives are incorrect, and microdosing counterbalancing information. The goal is to get ahead of harmful narratives before they take hold.
Evidence
Described the specific four-step process and emphasized that microdosing is meant to be minimal to avoid further perpetuating harm while still providing counterbalancing information
Major discussion point
Pre-bunking Strategies and Educational Approaches
Topics
Sociocultural | Human rights | Cybersecurity
The Mzanzi Life board game achieved a 43-percentage-point perception shift in three hours by combining gamification with facilitated learning
Explanation
Michael presents impressive results from their board game pilot, showing that learners’ agreement with the statement about online emotional manipulation increased from 43% to 86% after a three-hour engagement. He emphasizes that gamification combined with facilitated discussion appears to be a ‘magic ingredient’ for changing perceptions.
Evidence
Provided specific statistics showing perception change from 43% to 86% agreement on recognizing online manipulation, and described the game as similar to snakes and ladders with character cards and life scenarios
Major discussion point
Pre-bunking Strategies and Educational Approaches
Topics
Sociocultural | Development | Human rights
Agreed with
– Katie Drew
– Therese Marie Uppstrom Pankratov
Agreed on
Multi-stakeholder partnerships are essential for addressing complex information integrity challenges
Current practice in supporting vulnerable groups subjected to hate speech is wholly inadequate, often leading to re-victimization
Explanation
Michael criticizes the current system’s response to hate speech against vulnerable groups, arguing that practice is inadequate and often results in re-victimization or victim-blaming. He notes this occurs not just with platforms but also with policing stations and other institutions meant to provide support.
Evidence
Mentioned re-victimization and victim-blaming through various processes, and noted that platforms have increased dominance while state response has been insufficient
Major discussion point
Policy and Regulatory Challenges
Topics
Human rights | Legal and regulatory | Cybersecurity
Agreed with
– Oluwaseun Adepoju
Agreed on
Current reporting mechanisms and policy frameworks are inadequate for protecting vulnerable populations
South Africa lacks specific policy addressing refugees and displaced people regarding digital protection, with scattered regulatory approaches
Explanation
Michael explains that South Africa has no specific policy framework for refugees or internally displaced people on digital protection issues. The current regulatory landscape is fragmented, dealing with cybercrime, non-consensual image distribution, and platform power through competition policy without harmonized approaches.
Evidence
Described the regulatory approach as ‘whack-a-mole and scattershot’ with separate handling of cybercrimes, image distribution, and competition policy, noting nothing harmonized exists
Major discussion point
Policy and Regulatory Challenges
Topics
Legal and regulatory | Human rights | Cybersecurity
Oluwaseun Adepoju
Speech speed
157 words per minute
Speech length
1534 words
Speech time
584 seconds
Tech platforms’ reporting mechanisms are often afterthoughts that lack localization and awareness among vulnerable populations
Explanation
Oluwaseun argues that 80% of reporting platforms from big tech companies were developed as afterthoughts following pressure from civil society organizations. These platforms often use one-size-fits-all approaches that don’t serve vulnerable populations effectively and lack proper localization for different languages and contexts.
Evidence
Cited that ‘80% of the reporting platforms from the big techs are afterthoughts’ and mentioned reporting platforms only available in English while users may only speak local languages
Major discussion point
Platform Engagement and Reporting Mechanisms
Topics
Human rights | Legal and regulatory | Sociocultural
Agreed with
– Michael Power
Agreed on
Current reporting mechanisms and policy frameworks are inadequate for protecting vulnerable populations
Many displaced people are unaware of hate speech classification or fear retaliation when reporting incidents
Explanation
Oluwaseun explains that displaced people often lack awareness of what constitutes hate speech, with some being emotionally desensitized to abuse. Additionally, fear of retaliation and power dynamics prevent many from reporting incidents, compounded by past experiences where reporting yielded no results.
Evidence
Described people who are ‘seriously emotionally battered’ and ‘not emotionally sensitive’ to hate speech, and provided example of sexual violence in IDP camp where reporting led to victim-blaming by law enforcement
Major discussion point
Platform Engagement and Reporting Mechanisms
Topics
Human rights | Cybersecurity | Sociocultural
Agreed with
– Michael Power
Agreed on
Current reporting mechanisms and policy frameworks are inadequate for protecting vulnerable populations
Big tech companies need to engage local actors and civil society organizations to build trust and improve reporting
Explanation
Oluwaseun advocates for big tech companies to work with local civil society organizations that communities already trust, as there’s significant under-reporting due to trust issues. He suggests that in every local government area, there are numerous organizations that could serve as gateways for reporting to platforms.
Evidence
Mentioned their command center where people can report issues for escalation to platforms, and noted that ‘in every local government you can have at least 25 civil society organizations’ that could serve as intermediaries
Major discussion point
Platform Engagement and Reporting Mechanisms
Topics
Human rights | Development | Legal and regulatory
Agreed with
– Mbali Mushathama
– Likho Bottoman
Agreed on
Localization and context-specific approaches are necessary for effective interventions
Anonymous reporting policies and accountability measures for those responsible for vulnerable populations are needed
Explanation
Oluwaseun calls for effective anonymous reporting systems and accountability policies for those in charge of vulnerable populations. He argues that current policy frameworks are too subjective, allowing personal opinions to influence decisions about what constitutes violence or abuse.
Evidence
Described situations where officials make ‘derogatory comments right from the police station’ and noted that people judge cases ‘based on what you feel’ due to lack of policy framework
Major discussion point
Policy and Regulatory Challenges
Topics
Legal and regulatory | Human rights | Cybersecurity
Community-based approaches and local civil society organizations serve as trusted intermediaries for reporting and support
Explanation
Oluwaseun emphasizes the importance of community-based approaches where local organizations serve as trusted intermediaries between vulnerable populations and formal reporting mechanisms. He describes how their organization provides both escalation services to platforms and psychological support to victims.
Evidence
Described their command center model where people can report through trusted organizations, and mentioned providing psychological support alongside escalation services
Major discussion point
Community Participation and Voice
Topics
Human rights | Development | Sociocultural
Agreed with
– Katie Drew
– Mbali Mushathama
Agreed on
Community participation and inclusion of displaced people in decision-making is crucial
Audience
Speech speed
121 words per minute
Speech length
593 words
Speech time
293 seconds
Refugees in India are systematically targeted by disinformation and labeled as terrorists, criminals, and illegal infiltrators by both state and non-state actors
Explanation
An audience member from London Story working in India highlighted how refugees face systematic targeting through disinformation campaigns. These narratives are pushed by government, media, state actors, and non-state actors, resulting in arbitrary detention, expulsion, violence, and communal violence.
Evidence
Documented cases where refugees are accused of wanting to grab lands and other accusations, leading to arbitrary detention, expulsion, and various forms of violence including communal violence
Major discussion point
Information Risks and Xenophobia in Different Contexts
Topics
Human rights | Legal and regulatory | Sociocultural
UNHCR’s response to state-generated or tolerated harmful narratives needs clarification, especially in countries with limited mandate
Explanation
The audience member questioned how UNHCR responds when harmful narratives against refugees are either generated or tolerated by state actors. They specifically asked about UNHCR’s approach in contexts like India where the state hasn’t ratified the refugee convention and UNHCR has limited mandate.
Evidence
Referenced India’s non-party status to the refugee convention and different treatment of various refugee groups
Major discussion point
Global and Contextual Considerations
Topics
Human rights | Legal and regulatory | Development
Digital resilience initiatives should not overshadow urgent basic needs like food, water, and shelter for displaced communities
Explanation
An audience member questioned whether investing in digital tools might be a luxury when many displaced communities still lack basic necessities. They asked how to balance both digital resilience and basic needs without creating trade-offs.
Evidence
Highlighted that many displaced communities still lack access to food, water, and shelter
Major discussion point
Prioritization of Digital vs Basic Needs
Topics
Development | Human rights | Economic
Disagreed with
– Mbali Mushathama
Disagreed on
Prioritization of digital resilience versus basic needs for displaced populations
True but negative information about refugees spreads faster due to algorithms and cognitive bias, causing significant damage to public perception
Explanation
An audience member with big tech and NGO experience pointed out that beyond misinformation and disinformation, factual but negative information about refugees spreads more rapidly due to algorithmic amplification and human cognitive bias. This creates substantial damage to public perception of vulnerable groups even when the information is technically accurate.
Evidence
Referenced personal experience working for big tech companies and NGOs providing educational assistance for refugee children in China
Major discussion point
Algorithmic Amplification and Information Spread
Topics
Sociocultural | Human rights | Legal and regulatory
Xenophobia in South Africa escalates during election periods and spreads from online to offline violence, requiring urgent ongoing intervention
Explanation
An audience member working on disinformation in South Africa expressed concern about the cyclical nature of xenophobic campaigns, particularly during election periods. They emphasized that unlike other disinformation campaigns, xenophobic content doesn’t remain online but translates into physical violence and death, making it particularly dangerous.
Evidence
Referenced upcoming local government elections and noted that xenophobic disinformation results in offline violence and death, distinguishing it from other types of disinformation campaigns
Major discussion point
Information Risks and Xenophobia in South Africa
Topics
Human rights | Sociocultural | Cybersecurity
Agreements
Agreement points
Multi-stakeholder partnerships are essential for addressing complex information integrity challenges
Speakers
– Katie Drew
– Therese Marie Uppstrom Pankratov
– Michael Power
Arguments
Digital resilience means creating robust information ecosystems that allow displaced communities secure access to information and freedom of expression
Solving ‘wicked problems’ like information integrity requires partnerships across sectors with different expertise in context, technology, and behavioral sciences
The Mzanzi Life board game achieved a 43-percentage-point perception shift in three hours by combining gamification with facilitated learning
Summary
All speakers agree that complex challenges like information integrity for displaced communities cannot be solved by single actors alone and require collaborative approaches bringing together different sectors and expertise
Topics
Development | Legal and regulatory | Sociocultural
Community participation and inclusion of displaced people in decision-making is crucial
Speakers
– Katie Drew
– Mbali Mushathama
– Oluwaseun Adepoju
Arguments
Decisions cannot be made for refugees without including them in policy conversations and solution development
Refugees want safe spaces to tell their stories and access reliable information in languages they understand
Community-based approaches and local civil society organizations serve as trusted intermediaries for reporting and support
Summary
There is strong consensus that displaced communities must be directly involved in developing solutions that affect them, with emphasis on creating safe spaces for their voices and using trusted local intermediaries
Topics
Human rights | Development | Sociocultural
Current reporting mechanisms and policy frameworks are inadequate for protecting vulnerable populations
Speakers
– Michael Power
– Oluwaseun Adepoju
Arguments
Current practice in supporting vulnerable groups subjected to hate speech is wholly inadequate, often leading to re-victimization
Tech platforms’ reporting mechanisms are often afterthoughts that lack localization and awareness among vulnerable populations
Many displaced people are unaware of hate speech classification or fear retaliation when reporting incidents
Summary
Both speakers agree that existing systems for protecting vulnerable populations from digital harm are fundamentally flawed, leading to re-victimization and under-reporting
Topics
Human rights | Legal and regulatory | Cybersecurity
Localization and context-specific approaches are necessary for effective interventions
Speakers
– Mbali Mushathama
– Oluwaseun Adepoju
– Likho Bottoman
Arguments
Digital literacy must be context-specific and use real-life examples from the community
Big tech companies need to engage local actors and civil society organizations to build trust and improve reporting
Combination approaches using both digital and non-digital methods are necessary to reach all populations
Summary
All three speakers emphasize that one-size-fits-all solutions don’t work and that interventions must be tailored to local contexts, languages, and cultural specificities
Topics
Development | Sociocultural | Human rights
Similar viewpoints
Both speakers understand xenophobia in South Africa as a complex issue rooted in the country’s diverse social fabric, with online hate speech directly translating into offline violence, particularly affecting children in educational settings
Speakers
– Mbali Mushathama
– Likho Bottoman
Arguments
Online incitement directly correlates with physical violence in host communities, affecting refugee children in schools through xenophobic bullying
South Africa’s multicultural complexity creates innate diversity management challenges that anti-foreigner narratives exploit
Topics
Sociocultural | Human rights | Development
Both speakers advocate for sophisticated, iterative approaches to addressing misinformation that move beyond simple linear solutions to embrace complex, multi-step methodologies
Speakers
– Therese Marie Uppstrom Pankratov
– Michael Power
Arguments
Innovation processes are messy and require iterations, multiple testing, and redevelopments rather than linear solutions
Pre-bunking involves warning about narratives, explaining problems, providing preemptive refutation, and microdosing counternarratives before harmful content takes hold
Topics
Sociocultural | Development | Cybersecurity
Both speakers see platform engagement as essential but emphasize the need for meaningful collaboration that goes beyond surface-level consultation to include capacity building and trust-building with communities
Speakers
– Mbali Mushathama
– Oluwaseun Adepoju
Arguments
Engagement with platforms like TikTok helps understand moderation systems and provides opportunities for refugees to create counter-narratives
Big tech companies need to engage local actors and civil society organizations to build trust and improve reporting
Topics
Human rights | Legal and regulatory | Sociocultural
Unexpected consensus
The effectiveness of non-digital solutions in addressing digital problems
Speakers
– Micheal Power
– Likho Bottoman
Arguments
The Mzanzi Life board game achieved a 43-percentage-point perception shift in three hours by combining gamification with facilitated learning
Combination approaches using both digital and non-digital methods are necessary to reach all populations
Explanation
Despite the focus on digital resilience, there was unexpected consensus that analog solutions (like board games) can be highly effective in addressing digital information problems, challenging assumptions about the need for purely digital solutions to digital challenges
Topics
Development | Sociocultural | Infrastructure
The global nature of refugee protection requiring international rather than national solutions
Speakers
– Likho Bottoman
– Audience
Arguments
Protecting refugee rights requires global conversation rather than just action by host countries due to international influences
UNHCR’s response to state-generated or tolerated harmful narratives needs clarification, especially in countries with limited mandate
Explanation
There was unexpected consensus from both a government representative and civil society that refugee protection challenges transcend national boundaries and require coordinated international responses, even when discussing local implementation
Topics
Human rights | Legal and regulatory | Development
Overall assessment
Summary
The discussion revealed strong consensus around the need for multi-stakeholder partnerships, community participation, localized approaches, and recognition that current systems are inadequate. There was also unexpected agreement on the value of non-digital solutions and the global nature of refugee protection challenges.
Consensus level
High level of consensus with significant implications for policy and practice. The agreement suggests a mature understanding of the complexity of information integrity challenges for displaced populations and points toward collaborative, community-centered, and contextually-sensitive approaches as the way forward. The consensus also indicates readiness for innovative solutions that combine multiple methodologies and stakeholder engagement.
Differences
Different viewpoints
Prioritization of digital resilience versus basic needs for displaced populations
Speakers
– Audience
– Mbali Mushathama
Arguments
Digital resilience initiatives should not overshadow urgent basic needs like food, water, and shelter for displaced communities
Different contexts require different approaches – South Africa’s progressive legislation allows focus on xenophobia rather than basic service access
Summary
An audience member questioned whether digital resilience should be prioritized when basic needs aren’t met, while Mbali argued that in South Africa’s context, progressive legislation already provides basic services, making xenophobia the primary challenge rather than basic needs access
Topics
Development | Human rights | Economic
Unexpected differences
Scope of information integrity challenges beyond misinformation
Speakers
– Audience
– Panel speakers
Arguments
True but negative information about refugees spreads faster due to algorithms and cognitive bias, causing significant damage to public perception
Various arguments about misinformation and disinformation
Explanation
An audience member raised an unexpected point that the panel hadn’t directly addressed – the problem of factually accurate but negative information being algorithmically amplified, which creates different challenges than traditional misinformation/disinformation. This highlighted a gap in the panel’s focus on false information versus the broader challenge of information ecosystem manipulation
Topics
Sociocultural | Human rights | Legal and regulatory
Overall assessment
Summary
The panel showed remarkable consensus on most issues, with speakers largely agreeing on problems and complementing each other’s perspectives rather than disagreeing. The main areas of difference were around prioritization (digital vs basic needs) and implementation approaches (digital vs non-digital methods)
Disagreement level
Very low level of disagreement among panelists, which suggests strong alignment on the fundamental challenges and approaches to digital resilience for displaced populations. The few disagreements were more about context-specific priorities and tactical approaches rather than fundamental philosophical differences. This high level of consensus may indicate either genuine alignment in the field or potential groupthink, and could benefit from more diverse perspectives in future discussions
Takeaways
Key takeaways
Digital resilience for displaced communities requires multi-stakeholder partnerships combining humanitarian organizations, private sector, government, and civil society to address complex ‘wicked problems’
Pre-bunking strategies that proactively counter false narratives before they take hold are more effective than reactive debunking, with the Mzanzi Life board game demonstrating a 43-percentage-point perception shift in three hours
Information integrity challenges for refugees extend beyond digital spaces to offline violence, requiring both digital and non-digital intervention approaches
Tech platform reporting mechanisms are inadequate for vulnerable populations due to lack of localization, awareness, trust issues, and fear of retaliation
Refugees must be included in policy conversations and solution development rather than having decisions made about them without their participation
Context-specific approaches are essential – South Africa’s focus on xenophobia differs from other contexts where basic service access may be the priority
Education and digital literacy interventions must extend beyond classrooms to counter narratives perpetuated in homes and communities
Resolutions and action items
The Mzanzi Life board game project is moving into second phase testing to conclude by August 2024, followed by printing and distribution with digital facilitation guides
Continued engagement with tech platforms like TikTok and Meta to improve reporting mechanisms and empower refugee communities to create counter-narratives
Development of anonymous reporting policies and accountability measures for those responsible for protecting vulnerable populations
Implementation of combination approaches using both digital and non-digital methods to reach all population segments
Establishment of safe spaces for refugees to share stories and participate in policy development conversations
Unresolved issues
How to address algorithmic amplification of negative but factually accurate information about refugees that reinforces harmful stereotypes
Balancing digital resilience initiatives with urgent basic needs like food, water, and shelter in resource-constrained environments
Addressing state-sponsored or state-tolerated disinformation campaigns against refugees, particularly in countries that haven’t ratified refugee conventions
Scaling independent reporting mechanisms and civil society interventions to match the volume of online hate speech and misinformation
Preparing for increased xenophobic content during upcoming local government elections in South Africa
Harmonizing scattered regulatory approaches across different government departments and jurisdictions
Suggested compromises
Using combination approaches that complement rather than replace each other – digital platforms alongside offline interventions like board games
Engaging local civil society organizations as trusted intermediaries between vulnerable populations and tech platforms for reporting
Finding supportive mechanisms within resistant state structures, such as working with competition authorities or human rights institutions when executive support is lacking
Developing context-specific solutions that address local priorities while maintaining global coordination on refugee protection principles
Thought provoking comments
Safeguarding information integrity is one of the key challenges of our times… in humanitarian operations, we tend to talk about information being protection… when we have access to quality information, it helps keep us safe. And when we don’t, it causes a significant risk.
Speaker
Therese Marie Uppstrom Pankratov
Reason
This reframes information integrity from an abstract concept to a concrete protection mechanism, establishing the life-or-death stakes of misinformation for vulnerable populations. It provides the foundational framework that justifies why digital resilience is as critical as physical safety.
Impact
This comment established the conceptual foundation for the entire discussion, shifting the conversation from viewing digital protection as secondary to recognizing it as fundamental to refugee safety. It influenced how subsequent speakers framed their contributions around protection rather than just technology access.
80% of the reporting platforms from the big techs are afterthoughts, after building the technology… The pressure was mounted by civil society organizations and technology activists to be able to do that. So it’s always challenging to even create the awareness about some of these tools for people using their platforms.
Speaker
Oluwaseun Adepoju
Reason
This exposes a fundamental flaw in how technology platforms approach vulnerable user protection, revealing that safety mechanisms are reactive rather than built-in by design. It challenges the assumption that existing reporting tools are adequate or accessible.
Impact
This comment shifted the discussion from focusing on how to better use existing reporting mechanisms to questioning their fundamental design and effectiveness. It led to deeper exploration of trust issues, localization needs, and the necessity for community-based alternatives to platform-provided solutions.
We cannot make decisions for them and about them without them… unless you’re a refugee and you have that lived experience, we can’t really dictate what works and what doesn’t work.
Speaker
Mbali Mushathama
Reason
This challenges the traditional top-down approach to refugee assistance and asserts the principle of meaningful participation. It highlights how well-intentioned interventions can fail without authentic community involvement.
Impact
This comment reinforced the participatory approach throughout the discussion and validated the community-centered methodology used in their South Africa project. It influenced how other panelists discussed their work, emphasizing consultation and co-creation rather than external solutions.
When we say yes to a fully digital approach, what we are saying no to… when we say yes to digital platforms, we’re actually saying no to the ability for us to reach those that Michael is talking about who still don’t have access to digital platforms.
Speaker
Likho Bottoman
Reason
This introduces critical thinking about digital exclusion and the unintended consequences of digital-first solutions. It challenges the assumption that digital solutions are inherently better and highlights the need for hybrid approaches.
Impact
This comment prompted reflection on the board game approach as complementary rather than inferior to digital solutions. It influenced the discussion toward recognizing that effective interventions require multiple modalities to ensure inclusive reach.
There’s a wide range of challenges… more important is the localization of the platforms… we had to work with two other organizations to be able to escalate the situation to the platform, because this particular person didn’t want to go on the platform to use it because of a lot of historical issues… people reported in the past and nothing was done about it.
Speaker
Oluwaseun Adepoju
Reason
This reveals the breakdown of trust between vulnerable communities and formal reporting systems, showing how past failures create barriers to future help-seeking. It demonstrates the need for intermediary organizations and alternative pathways.
Impact
This comment deepened the conversation about why direct platform reporting fails and led to discussion of community-based intermediary solutions. It influenced the panel’s recommendations toward building trusted local partnerships rather than relying solely on platform improvements.
The practice in supporting vulnerable groups who are subjected to hate speech, mis- and disinformation is wholly inadequate. There’s often re-victimization… from police stations… The practice is simply not to involve people, or to re-victimize or victim-blame through a series of processes.
Speaker
Michael Power
Reason
This exposes systemic failures across multiple institutions (not just platforms) and introduces the concept of re-victimization as a barrier to reporting. It shows how the entire ecosystem of protection can become harmful rather than helpful.
Impact
This comment expanded the scope of the problem beyond platform design to institutional culture and practice. It influenced the discussion toward recognizing that technological solutions must be accompanied by broader systemic changes in how institutions respond to vulnerable populations.
Overall assessment
These key comments fundamentally shaped the discussion by challenging assumptions about digital solutions and institutional responses to refugee protection. They moved the conversation from a narrow focus on technology tools to a broader understanding of systemic barriers, power dynamics, and the critical importance of community participation. The comments created a progression from identifying the stakes (information as protection) to exposing systemic failures (afterthought design, re-victimization) to asserting principles (nothing about us without us) and practical considerations (digital exclusion, trust breakdown). This created a more nuanced and realistic framework for understanding digital resilience that acknowledged both technological and social dimensions of the challenge. The discussion evolved from presenting solutions to critically examining why existing approaches fail and what fundamental changes are needed in how we design and implement protection mechanisms for vulnerable populations.
Follow-up questions
How can we effectively make policies around anonymous reporting that is effective?
Speaker
Oluwaseun Adepoju
Explanation
This addresses the critical need for vulnerable populations to report incidents without fear of retaliation, which is a major barrier to addressing digital violence and hate speech
How can we introduce policies that take lessons from practical issues to make anonymous reporting very easy and effective?
Speaker
Oluwaseun Adepoju
Explanation
This builds on the need for evidence-based policy development that addresses real-world challenges faced by displaced communities
How can we introduce accountability for people in charge of addressing these issues in government and in law enforcement?
Speaker
Oluwaseun Adepoju
Explanation
This addresses the problem of re-victimization and subjective handling of cases by authorities who should be protecting vulnerable populations
How do we ensure that digital resilience initiatives for refugees and IDPs in Africa do not overshadow their urgent needs, like access to food, water, and shelter?
Speaker
Beric Serbisa (online participant)
Explanation
This raises the important question of prioritization and resource allocation when addressing both basic needs and digital protection for displaced populations
How can we address the issue of true but negative information about refugees that spreads faster due to algorithms and cognitive bias?
Speaker
Audience member (former big tech employee)
Explanation
This identifies a gap in current approaches that focus on misinformation/disinformation but don’t address how factual negative content can be amplified to damage perceptions of vulnerable groups
What does the future look like for xenophobia monitoring and intervention, especially heading into local government elections in South Africa?
Speaker
Pumzele (audience member)
Explanation
This addresses the urgent need for sustained monitoring and intervention as political cycles can amplify xenophobic narratives that lead to offline violence
How does UNHCR respond when harmful narratives are either generated or tolerated by state actors?
Speaker
Olivia (London-based participant)
Explanation
This highlights the complex challenge of addressing misinformation when it comes from or is supported by government entities, particularly in contexts where UNHCR has limited mandate
What specific steps can be taken to target disinformation online by state and non-state actors in contexts like India where the state hasn’t ratified refugee conventions?
Speaker
Olivia (London-based participant)
Explanation
This addresses the operational challenges of protecting refugees in non-signatory countries where legal frameworks and government cooperation may be limited
How can we build more platforms outside of social media that encourage people to speak about offline situations of violence and discrimination?
Speaker
Oluwaseun Adepoju
Explanation
This recognizes that much violence against displaced persons happens offline and current reporting mechanisms are inadequate for addressing these situations
How can we create a global conversation as a global community about protecting the rights of refugees rather than leaving it to individual host countries?
Speaker
Likho Bottoman
Explanation
This suggests the need for international coordination and shared responsibility in addressing refugee protection challenges that transcend national boundaries
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.
Day 0 Event #119 ROAMX Driving WSIS Implementation and Digital Cooperation
Session at a glance
Summary
This discussion focused on the ROAMX framework, UNESCO’s Internet Universality Indicators that measure progress on World Summit on the Information Society (WSIS) commitments and digital development goals. The ROAMX acronym stands for Rights, Openness, Accessibility, Multi-stakeholder participation, and cross-cutting issues like gender equality and sustainability. Dr. Tawfik Jelassi from UNESCO opened by explaining that while digital technologies evolve rapidly, 2.6 billion people remain offline, with significant disparities between high and low-income countries.
The framework has been implemented in over 40 countries since 2018, with second-generation indicators launched in 2024 that include new dimensions like AI governance and environmental impact. Brazil pioneered the framework’s implementation and recently completed assessment using the revised indicators, revealing both progress in digital public services and persistent inequalities, particularly affecting women and rural populations. Fiji piloted a new capacity-building workshop approach that revealed significant gaps in inter-governmental coordination, even after extensive consultation processes during strategy development.
Speakers emphasized that ROAMX serves not just as an assessment tool but as a comprehensive framework for the entire policy lifecycle, from planning to monitoring and evaluation. The discussion highlighted persistent challenges including data gaps, particularly around gender-disaggregated information, and the need for meaningful connectivity rather than basic access. Participants stressed the importance of multi-stakeholder engagement and the framework’s potential to support national and regional Internet Governance Forums. The session concluded with calls for broader adoption of ROAMX as a strategic tool for inclusive digital transformation that leaves no one behind.
Keypoints
## Major Discussion Points:
– **ROAMX Framework Overview and Evolution**: The discussion centered on UNESCO’s ROAMX (Rights, Openness, Accessibility, Multi-stakeholder participation, and cross-cutting issues) framework for measuring digital development and WSIS implementation. Speakers highlighted the launch of second-generation indicators in 2024, which include new dimensions like AI governance, environmental impact, and meaningful connectivity.
– **Country Implementation Experiences**: Detailed presentations of ROAMX applications in Brazil (as the first pilot country implementing revised indicators) and Fiji (featuring a new capacity-building workshop approach). Brazil’s assessment revealed advances in digital public services but persistent inequalities, while Fiji’s experience demonstrated gaps in inter-governmental coordination despite consultation efforts.
– **Data Gaps and Gender Digital Divide**: Multiple speakers emphasized the critical lack of disaggregated data, particularly sex-disaggregated data, which hampers effective assessment of digital inclusion. The persistent gender digital divide was highlighted as a key challenge, with women underrepresented not just as users but as creators, decision-makers, and leaders in technology sectors.
– **ROAMX as a Multi-Purpose Tool**: The framework’s versatility was emphasized – it serves not only for periodic national assessments but also as a planning tool for strategy development, implementation monitoring, and evaluation. Speakers noted its potential to connect with national and regional Internet Governance Forums and support evidence-based policymaking.
– **Integration with WSIS Plus 20 and Global Digital Cooperation**: The discussion positioned ROAMX as a strategic tool for measuring progress on WSIS commitments and supporting the upcoming WSIS Plus 20 review, emphasizing its role in ensuring digital transformation remains human-centered and rights-based.
## Overall Purpose:
The session aimed to demonstrate how UNESCO’s ROAMX framework can drive WSIS implementation and digital cooperation by providing concrete examples of country applications, showcasing the framework’s evolution with second-generation indicators, and positioning it as a key measurement tool for the WSIS Plus 20 review process.
## Overall Tone:
The discussion maintained a consistently professional and collaborative tone throughout. It was informative and forward-looking, with speakers sharing practical experiences and lessons learned. The tone was optimistic about the framework’s potential while being realistic about persistent challenges like data gaps and digital divides. There was a strong emphasis on multi-stakeholder collaboration and inclusive approaches, reflecting the participatory nature of both the ROAMX framework and the broader Internet governance community.
Speakers
**Speakers from the provided list:**
– **Tatevik Grigoryan** – Session moderator, UNESCO staff member working on the ROAMX initiative
– **Tawfik Jelassi** – Assistant Director General of UNESCO for Communication and Information, delivered keynote remarks
– **Fabio Senne** – Project Coordinator at the Regional Centre of Studies on Information and Communication Technologies (CETIC.br), UNESCO Category 2 Institute; involved in initial IUI framework development and Brazil’s pilot assessments
– **Davide Storti** – Program Specialist at UNESCO for Digital Policies and Transformation, coordinates UNESCO’s WSIS-related activities (participated online)
– **Dorcas Muthoni** – Founder and Chief Executive Officer of Open World, a specialist computer software company established in Kenya; works on gender digital divide and women in technology leadership (participated online)
– **Guy Berger** – Described as “the father of the ROAMX” and the regional ROAMX indicators; audience member who provided commentary
– **Chris Buckridge** – Independent consultant, analyst, and commentator in the Internet governance and digital policy space; worked for over two decades with regional Internet registries, including APNIC; current MAG (Multi-stakeholder Advisory Group) member
– **Anriette Esterhuysen** – Human rights defender and computer networking pioneer from South Africa; former chair of the Multi-stakeholder Advisory Group of the IGF; former executive director of the Association for Progressive Communications (APC); involved in ROAMX development and implementation
**Additional speakers:**
– **Camilla Gonzalez** – UNESCO colleague working on the ROAMX initiative (participated online; mentioned but did not speak in the transcript)
Full session report
# UNESCO ROAMX Framework: Driving WSIS Implementation and Digital Cooperation – Discussion Summary
## Introduction and Session Context
This early morning “day zero” session at IGF 2025 examined UNESCO’s ROAMX framework and its role in driving WSIS implementation and digital cooperation. The hybrid online and in-person discussion, moderated by Tatevik Grigoryan from UNESCO, brought together international experts to share implementation experiences and explore the framework’s potential applications.
After resolving initial technical difficulties with headset channels, the session proceeded with presentations from UNESCO officials and implementers from Brazil, Fiji, and Kenya, followed by commentary from Internet governance experts.
The ROAMX acronym represents five core dimensions: Rights, Openness, Accessibility, Multi-stakeholder participation, and cross-cutting issues (the X) including gender equality and sustainability. Since its launch in 2018, the framework has been implemented in over 40 countries, with second-generation indicators introduced in 2024.
## Opening Keynote: Technology and Digital Divides
Dr. Tawfik Jelassi, UNESCO’s Assistant Director General for Communication and Information, opened by quoting historian Melvin Kranzberg: “technology is neither good nor bad, nor is it neutral.” He emphasized that technology’s impact depends fundamentally on human choices, values, and system design.
Jelassi highlighted persistent global digital inequalities, noting that 2.6 billion people remain offline worldwide. The disparities are stark: while 93% of populations in high-income countries use the internet, only 27% in low-income countries have access. He positioned ROAMX as a strategic tool for evidence-based policymaking that has already demonstrated concrete policy outcomes across its 40+ country implementations.
## Framework Evolution and Applications
Davide Storti, UNESCO’s Programme Specialist for Digital Policies and Transformation, explained that ROAMX serves as a translation mechanism, converting WSIS ideals into measurable outcomes while providing a common language for diverse stakeholders in digital governance.
The second-generation indicators launched in 2024 incorporate new dimensions including artificial intelligence governance and environmental impact assessment. Storti emphasized that ROAMX’s value extends beyond periodic assessments to support the entire policy lifecycle, from strategy development through implementation monitoring and evaluation.
## Country Implementation Experiences
### Brazil: Comprehensive Assessment and Findings
Fabio Senne from CETIC.br detailed Brazil’s experience as both the original pilot country in 2018 and the first to implement second-generation indicators. Brazil’s assessment revealed significant advances in digital public services, with the gov.br platform now offering 4,500 services to 160 million users.
However, the assessment also uncovered persistent inequalities. Disaggregated data revealed that black women showed substantially lower levels of meaningful connectivity compared to other demographic groups, highlighting intersections of racial and gender inequalities in digital access.
Senne emphasized the critical importance of multi-stakeholder engagement, which improved data quality by accessing information from civil society and private sector sources that government data alone could not provide. The assessment also revealed coordination challenges within government structures, with participation remaining fragmented across different departments despite Brazil’s established multi-stakeholder frameworks.
Brazil committed to completing multi-stakeholder validation of their revised assessment and launching the final report by September-October.
### Fiji: Capacity Building and Coordination Gaps
Anriette Esterhuysen, a human rights defender and computer networking pioneer from South Africa, shared insights from Fiji’s implementation using a new capacity-building workshop approach. The most striking finding was a significant gap in inter-governmental coordination: despite an extensive eight-month consultation process during development of Fiji’s national digital strategy, two-thirds of government departments were unaware of the strategy’s existence.
This discovery highlighted a critical disconnect between policy development processes and actual implementation awareness across government structures. Esterhuysen noted that while the strategy development had involved extensive consultation, the reality of cross-government awareness was far more limited than anticipated.
The Fiji experience demonstrated ROAMX’s potential beyond assessment, with Esterhuysen observing that the framework “works extremely well in assessing a strategy” and “could work as well as a planning tool” throughout the full policy lifecycle.
### Kenya: Gender Digital Divides and Data Gaps
Dorcas Muthoni, founder and CEO of Open World in Kenya, highlighted the persistent gender digital divide and critical lack of sex-disaggregated data across multiple dimensions of digital participation. This data gap makes it difficult to assess true gender disparities in technology adoption, usage patterns, and particularly leadership roles within the technology sector.
Muthoni emphasized challenges women face in progressing to technology leadership positions, describing “lonely career journeys” with limited role models and support systems. This leadership gap means women’s perspectives are underrepresented in technology design, policy development, and strategic decision-making processes.
## Expert Commentary and Framework Applications
### Multi-stakeholder Engagement and Data-Driven Governance
Chris Buckridge, an independent consultant and Internet governance expert, articulated the relationship between inclusive and evidence-based approaches: “data-driven, it cannot be comprehensive unless it’s inclusive… But at the same time, inclusive governance can’t be effective, can’t be practical unless it is data-driven.”
Buckridge highlighted ROAMX’s potential to foster sustainable multi-stakeholder engagement and complement national and regional Internet governance initiatives, including his experience with EuroDIG events.
### Digital Literacy and Rights Education
Esterhuysen emphasized limitations of current digital literacy approaches, noting that many programs are “vendor-driven or device-focused” and fail to address broader digital citizenship complexities. She advocated for comprehensive approaches connecting rights education and civic education with technical skills development.
Esterhuysen also noted communication challenges with terms like “Internet Governance Forum,” observing that people find the concept difficult to understand and don’t grasp that it involves all aspects of digital cooperation, not just narrow technical governance.
### Foundational Principles and Emerging Technologies
Guy Berger, introduced by Tatevik as “the father of the ROAMX” and the regional ROAMX indicators, emphasized that “Internet universality remains foundational for AI and digital technologies, enabling people to both access and produce content and services.” This perspective suggests foundational digital governance principles remain relevant as technologies evolve.
A brief exchange between Berger and Esterhuysen revealed different perspectives on terminology, with Esterhuysen suggesting “Internet universality” might not be “future-proof” while acknowledging that people readily understand the underlying ROAMX principles.
## Persistent Challenges and Gaps
### Data and Coordination Issues
The discussion consistently highlighted the lack of comprehensive sex-disaggregated data across countries, making it difficult to assess and address gender digital divides. Coordination challenges within government structures emerged as a common theme, even in countries with established consultation mechanisms.
### Environmental and Emerging Technology Governance
Senne briefly noted that environmental issues like energy consumption and electronic waste are “largely overlooked” in current digital policies. The integration of AI governance into the second-generation indicators reflects growing recognition of the need to address emerging technologies within existing frameworks.
## Integration with WSIS Plus 20 and Global Digital Cooperation
Storti positioned ROAMX as a strategic tool for the upcoming WSIS Plus 20 review process, emphasizing its role in translating WSIS ideals into measurable outcomes. The framework’s comprehensive coverage of WSIS commitments and demonstrated implementation across multiple countries provides concrete evidence for assessing global progress on information society development goals.
## Key Recommendations and Next Steps
The session concluded with several concrete recommendations:
– UNESCO called on governments, regulators, civil society, and stakeholders to embrace ROAMX as a strategic tool for digital transformation
– Participants encouraged national and regional Internet Governance Forums to explore using the ROAMX framework for their initiatives
– Speakers emphasized the importance of addressing data gaps by turning them into policy recommendations
– The discussion highlighted potential for developing collaboration between ROAMX assessments and existing national/regional initiatives
## Conclusion
The discussion demonstrated strong consensus among diverse stakeholders about ROAMX’s practical value while identifying important areas for continued development. The framework’s evolution from a periodic assessment tool to a comprehensive policy lifecycle instrument reflects its adaptability and growing recognition across different contexts.
The combination of theoretical framework and practical implementation experiences from Brazil, Fiji, and Kenya provided concrete evidence of both the framework’s utility and persistent challenges in digital governance. The session successfully moved beyond simple advocacy to critical examination of how comprehensive frameworks can be more effectively integrated into digital policy development and implementation processes.
Session transcript
Tatevik Grigoryan: … We should go there. No, I’m not connected. Okay. Let’s just… I’ll just… If you have a question… It’s 20, okay. Can you send it to me now? Apparently they can hear. Okay, my colleague sent me the link and apparently they can hear me now. Good morning everyone, online and here in the room, thank you so much for joining. You need to put on a headset to be able to follow us. Please, everyone, could you use the headsets, otherwise you won’t be able to hear us; those in the room should select channel number five. Apologies to the colleagues online, we’ll wait for the audience here to put on their headsets. Okay, so it’s channel number five. Thank you so much again for joining us early in the morning, online and here in the room. We’re very pleased to host you in this session focusing on the role of ROAMX; we’ll tell you a bit more about what ROAMX stands for and its role in measuring WSIS implementation and the action lines ahead of the WSIS Plus 20 review. I’m very pleased to introduce to you an excellent lineup of speakers here with me. We’re joined today in the panel by Dr. Tawfik Jelassi, the Assistant Director General of UNESCO for Communication and Information, who will deliver the opening keynote. Without any further ado, I’ll give the floor to Dr. Jelassi, and then we’ll go into the discussion and I’ll introduce my panel. Thank you so much, ADG, for being here; we very much look forward to your keynote remarks. Thank you.
Tawfik Jelassi: Thank you very much Tawfik, the panelists, participants, friends and colleagues. I’m very pleased to join you for this session on Romex driving WSIS implementation and digital cooperation. I would like to thank IGF for their support to UNESCO and for providing this opportunity for us to have an exchange on this topic. I’m also grateful to our speakers and Chris, who will be shortly introduced by the moderator. Their expertise and commitment have been instrumental in advancing the UNESCO work on Internet universality. As the WSIS Plus 20 review is underway, we are reminded that digital technologies are evolving faster than the frameworks that are designed to govern them. And yet 2.6 billion people remain offline as of today, most of them in the least developed regions. In low-income countries, only 27% of the population uses the Internet, compared to 93% in high-income countries. The cost of access, the lack of infrastructure, the entrenched inequalities, including gender gaps, continue to hinder digital inclusion. UNESCO has been advocating for a rights-based, human-centered and inclusive vision for the digital age. This framework gives emphasis to openness, accessibility, multi-stakeholder governance and capacity building. To ensure that this vision is not only aspirational but actionable, we need the right tools to identify gaps, guide reforms and measure progress. And this is where the ROM-X framework comes in. Since its initial launch back in 2018, and with the second-generation indicators which we released last year, the ROM-X has become a strategic enabler for national digital assessments. It supports evidence-based policymaking by helping countries assess their digital needs. 
It helps countries assess their digital ecosystems through the lens of the ROAM-X principles. For those who are not familiar with it, let me briefly remind you of its elements: R stands for Human Rights, O for Openness, A for Accessibility, M for Multi-Stakeholder Participation, and X refers to cross-cutting issues such as sustainability, gender equality and online safety. The revised indicators include new dimensions such as AI governance, environmental impact, privacy and meaningful connectivity, aligning the framework with global milestones such as NETmundial+10 and the Global Digital Compact. So far, more than 40 countries have applied the ROAM-X framework. In Argentina, as an example, the national digital assessment informed legislation to reform data protection laws. In Paraguay, the National Statistics Office began collecting disaggregated digital data. A ROAM-X capacity-building workshop took place in Fiji earlier this year and has inspired digital policy planning involving national stakeholders. Countries like Brazil and Uzbekistan have begun the pilot implementation of our second-generation indicators. These outcomes are not isolated. They reflect a growing recognition that data-driven, inclusive governance is critical for the digital age. However, the digital divide continues to persist, especially for women and girls, who remain underrepresented online and in digital policymaking. The revised ROAM-X indicators maintain a strong emphasis on gender inclusion, digital literacy, affordability, cultural norms, and safety concerns. This brings me to our call to action. We urge governments, regulators, civil society, and all stakeholders to embrace ROAM-X as a strategic tool to drive digital transformation. It offers a robust, adaptable, and forward-looking methodology to monitor WSIS implementation, align with SDG targets, and ensure digital development that is transparent, equitable, and accountable.
As the historian Melvin Kranzberg reminded us, technology is neither good nor bad, nor is it neutral. The impact of technology is shaped by human intent, by the choices we make, the values that we want to protect, and the systems we design. Let's develop, use, and govern technology in ways that promote shared progress. Let's put people, rights, and equity at the center of our digital future. We believe that with ROAM-X, we have the means to achieve that. Thank you for your attention.
Tatevik Grigoryan: Thank you so much, ADG, for setting the stage, giving a comprehensive overview of what ROAM-X stands for, and sharing a few examples of how we have demonstrated its value and power. And thank you so much again for being here; I know you won't be able to stay until the end, but we very much appreciate and value your presence. In addition to demonstrating the value of ROAM-X and showcasing a few examples, including how the revised indicators have now been introduced in Brazil and Uzbekistan, the session will also focus on the relevance of ROAM-X as a framework for assessing progress on the WSIS commitments and the SDGs. I will now introduce my speakers in speaking order, rather than seating order. Fabio Senne is a project coordinator at the Regional Centre of Studies on Information and Communication Technologies, CETIC.br, which is also a UNESCO Category 2 Institute. Fabio has been involved in the initial IUI framework development; Brazil was the first country to pilot the assessment, and the first to pilot the revised indicators, which we launched last year at the IGF. Next is Anriette Esterhuysen, a human rights defender and computer networking pioneer from South Africa, a pioneer in the use of the Internet and communication technologies. She is a former chair of the Multistakeholder Advisory Group of the IGF and a former executive director of the Association for Progressive Communications, and she continues to work with APC and many other entities, including UNESCO. She has been instrumental in the development of the initial indicators, their revision, and also the implementation of the workshop in Fiji.
Online, we are joined by Dorcas Muthoni, who is the founder and chief executive officer of Open World, a specialist computer software company she established in Kenya when she was only 24 years old. And finally, I have Chris Buckridge to my right, an independent consultant, analyst, and commentator in the Internet governance and digital policy space. He worked for more than two decades with regional Internet registries, starting with APNIC. He is a current MAG member and has held, and still holds, many other roles, which I would like to invite him to share with us. I will not read everything out. I am also joined online by two of my colleagues: Davide Storti, a programme specialist at UNESCO for Digital Policies and Transformation, who coordinates our activities related to WSIS, and Camilla Gonzalez, who also works on the ROAM-X initiative. Thank you again. I would like to start by giving the floor to Davide Storti, who will give a bit more of an overview of the interaction between ROAM-X and WSIS, and how the idea came about of using ROAM-X to measure WSIS implementation. Please, Davide.
Davide Storti: Good morning, everyone. Thank you, Tatevik. As ADG Jelassi has mentioned, technology moves super fast, and UNESCO has highlighted on a number of occasions the different shifts that have happened in technology and in society. So, when considering WSIS as a process, and the action lines that lay down the foundational aspirations of the WSIS process — access, inclusion, rights — the ROAM-X indicators translate these ideals into measurable outcomes. The connection between WSIS Plus 20 and the different challenges brought up by these shifts, seen through the lens of the IUI indicators, is the possibility of measuring the advancement of these technologies: artificial intelligence, the impact of digitalization, the status of indicators like gender equality or the online rights of the population, and also measures of data protection, trust in the media, and misinformation, for example. So this framework may actually help or support the measurement of how the WSIS framework, which is based on principles, evolves and is anchored in reality, by catalyzing evidence-based results and also collaboration among the different stakeholders of the WSIS process. It provides a common language for different stakeholders, country-to-country reporting and analysis, and a basis for comparison to highlight different paths of evolution. As was mentioned, a big chunk of the population is not online yet, so there are different aspects to be taken into account. It can also give inputs to dialogues like the IGF through national and regional analyses of progress, and provide a diagnostic for guiding investments by country, needs assessments, and needs in terms of policies and regulations.
Across the different WSIS action lines, ROAM-X provides grounds for tracking participatory and transparent digital policymaking, for example, or for examining connectivity, affordability and digital skills, or for giving granular ways to measure online safety, data protection, and even cybersecurity strategy. So there is an opportunity to use a framework that already has established measurements, that has already been applied in different countries, and whose new revision helps us be more precise in this kind of measurement. If used properly — and I think the panel today will give different points of view on this matter — the national-level evidence from ROAM-X applied in different countries may give a better view of the global impact of the WSIS framework overall, and may also guide the review: the findings of the indicators in different countries may provide grounds for the review itself and for the future of WSIS as the review reaches its conclusions. I look forward to this discussion, and I invite all IGF stakeholders to consider the IUI framework as one significant basis for the WSIS process as it moves forward. Thank you.
Tatevik Grigoryan: Thank you so much, Davide, for your excellent intervention and for your call to adopt and approach ROAM-X under this lens. Before I give the floor to Fabio, who will now focus on the application of ROAM-X, give a few examples and show us the first impressive findings of the implementation of the revised indicators — and Fabio, bravo indeed for making such progress already since the launch of the indicators — I wanted to acknowledge the presence of Guy Berger, who is sitting in the audience, the father of the Internet Universality concept and the original ROAM-X indicators. Thank you so much, Guy, for being here, and I hope we can hear from you afterwards. But now, Fabio, please tell us a bit more about ROAM-X in Brazil and the new application: were the revised indicators effective, and how were they perhaps a bit different from the first experience? Thank you.
Fabio Senne: Okay, thank you very much, Tatevik. I acknowledge all the speakers and panelists; it is a real pleasure to be here. CETIC.br, NIC.br and CGI.br have been there since the very beginning of the ROAM-X process, in both the creation of the framework and its implementation. As you said, Brazil was the first country to pilot this framework back in 2018. And now we have accepted the challenge that Mr. Jelassi presented to us back in December at the past IGF: to renew the data collection in Brazil using the new second-generation version of the indicators. We accepted this challenge and have concluded the data collection phase of the project. I will bring here some initial results, but of course they will go through a multi-stakeholder validation, and we don't have the full report yet. Just to mention, as I said, Brazil was involved in the discussion of the framework, along with lots of consultations with the multi-stakeholder community, and back in 2019 we launched the first assessment report of the country in this area.
The report was launched at IGF 2019 in Berlin, and along this process we also supported other countries, especially Latin American countries, in implementing this methodology, so we had lots of exchange during this period. From 2023 to 2024 we also supported UNESCO in the revision of the indicators — the five-year revision that UNESCO had planned — and now we are implementing the new version. Just to highlight a few preliminary findings from the discussions. First of all, in the case of Brazil, it is important to say that CETIC.br and NIC.br are responsible for the data collection and for the technical team collecting all the indicators from multiple sources, but we have a multi-stakeholder advisory committee within CGI.br, which supports and advises the whole process. A first meeting of CGI.br validated the start of the process, and after the data collection we will have validation from CGI.br as well. Turning to a few advancements and challenges we have seen so far: in the past years, Brazil has seen an intensification of the public and institutional debate on platform regulation and information integrity — as Davide mentioned, this is also a WSIS Plus 20 topic — driven by the growing impact of disinformation and hate speech, and how these affect democratic processes. Discussion has focused on the responsibility of digital platforms in moderating harmful content and protecting users' rights, especially in light of judicial interventions that took place in the country, notably by the electoral bodies. However, there is still a lack of consensus on how to approve specific legislation on the topic, and the debate remains fragmented along different political interests.
And while there is a legal framework in place in the country, anchored by the Marco Civil da Internet and the LGPD, which is our local GDPR, enforcement is still uneven and critical gaps persist. If you take the openness dimension, it is very interesting, because over the past five years we have seen huge advances in the provision of digital public services, and also in the dimension of DPIs. For instance, the platform gov.br in Brazil nowadays offers 4,500 services online to more than 160 million users. These initiatives have supported administrative processes and increased access to public information in a more participatory government. However, these gains are not equally distributed, so there are still significant inequalities in access to these digital online services, especially among populations with low digital literacy, limited connectivity or disabilities. So there are usability gaps, despite significant investments in bridging coverage gaps in the country in this period. The concept of universal and meaningful connectivity has also entered the national policy conversation and is being addressed in several strategic plans that are under discussion. But there is a growing recognition of the challenges: connectivity remains unevenly distributed, with rural areas and lower-income groups facing disadvantages. Gender and racial disparities are also relevant. We show in the report, for instance, that black women present lower levels of meaningful connectivity over time, exacerbated by digital skill gaps and mobile-only access in this stratum of the population. So there is a need for equity-driven strategies that address these overlapping dimensions.
In the case of multi-stakeholder participation, Brazil has a legal and institutional architecture that provides a solid foundation, through the Marco Civil da Internet and the institutional role of CGI.br, which embodies the principles of collaborative democracy. This model of democratic and transparent governance is internationally recognized and has supported inclusive dialogues such as the Brazilian IGF, which is coordinated by CGI.br. However, if we take broader digital policies, multi-stakeholder participation remains inconsistent: in many ministries and regulatory environments, the inclusion of stakeholders is still fragmented. And finally, on the cross-cutting issues: one of the new indicators included in the framework relates to AI development and governance. Brazil has advanced in AI governance in the past few years with the launch of the National Artificial Intelligence Strategy and the National AI Plan. However, the governance framework for AI is still in progress, with a national law under discussion in Congress, and crucial aspects such as transparency, risk assessment and rights-based safeguards are still unresolved — as is multi-stakeholder engagement when it comes to AI. And if you take another new indicator, environmental issues, this is one area we found largely overlooked in digital policies so far: issues such as energy consumption, e-waste and emissions are not yet well integrated into the governance framework. This is a challenge we identified thanks to this newly proposed indicator. So, just these few overall remarks. We are presenting preliminary results here; we will now enter the validation phase in our multi-stakeholder discussion, and plan to launch the final report by September or October. So that's it.
And we can discuss the implications of this further later. Thank you very much.
Tatevik Grigoryan: Thanks so much, Fabio, for presenting the findings. It is very interesting to observe both the progress and the issues that persist, and also to see the application of these newly introduced indicators. I look forward to reading the report. I would now like to give the floor to Anriette. Anriette, I would like you to please focus on Fiji. This year, for the first time ever, we piloted a new intervention in the margins of the ROAM-X framework: following the assessment, we piloted a capacity-building workshop to support the multi-stakeholder advisory board, and also the wider stakeholder community in Fiji, in implementing the recommendations — focusing on digital policymaking, policy implementation and capacity building, with the ROAM-X assessment as the basis and evidence for that. Anriette, would you please focus on that?
Anriette Esterhuysen: Thank you, Tatevik. Well, yes, it was a really interesting experience. Fiji, a relatively small country, had recently approved a national digital strategy, and they had completed a national assessment using the ROAM-X framework. So we tried to bring these together. There was one question you had in the script which I think is important — and I'm glad you're not asking me all the questions, because I prefer ad-libbing — and that is: what should countries do when they are implementing digital strategies at a high level? For us the answer was clearly consultation, collaboration and connections. And the most powerful learning of this workshop — a policy implementation workshop on how you can use the ROAM-X framework to support implementation of the national digital strategy — was that even after about an eight-month period, with the people developing the national digital strategy believing they had consulted thoroughly, two-thirds of the government departments — and we must have had about eight different ministries — did not know about the national digital strategy. So there was this disconnect between the people who developed the strategy, who were convinced that their consultation process was perfect, and the people in the government departments who have to implement the strategy, who had never heard of it. And that's one thing: you can never underestimate the complexity of different parts of government. We're not even talking about multi-stakeholder collaboration here.
We're talking about intergovernmental collaboration: the complexity of them actually working together, collaborating, understanding who is doing what, and how they can make the connections between the different issues. And I think for us, as the team coming from UNESCO and the people who had been involved in the Fiji national assessment, we had a really powerful discovery: the ROAM-X framework is not just suited to assessing a national Internet environment. It actually works extremely well in assessing a strategy before it is implemented; it could work just as well as a planning tool, in assessing the design of a strategy or the design of its implementation, and equally at the level of monitoring and evaluation. So in fact what we found is that the ROAM-X framework is suited to the full lifecycle of policy development and implementation, from design to monitoring, learning and evaluation. And I think sometimes we forget — people always talk about the indicators, but they forget that the indicators are actually there to help you answer the primary modality of the framework, which is questions. And I'm going to give you an example. In Theme F of the framework, on social, economic and cultural rights — which is in Category R, the rights, the R of the R-O-A-M-X — the first question is: does the national strategy for digital development address economic, social and cultural aspects of digital rights? And then there are indicators; one of them is evidence of inclusion. Now, you can apply this question as easily to a policy instrument as you can apply it to the national Internet context. And I think that's what we found extremely useful. We also learned that in spite of their best efforts to develop this national digital strategy, it tended to be very supply-driven. It focused a lot on infrastructure and planning. It did not focus on rights at all; it overlooked rights.
It might have had an emphasis on data protection, but aside from that there wasn't very much. It didn't explicitly address multi-stakeholder participation, even though it used the term multi-stakeholder. Openness was treated in a very narrow way. And when it comes to gender, there was virtually no content — some emphasis on girls and on capacity building for women and girls. So I think that was the other learning: even though the people who develop these digital strategies are doing it to the best of their ability, and they try to be as inclusive as possible, they tend to overlook the R-O-A-M-X dimensions. And that's the other thing that we found: there was a complementarity, a lens provided by the ROAM-X framework, which really filled the gaps and connected the dots. You know, ROAM-X actually started at a conference called Connecting the Dots, and I think it still plays that role — to connect the dots between initiatives aimed at building digital literacy and building access to infrastructure. And then, Tatevik, the final thing I can share — maybe we can come back to it, although we don't have much time left — is that whereas the ROAM-X principles endure, and I think they're very future-proof, the concept of Internet universality was not future-proof. In fact, people don't really relate to it, because they do have Internet access. They might not have meaningful Internet access, or they might not have equal Internet access, but people found the concept of Internet universality difficult to relate to. They did not, however, find the principles of rights, openness, accessibility, multi-stakeholder participation, and the issues covered under cross-cutting difficult to relate to. They even found the concept of an Internet governance forum difficult to understand.
We tried to propose the idea of a national Internet governance forum as a way of building more collaboration around the implementation of the national digital strategy, but when they hear the words Internet governance forum, it doesn't convey to them the idea of a forum that actually involves all aspects of digital cooperation and governance. To me, that was a real revelation. Very useful, and I personally think the ROAM-X framework and principles have an adaptability and utility that we are only just beginning to discover.
Tatevik Grigoryan: Thanks so much, Anriette, for these insights. I know we are behind time and that you need to leave early, so I hope Chris and Dorcas Muthoni online will forgive me if I ask you a follow-up question. Speaking of adaptability, and looking ahead to WSIS Plus 20 and the GDC, could you elaborate on how ROAM-X can help ensure that the next phase of global digital cooperation is more inclusive and grounded in human rights and equity, especially in the Global South?
Anriette Esterhuysen: I think in a way I've answered that already. We need to use the framework not just to do these periodic national assessments — which are very powerful, and work very well in a country like Brazil, where you have institutions like NIC.br and CETIC.br, and you have CGI.br, the Brazilian Internet Steering Committee, because you can come back, reflect and fill gaps. It also works well as a planning tool, in assessing strategies and their implementation, and I think it can also be used at the monitoring and evaluation stage.
Tatevik Grigoryan: Thanks so much, Anriette. I now turn to our next speaker, who is online: Dr. Dorcas Muthoni. I would like to invite you to speak about your experience, as you have had a direct impact on digital transformation in Africa through your work. Would you please highlight some of the biggest implementation challenges that you have faced in turning these policies and strategies — since we have also been talking about strategies — into results on the ground?
Dorcas Muthoni: Okay, thank you. Thank you very much for that question, and for the opportunity to contribute to this panel. I want to speak specifically about digital transformation across gender spaces — the gender digital divide — as well as the small business sector that I have worked on a lot in the recent past. One of the areas we have found very challenging, coming to gender, is that when we want to assess, for example, whether any sex-disaggregated data is available for analysis — to understand penetration, access, how social norms affect adoption and inclusivity in digital transformation, and what the disparities are in technology adoption — the truth of the matter is that there is hardly any data. This is very challenging, because it means this is one area we are not really proactive in assessing, and that in turn impacts national strategies. When you go to small businesses, there is a lot of uptake of technology, especially mobile-driven access, which tends to have a very strong social aspect. But when we want to assess the productive use of the Internet in these businesses, you struggle to find data you can rely on. So what I find really outstanding about the ROAM-X framework is that we should encourage a lot more national assessments — although national assessments can take time, because you need to convince policymakers, and you may not have a well-placed government department keen on pursuing this kind of research given other priorities — and, beyond national assessments, encourage other stakeholders to really take up these assessments
and help us access data that can give us baselines and points of reflection, and encourage people to use that data to take actions that begin to change the trajectory. I know we are all very excited about many emerging technologies, but what you find is that a lot of people only hear about them; they cannot really be part of the productive elements of how these technologies help. And particularly when it comes to the gender digital divide, one of the things we have found very challenging is how to get women into leadership. We want women to come in as users, we want them to embrace technology, but when they are in the technical areas, how do we encourage them to go all the way up into leadership roles, into policy and decision-making roles? And how do we support them — with reference to data again — to get to that level? Because they then form the role models that will inspire younger generations, and for some reason you find that a lot of women who succeed want to go back and do something, because they have had very lonely career journeys. So the lack of data that allows us to assess from these kinds of perspectives is really one of the biggest gaps. Then, thinking about ROAM-X: in one of my organizations, called AFJ, we really support women growing in the technology field and pursuing their careers. We are very interested in gender equality, and we want to see women take these opportunities, and so I found that the ROAM-X framework is really a good element here.
I would really love to hear more about non-government implementations of these assessments or processes, because this is very interesting to my organization: we are working on a monitoring, evaluation and learning framework for a women-in-leadership programme, and we want to find out how we can use these kinds of frameworks, which have been worked on in different parts of the world with a lot of research behind them, to inform the initiatives that we take. The other important thing is that there has been very big growth in interest in entrepreneurship: a lot of startups all over the continent, a lot of developers going into this space, a lot of interest even from university students wanting to get into this space. And the question, again, is how this is actually impacting the growth of really productive technologies that are locally responsive on our continent. This is one of the things we would need to assess, and if we have a reference, a baseline, it would really help people who take initiatives to support the reduction of the digital divide — whether the gender divide or, more generally, participation at a highly productive level in software development, whether in open source communities or otherwise, and the growth of high-scaling startups on the continent. It could also inform governments who take the initiative to carry out these kinds of assessments, as well as researchers who want to establish what is going on in different parts of the economy across the continent. These are some of the comments I am able to share at this point, and I am happy to stay and take any questions that come through later.
Tatevik Grigoryan: Thanks so much for your valuable inputs, Dorcas. Indeed, you mentioned data gaps, which are a major issue across all the countries where we have implemented ROAM-X. What we tend to do is turn these gaps into policy recommendations, and encourage data gathering and data availability. I am actually very pleased that Kenya was one of the first countries, along with Brazil, to implement the ROAM-X indicators, and also to do the first follow-up assessment to measure the progress made. Now I would like to give the floor to Chris. Chris, you have a really long-standing engagement with Internet governance processes. Based on your experience, could you please elaborate on how you see ROAM-X contributing to more concrete, measurable follow-up on WSIS commitments? Thank you.
Chris Buckridge: Thank you very much, Tatevik, for having me here. I feel hardly an expert, really, in comparison to many of the other speakers here today, who have been far more involved in the development of the ROAM-X principles and in the implementation of the assessments. My own experience of it has been a little more piecemeal — watching and observing its development, dipping in occasionally at events such as this. Most recently that was at EuroDIG, the European Dialogue on Internet Governance, one of the national and regional initiatives in the Internet governance space. It is fitting, in a way, that this first session of IGF 2025 — even if the early hour has perhaps meant a few fewer people in the room — is a good opportunity and time for us to consider the ROAM-X principles, this project, and how it fits into the broader Internet governance space, because I think it is a really important practical development. I am going back to a phrase that Mr. Jelassi used in his comments at the beginning: data-driven, inclusive governance. This year, as we head into the WSIS Plus 20 review, we are very focused on how Internet governance, how digital governance, is evolving. That idea of data-driven, inclusive governance is really important, because those two concepts are mutually supporting. Data-driven governance cannot be comprehensive unless it is inclusive, unless it draws in all parts of the community. But at the same time, inclusive governance cannot be effective or practical unless it is data-driven, unless it is grounded in the kind of practical knowledge and awareness that a ROAM-X assessment can provide.
So as we look to the evolution of Internet governance, as we look to making practical, output-focused implementations of Internet governance, I think the ROAMx principles are a really important example that can be leveraged, developed and utilised by the whole community. In that sense, what I would see as an important discussion in the context of the Internet Governance Forum, and in the context of its wider network of NRIs, the National and Regional Initiatives, is how that can all work together, how it can be complementary. The examples that Fabio spoke of in Brazil are really important: that utilisation of NIC.br and CGI.br as a multi-stakeholder element of the assessment process. The ROAMx assessment process always includes that multi-stakeholder advisory committee, and many countries won’t have the situation that Brazil very luckily had of a pre-existing institution that could serve that function. But I think that in itself is a real opportunity, because there are two possibilities here. There is the possibility of a ROAMx assessment being initiated that uses or works closely with an existing national or regional initiative to provide and foster that multi-stakeholder input. But on the other hand, if there is no pre-existing national or regional initiative, a ROAMx assessment and its multi-stakeholder advisory committee could be a really useful catalyst for developing that kind of sustainable, ongoing multi-stakeholder engagement by the community. And that goes back a bit to what Anriette was saying: a ROAMx assessment can be a one-off or a recurring tool, but it can also be a method for generating and fostering sustainable multi-stakeholder engagement in these digital governance processes, in digital governance understanding and development.
So I think the opportunities for complementarity between ROAMx and everything else developing in the Internet governance space are really important, and that is one reason why it’s so good to be talking about it here at the Internet Governance Forum.
Tatevik Grigoryan: Thanks so much, Chris, and thank you for pointing to collaboration and work with the national and regional initiatives, the IGFs. We indeed call on the national and regional initiatives, and we stand ready to work with them to advance and roll out ROAMx assessments in their local contexts. Mindful of time, I would now like to open the floor to the audience, both online and in the room, for any questions to the panelists, any reactions or feedback. I would be very interested to hear from Guy Berger, as the father of ROAMx, as I mentioned, and would be delighted if you could start the interventions from the audience, please. You have to go to the mic.
Guy Berger: Thank you. Thank you so much for the presentation, and it’s wonderful to see this system evolving and being the subject of a panel like this. It struck me that for some people, in the dazzle of AI, the term Internet universality may seem quaint and old-fashioned, but of course we would not have AI, we would not have data in AI, if we did not have connectivity. And the important thing, I think, about this term Internet universality is that it sensitizes us, as was said, to the fact that many people don’t have connectivity, and that impoverishes everybody. But second, connectivity is about people having access not just to consume content and services, but to produce content and services. So if we really want to see a world with many more alternatives to the big digital players, if we want to see much more content in local languages, then we’ve got to put this emphasis on internet universality, because it is the foundation for everything else that’s happening in the digital world. And so I think that this tool, these internet universality indicators, ROAMx, is a really valuable way for a country to take stock of where the gaps are in terms of actually enabling its society as a whole to have equitable opportunities to become producers and creators in the digital economy and to contribute to the global tech stack. At the moment, we don’t have that: we’ve got too many big dominant players and much too little participation reflecting the ground-up possibilities that humanity could have from these technologies. So I really commend these indicators as a way to produce an evidence base for progress that can unleash a lot more participation. Because if we don’t have universality of the internet, all this other stuff is just going to be of limited benefit. Thank you.
Tatevik Grigoryan: Thank you so much, Guy. ADG, would you like to react, please?
Tawfik Jelassi: Yes, I would like to follow up on what Guy Berger just said. Guy mentioned the importance of having digital infrastructure and connectivity in order to create content and services. And I would like to add a third pillar, if I may, which is digital literacy: the capacity building and capacity development for people to leverage the digital infrastructure towards creating content and services. I think these are three critical success factors to ensure this internet universality and meaningful connectivity. And here I want to refer to an international conference that UNESCO organized a couple of weeks ago on capacity building in the fields of AI and digital transformation for the public sector. So again, the emphasis is on capacity building, because our studies and surveys show that in order to bridge the gap we really need this wide capacity building, this digital literacy, in the new digital age and AI era. Otherwise we cannot have this inclusive information society, this inclusivity that was mentioned earlier. So I think digital skills and capacity building are a third key pillar I wanted to add to what Guy rightly said. Thank you.
Tatevik Grigoryan: Thanks so much ADG. Anriette, did you want to react or do we take questions?
Anriette Esterhuysen: If there’s another question I’d rather take the other question. Otherwise I’ll react.
Tatevik Grigoryan: Are there any questions in the audience? No? Any questions online? I don’t see any. Anriette, you can go ahead please.
Anriette Esterhuysen: So my reaction is really just, Guy, that as I said, I think digital inclusion is a more meaningful concept for people; internet universality is just harder for people to relate to. That’s just a reflection, and I agree with everything else you’ve said. And then, in response to what Tawfik said about digital literacy: I think capacity development is absolutely essential, but here the ROAMx framework is actually quite useful for assessing how digital literacy programs are designed, developed and implemented. Because so many digital literacy programs are vendor-driven, or actually just teach people how to use their devices. They’re not linked with rights education, with civic education as it’s called; they’re not really enabling people to fully understand the complexity of the social media environment. And I think even just using the ROAMx framework, its diversity and gender dimensions, to assess a digital literacy program is going to produce a better digital literacy program. So you’re absolutely right, but we have to also be realistic about the fact that so many digital literacy programs are themselves not connecting the dots.
Tatevik Grigoryan: Good. Thanks so much, Anriette. Are there any further comments or questions, whether online or in the room? I don’t see any, so thank you so much. I would now like to give one minute to each panelist for their final reflections, anything you wanted to say. We’ll start with you, Fabio, please.
Fabio Senne: Thank you, Tatevik. Well, just to stress a few more practical results that we can see in this process. I think one of them is that multistakeholder engagement is good not only in terms of the process itself, but also in terms of the quality of the data you can gather. This is something very interesting that we saw in this last implementation of the model: many sources of information coming from civil society and from the private sector that are more or less hidden in official documentation. So this is very key for the process. And a second thing that was already mentioned is the need for data disaggregation to really understand the topic. So, for instance, with gender gaps in Brazil, if you take just the main picture of basic access, you don’t see huge gaps. But when it comes to meaningful connectivity, in a deeper analysis, you can see very large gaps. So breaking the data down into more disaggregated indicators is something the ROAMx indicators can do: not just giving a ranking of which country is better, but giving a roadmap for action. I think this is the main characteristic of the ROAMx indicators. Thank you.
Tatevik Grigoryan: Thanks so much, Fabio, and thank you for pointing out the issue of ranking. I think what countries have valued a lot is that ROAMx indeed doesn’t do any ranking or comparison; it’s a fully voluntary assessment aimed at guiding and helping the country. This is something very important to point out, and it has been appreciated by all the stakeholders. Chris, would you like to go next?
Chris Buckridge: Sure, thank you, Tatevik. I’ll be brief here; I know we’re wrapping up. I’ll use my time just to agree very strongly with Guy’s point about the link between internet universality and so much else of our digital society. I think that’s a very live and active discussion at the moment as we’re looking at the Internet Governance Forum. As Anriette said, an Internet Governance Forum doesn’t necessarily capture for many people the full breadth of what our digital society now means, but I think the ROAMx framework does a really good job of highlighting and reinforcing how interlinked and interreliant all of these aspects are. So, really important.
Tatevik Grigoryan: Thanks so much. Dorcas, would you like to give your concluding one-minute remarks?
Dorcas Muthoni: Thank you. Yeah, I would just like to say that I agree with the input on disaggregated data that allows us to pick up different perspectives. For example, gender equality across different areas of assessment would be really important, because without that kind of information, initiatives tend to be a bit general, and that can allow the gender digital divide, for example, to persist. Even as we were starting the forum, it was very clear that this is seen to be one of the areas we persistently struggle with across the board. And for me, I look at it not just from usage, adoption and access, but also the ability of women to participate in the production and creation of technologies and to be decision makers and policy makers. This again is one of the things we need to look at, because it informs how much we will inspire coming generations to enter these areas. It’s a big gap: we struggle with being the only woman in the room, or having no woman in the room, when it comes to a lot of these opportunities. So that’s very important. The other thing I would say concerns sustainability: when we get this moving, how well will we be able to sustain it? I think that is the purpose of regular assessment, which is also very important, because then we know what we have achieved. Are we keeping up, or are we falling back? What’s going on? This is really important, because we cannot move this world backwards; we are only going forward. So if we get to know what’s happening today and the actions being taken in terms of policy interventions, then we can see the effectiveness of policy. So that’s my input. Thank you.
Tatevik Grigoryan: Thank you so much, Dorcas, and thank you very much for pointing out the gender digital divide, which is one of the key issues we are trying to address, and of course to close this gap with the support of and in collaboration with all actors. Anriette.
Anriette Esterhuysen: Thanks, Tatevik. I mean, I started off by saying that effective implementation of a national digital transformation strategy needs consultation, collaboration and connections, and I think if we do that, we are going to have more impact and it will be more inclusive. I’m also very excited by the idea of the national and regional IGFs beginning to explore how they can use the Internet Universality ROAMx framework.
Tatevik Grigoryan: Thanks so much, Anriette, and thank you so much to all the panelists. Before I give the floor to ADG Jelassi to close the session, I wanted to thank each one of you: Dorcas online, Davide and Camilla online, Fabio, Chris, Anriette and Guy, for your valuable contributions. I never cease to learn every time we have a discussion around ROAMx. I am really excited to see the report on Brazil. Thank you so much for your long-standing support of ROAMx, and for all the wonderful ideas and calls, which we will take forward and take into consideration for action as we carry on with the ROAMx implementation. Thank you so much again. And ADG, would you like to give concluding remarks to close the session?
Tawfik Jelassi: Thank you, Tatevik. I’ll be very brief. First of all, I would like to thank all the participants, online but also in the room, who came for this relatively early morning session on day zero of the IGF. Clearly, you have shown commitment, engagement and interest in the subject matter we focused on during this session. I would also like to thank the panelists for sharing with us their expert insights and the practical country experiences. I think ultimately, as many of the speakers said, including Guy Berger in his remarks, it’s all about digital inclusion, and in the United Nations we have an expression that we use quite often: digital inclusion has to leave no one behind. This is very important; it’s at the heart of ROAMx, and it runs along the three pillars which I mentioned, and which were mentioned by the speakers: digital connectivity, digital literacy and skills, and digital services and content. Stay tuned: if you would like to take this discussion further, feel free to contact us at UNESCO or one of the panelists featured in this session, and enjoy the IGF in the days ahead.
Anriette Esterhuysen: Thanks ADG.
Tawfik Jelassi
Speech speed
118 words per minute
Speech length
1037 words
Speech time
526 seconds
ROAMx stands for Rights, Openness, Accessibility, Multi-stakeholder participation, with X representing cross-cutting issues like sustainability and gender equality
Explanation
Jelassi explains the acronym ROAMx, where R stands for human Rights, O for Openness, A for Accessibility, M for Multi-stakeholder participation, and X refers to cross-cutting issues such as sustainability, gender equality and online safety. This framework provides a comprehensive approach to assessing digital ecosystems.
Evidence
The revised indicators include new dimensions such as AI governance, environmental impact, privacy and meaningful connectivity, aligning the framework with global milestones such as NETmundial+10 and the Global Digital Compact
Major discussion point
ROAMx Framework Overview and Purpose
Topics
Development | Human rights | Legal and regulatory
Agreed with
– Davide Storti
– Fabio Senne
– Anriette Esterhuysen
– Chris Buckridge
– Tatevik Grigoryan
Agreed on
ROAMx framework provides comprehensive assessment methodology for digital development
ROAMx serves as a strategic enabler for national digital assessments and evidence-based policymaking
Explanation
Jelassi argues that ROAMx provides the right tools to identify gaps, guide reforms and measure progress in digital development. The framework supports evidence-based policymaking by helping countries assess their digital needs and ecosystems.
Evidence
Since its initial launch in 2018, and with the second-generation indicators released last year, ROAMx has become a strategic enabler for national digital assessments
Major discussion point
ROAMx Framework Overview and Purpose
Topics
Development | Legal and regulatory
Agreed with
– Davide Storti
– Fabio Senne
– Anriette Esterhuysen
– Chris Buckridge
– Tatevik Grigoryan
Agreed on
ROAMx framework provides comprehensive assessment methodology for digital development
The framework has been applied in over 40 countries with concrete policy outcomes
Explanation
Jelassi demonstrates the practical impact of ROMEX by citing its widespread adoption and concrete results. The framework has moved beyond theory to produce tangible policy changes in multiple countries.
Evidence
In Argentina, the National Digital Assessment informed legislation to reform data protection laws. In Paraguay, the National Statistics Office began collecting disaggregated digital data. Countries like Brazil and Uzbekistan have begun pilot implementation of second-generation indicators
Major discussion point
ROAMx Framework Overview and Purpose
Topics
Development | Legal and regulatory | Human rights
Agreed with
– Fabio Senne
– Anriette Esterhuysen
– Chris Buckridge
– Tatevik Grigoryan
Agreed on
Multi-stakeholder engagement is essential for effective digital governance and policy implementation
2.6 billion people remain offline globally, with only 27% of low-income country populations using internet compared to 93% in high-income countries
Explanation
Jelassi highlights the persistent global digital divide by presenting stark statistics about internet access disparities. This data underscores the urgent need for frameworks like ROMEX to address digital inequalities.
Evidence
2.6 billion people remain offline as of today, most of them in the least developed regions. In low-income countries, only 27% of the population uses the Internet, compared to 93% in high-income countries
Major discussion point
Digital Divide and Inclusion Challenges
Topics
Development | Digital access
Agreed with
– Fabio Senne
– Anriette Esterhuysen
– Dorcas Muthoni
Agreed on
Persistent digital divides require targeted interventions, especially for marginalized groups
Digital literacy and capacity building are critical success factors alongside infrastructure and connectivity
Explanation
Jelassi argues that having digital infrastructure and connectivity alone is insufficient for meaningful digital participation. He emphasizes that digital literacy and capacity building form a third critical pillar necessary for people to effectively leverage digital infrastructure.
Evidence
UNESCO organized an international conference on capacity building in AI and digital transformation for the public sector. Studies show that bridging the gap requires wide capacity building and digital literacy in the new digital age and AI era
Major discussion point
Internet Universality and Future Digital Cooperation
Topics
Development | Capacity development | Sociocultural
Agreed with
– Anriette Esterhuysen
– Guy Berger
Agreed on
Digital literacy and capacity building are fundamental requirements for meaningful digital participation
Davide Storti
Speech speed
94 words per minute
Speech length
512 words
Speech time
323 seconds
ROAMx translates WSIS ideals into measurable outcomes and provides a common language for stakeholders
Explanation
Storti explains how ROAMx bridges the gap between the foundational aspirations of WSIS (like access, inclusion, rights) and practical measurement. The framework enables evidence-based results and collaboration among different WSIS stakeholders by providing a shared framework for assessment.
Evidence
The framework helps measure advancement of technologies like Artificial Intelligence, impact of digitalization, status of indicators like gender equality or rights online, and measurement of data protection, trust in media, and misinformation
Major discussion point
ROAMx Framework Overview and Purpose
Topics
Development | Legal and regulatory | Human rights
Agreed with
– Tawfik Jelassi
– Fabio Senne
– Anriette Esterhuysen
– Chris Buckridge
– Tatevik Grigoryan
Agreed on
ROAMx framework provides comprehensive assessment methodology for digital development
Fabio Senne
Speech speed
118 words per minute
Speech length
1311 words
Speech time
662 seconds
Brazil was the first country to pilot ROAMx in 2018 and has now implemented the revised second-generation indicators
Explanation
Senne describes Brazil’s pioneering role in ROAMx implementation, from being the first pilot country to now implementing the updated framework. This demonstrates Brazil’s continued commitment to the ROAMx methodology and its evolution.
Evidence
Brazil was involved in the discussion of the framework with multi-stakeholder consultations. In 2019, they launched the first assessment report at IGF Berlin. From 2023-2024, Brazil supported UNESCO in revising the indicators
Major discussion point
ROAMx Implementation and Country Experiences
Topics
Development | Legal and regulatory
Brazil shows advances in digital public services but persistent inequalities in access, especially for marginalized groups
Explanation
Senne presents a nuanced view of Brazil’s digital progress, acknowledging significant improvements in government digital services while highlighting ongoing disparities. The assessment reveals that gains are not equally distributed across different population groups.
Evidence
The platform gov.br offers 4,500 services online with over 160 million users. However, significant inequalities persist in access to digital services, especially among populations with low digital literacy, limited connectivity or disabilities
Major discussion point
ROAMx Implementation and Country Experiences
Topics
Development | Digital access | Human rights
Gender and racial disparities persist, with black women in Brazil showing lower levels of meaningful connectivity
Explanation
Senne’s analysis reveals intersectional digital inequalities in Brazil, where race and gender compound to create particularly disadvantaged groups. This finding demonstrates the importance of disaggregated data analysis in understanding digital divides.
Evidence
Black women present lower levels of meaningful connectivity over time, exacerbated by digital skill gaps and mobile-only access among this segment of the population
Major discussion point
Digital Divide and Inclusion Challenges
Topics
Human rights | Gender rights online | Development
Agreed with
– Tawfik Jelassi
– Anriette Esterhuysen
– Dorcas Muthoni
Agreed on
Persistent digital divides require targeted interventions, especially for marginalized groups
Multi-stakeholder engagement improves data quality by accessing information from civil society and private sector sources
Explanation
Senne argues that involving multiple stakeholders in the ROAMx assessment process enhances the quality and comprehensiveness of data collection. This approach reveals information that might be hidden in official documentation alone.
Evidence
Many sources of information coming from civil society and private sector are more or less hidden in official documentation. The multi-stakeholder advisory committee with CGI.br helps validate the process
Major discussion point
ROAMx Implementation and Country Experiences
Topics
Development | Legal and regulatory
Agreed with
– Tawfik Jelassi
– Anriette Esterhuysen
– Chris Buckridge
– Tatevik Grigoryan
Agreed on
Multi-stakeholder engagement is essential for effective digital governance and policy implementation
Anriette Esterhuysen
Speech speed
151 words per minute
Speech length
1397 words
Speech time
552 seconds
Fiji’s capacity building workshop revealed that government departments were unaware of their own national digital strategy despite consultation efforts
Explanation
Esterhuysen describes a significant discovery during Fiji’s ROAMx workshop: despite an eight-month consultation process, two-thirds of government departments had no knowledge of the national digital strategy. This highlights the complexity of intergovernmental collaboration and the disconnect between strategy development and implementation.
Evidence
About eight different ministries did not know about the national digital strategy, even after the developers believed they had consulted thoroughly. This showed the disconnect between strategy developers and implementers
Major discussion point
ROAMx Implementation and Country Experiences
Topics
Development | Legal and regulatory
Agreed with
– Tawfik Jelassi
– Fabio Senne
– Chris Buckridge
– Tatevik Grigoryan
Agreed on
Multi-stakeholder engagement is essential for effective digital governance and policy implementation
ROAMx framework works effectively throughout the full lifecycle of policy development, from design to monitoring and evaluation
Explanation
Esterhuysen argues that ROAMx’s utility extends far beyond periodic assessments to encompass the entire policy lifecycle. The framework can serve as a planning tool, strategy assessment tool, and monitoring/evaluation instrument, making it highly versatile for policy work.
Evidence
The ROAMx framework is suited to the full lifecycle of policy development and implementation, from design to monitoring, learning and evaluation. It works equally well as a planning tool and in assessing the design of strategies
Major discussion point
ROAMx as a Comprehensive Policy Tool
Topics
Development | Legal and regulatory
Agreed with
– Tawfik Jelassi
– Davide Storti
– Fabio Senne
– Chris Buckridge
– Tatevik Grigoryan
Agreed on
ROAMx framework provides comprehensive assessment methodology for digital development
The framework can assess policy instruments and national digital strategies, not just internet environments
Explanation
Esterhuysen discovered that ROAMx’s questions and indicators can be applied directly to evaluate policy documents and strategies, not just national internet contexts. This expands the framework’s applicability significantly beyond its original scope.
Evidence
For example, in Theme F on social, economic and cultural rights, the question ‘does the national strategy for digital development address economic, social and cultural aspects of digital rights?’ can be applied to policy instruments as easily as to national Internet context
Major discussion point
ROAMx as a Comprehensive Policy Tool
Topics
Development | Human rights | Legal and regulatory
ROAMx provides a lens that fills gaps in digital strategies, which often overlook rights, gender, and multi-stakeholder approaches
Explanation
Esterhuysen found that even well-intentioned digital strategies tend to be supply-driven and focus primarily on infrastructure while neglecting crucial elements. ROAMx serves as a complementary lens that identifies and addresses these systematic gaps.
Evidence
Fiji’s national digital strategy was supply-driven, focused on infrastructure and planning, did not focus on rights at all, overlooked multi-stakeholder approaches, treated openness narrowly, and had virtually no content on gender
Major discussion point
ROMEX as a Comprehensive Policy Tool
Topics
Human rights | Gender rights online | Development
Agreed with
– Tawfik Jelassi
– Fabio Senne
– Dorcas Muthoni
Agreed on
Persistent digital divides require targeted interventions, especially for marginalized groups
Digital literacy programs need to connect rights education and civic education, not just device usage training
Explanation
Esterhuysen argues that many digital literacy programs are inadequate because they focus only on technical skills rather than comprehensive digital citizenship. She advocates for programs that integrate rights awareness and civic education to help people understand the complexity of digital environments.
Evidence
Many digital literacy programs are vendor driven or just teach people how to use devices. They’re not linked with rights education or civic education, and don’t enable people to understand the complexity of social media environments
Major discussion point
Internet Universality and Future Digital Cooperation
Topics
Sociocultural | Online education | Human rights
Agreed with
– Tawfik Jelassi
– Guy Berger
Agreed on
Digital literacy and capacity building are fundamental requirements for meaningful digital participation
Disagreed with
– Guy Berger
Disagreed on
Terminology preference for Internet Universality vs Digital Inclusion
Dorcas Muthoni
Speech speed
149 words per minute
Speech length
1306 words
Speech time
525 seconds
Lack of sex-disaggregated data makes it difficult to assess gender digital divide and technology adoption disparities
Explanation
Muthoni identifies a critical data gap that hampers efforts to understand and address gender inequalities in digital access and adoption. Without proper disaggregated data, it becomes challenging to develop targeted interventions or measure progress in closing gender digital divides.
Evidence
When assessing gender digital divide, penetration, access, how social norms affect adoption and inclusivity, and disparities in technology adoption, there’s hardly any data available for analysis
Major discussion point
Digital Divide and Inclusion Challenges
Topics
Human rights | Gender rights online | Development
Agreed with
– Tawfik Jelassi
– Fabio Senne
– Anriette Esterhuysen
Agreed on
Persistent digital divides require targeted interventions, especially for marginalized groups
Women face challenges progressing to leadership roles in technology, creating lonely career journeys and limiting role models
Explanation
Muthoni describes systemic barriers that prevent women from advancing to leadership positions in technology sectors. This creates a cycle where the lack of female role models discourages other women from pursuing or persisting in technology careers.
Evidence
Women who succeed in technology want to give back because they have had very lonely career journeys. There’s a need to support women to reach leadership roles in technical areas, policy and decision-making roles to form role models for young generations
Major discussion point
Digital Divide and Inclusion Challenges
Topics
Human rights | Gender rights online | Economic
Chris Buckridge
Speech speed
132 words per minute
Speech length
731 words
Speech time
330 seconds
Data-driven inclusive governance requires both comprehensive data and inclusive participation to be effective
Explanation
Buckridge argues that data-driven and inclusive governance are mutually reinforcing concepts. Effective governance cannot be truly data-driven without inclusive participation, and inclusive governance cannot be practical without being grounded in comprehensive data and evidence.
Evidence
Data-driven governance cannot be comprehensive unless it’s inclusive, drawing in all aspects of the community. Inclusive governance can’t be effective unless it is data-driven and grounded in practical knowledge that a ROAMx assessment can provide
Major discussion point
ROAMx as a Comprehensive Policy Tool
Topics
Development | Legal and regulatory
Agreed with
– Tawfik Jelassi
– Davide Storti
– Fabio Senne
– Anriette Esterhuysen
– Tatevik Grigoryan
Agreed on
ROAMx framework provides comprehensive assessment methodology for digital development
ROAMx can foster sustainable multi-stakeholder engagement and complement national/regional internet governance initiatives
Explanation
Buckridge sees ROAMx as both benefiting from and contributing to multi-stakeholder governance structures. The framework can work with existing initiatives like national IGFs, or help catalyze new multi-stakeholder engagement where none exists.
Evidence
A ROAMx assessment can work with existing national/regional initiatives to provide multi-stakeholder input, or, if no pre-existing initiative exists, it can be a catalyst for developing sustainable multi-stakeholder engagement in digital governance processes
Major discussion point
Internet Universality and Future Digital Cooperation
Topics
Development | Legal and regulatory
Agreed with
– Tawfik Jelassi
– Fabio Senne
– Anriette Esterhuysen
– Tatevik Grigoryan
Agreed on
Multi-stakeholder engagement is essential for effective digital governance and policy implementation
Guy Berger
Speech speed
130 words per minute
Speech length
332 words
Speech time
152 seconds
Internet universality remains foundational for AI and digital technologies, enabling people to both access and produce content and services
Explanation
Berger argues that despite the excitement around AI and new technologies, internet universality remains crucial as the foundation that enables all other digital developments. He emphasizes that true universality means people can both consume and create digital content and services.
Evidence
We would not have AI or data in AI without connectivity. Internet universality enables people to have access not just to content and services, but to produce content and services, contributing to alternatives to big digital players and content in local languages
Major discussion point
Internet Universality and Future Digital Cooperation
Topics
Development | Infrastructure | Sociocultural
Agreed with
– Tawfik Jelassi
– Anriette Esterhuysen
Agreed on
Digital literacy and capacity building are fundamental requirements for meaningful digital participation
Disagreed with
– Anriette Esterhuysen
Disagreed on
Terminology preference for Internet Universality vs Digital Inclusion
Tatevik Grigoryan
Speech speed
117 words per minute
Speech length
1933 words
Speech time
984 seconds
ROMEX demonstrates its value through successful implementation in multiple countries including Brazil and Uzbekistan with revised indicators
Explanation
Grigoryan emphasizes that ROMEX has proven its effectiveness through practical applications across different countries. The framework has evolved with revised indicators that are being piloted in Brazil and Uzbekistan, showing its adaptability and continued relevance.
Evidence
Brazil and Uzbekistan have begun the pilot implementation of revised indicators, and the session focuses on demonstrating how ROMEX has been introduced in these countries
Major discussion point
ROMEX Implementation and Country Experiences
Topics
Development | Legal and regulatory
ROMEX serves as a framework for assessing progress on WSIS commitments and SDGs through evidence-based policy making
Explanation
Grigoryan positions ROMEX as a tool that can measure and evaluate progress toward international commitments like WSIS and Sustainable Development Goals. The framework provides evidence-based foundations for policy decisions and progress tracking.
Evidence
The session focuses on integrating the relevance of ROMEX framework in assessing the progress on WSIS commitments and the SDGs
Major discussion point
ROMEX Framework Overview and Purpose
Topics
Development | Legal and regulatory
Agreed with
– Tawfik Jelassi
– Davide Storti
– Fabio Senne
– Anriette Esterhuysen
– Chris Buckridge
Agreed on
ROMEX framework provides comprehensive assessment methodology for digital development
ROMEX capacity building workshops support multi-stakeholder advisory boards and wider stakeholder communities in policy implementation
Explanation
Grigoryan describes a new intervention approach where ROMEX assessments are followed by capacity building workshops. These workshops help stakeholders implement recommendations and use assessment findings as evidence for digital policy making and implementation.
Evidence
A capacity building workshop took place in Fiji to support the multi-stakeholder advisory board and wider stakeholder community in implementing recommendations focusing on digital policy making, policy implementation, and capacity building
Major discussion point
ROMEX as a Comprehensive Policy Tool
Topics
Development | Capacity development
Agreed with
– Tawfik Jelassi
– Fabio Senne
– Anriette Esterhuysen
– Chris Buckridge
Agreed on
Multi-stakeholder engagement is essential for effective digital governance and policy implementation
ROMEX provides a voluntary assessment approach that avoids ranking or comparison between countries
Explanation
Grigoryan emphasizes that ROMEX is designed as a supportive tool rather than a competitive assessment mechanism. Countries appreciate that the framework focuses on guidance and assistance rather than creating hierarchies or comparisons between nations.
Evidence
ROMEX doesn’t do any ranking or comparison and it’s a fully voluntary assessment aimed at guiding and helping the country, which has been appreciated by all stakeholders
Major discussion point
ROMEX Framework Overview and Purpose
Topics
Development | Legal and regulatory
ROMEX stands ready to collaborate with national and regional IGF initiatives to advance local assessments
Explanation
Grigoryan calls for collaboration between ROMEX and existing governance structures like national and regional Internet Governance Forums. This partnership approach aims to leverage existing multi-stakeholder mechanisms to implement ROMEX assessments at local levels.
Evidence
UNESCO calls on national and regional initiatives and IGFs and stands ready to work with them to advance and unroll the assessments of ROMEX at their local context
Major discussion point
Internet Universality and Future Digital Cooperation
Topics
Development | Legal and regulatory
Agreements
Agreement points
ROMEX framework provides comprehensive assessment methodology for digital development
Speakers
– Tawfik Jelassi
– Davide Storti
– Fabio Senne
– Anriette Esterhuysen
– Chris Buckridge
– Tatevik Grigoryan
Arguments
ROMEX stands for Rights, Openness, Accessibility, Multi-stakeholder participation, with X representing cross-cutting issues like sustainability and gender equality
ROMEX serves as a strategic enabler for national digital assessments and evidence-based policymaking
ROMEX translates WSIS ideals into measurable outcomes and provides common language for stakeholders
ROMEX framework works effectively throughout the full lifecycle of policy development, from design to monitoring and evaluation
Data-driven inclusive governance requires both comprehensive data and inclusive participation to be effective
ROMEX serves as a framework for assessing progress on WSIS commitments and SDGs through evidence-based policy making
Summary
All speakers agree that ROMEX provides a valuable, comprehensive framework for assessing digital development that encompasses rights, openness, accessibility, and multi-stakeholder participation while serving multiple purposes from assessment to policy planning
Topics
Development | Legal and regulatory | Human rights
Multi-stakeholder engagement is essential for effective digital governance and policy implementation
Speakers
– Tawfik Jelassi
– Fabio Senne
– Anriette Esterhuysen
– Chris Buckridge
– Tatevik Grigoryan
Arguments
The framework has been applied in over 40 countries with concrete policy outcomes
Multi-stakeholder engagement improves data quality by accessing information from civil society and private sector sources
Fiji’s capacity building workshop revealed that government departments were unaware of their own national digital strategy despite consultation efforts
ROMEX can foster sustainable multi-stakeholder engagement and complement national/regional internet governance initiatives
ROMEX capacity building workshops support multi-stakeholder advisory boards and wider stakeholder communities in policy implementation
Summary
Speakers consistently emphasize that meaningful multi-stakeholder participation is crucial for successful digital policy development and implementation, with ROMEX serving as a tool to facilitate this engagement
Topics
Development | Legal and regulatory
Persistent digital divides require targeted interventions, especially for marginalized groups
Speakers
– Tawfik Jelassi
– Fabio Senne
– Anriette Esterhuysen
– Dorcas Muthoni
Arguments
2.6 billion people remain offline globally, with only 27% of low-income country populations using the internet compared to 93% in high-income countries
Gender and racial disparities persist, with black women in Brazil showing lower levels of meaningful connectivity
ROMEX provides a lens that fills gaps in digital strategies, which often overlook rights, gender, and multi-stakeholder approaches
Lack of sex-disaggregated data makes it difficult to assess gender digital divide and technology adoption disparities
Summary
All speakers acknowledge that significant digital inequalities persist, particularly affecting women, racial minorities, and people in low-income regions, requiring evidence-based approaches to address these gaps
Topics
Development | Human rights | Gender rights online | Digital access
Digital literacy and capacity building are fundamental requirements for meaningful digital participation
Speakers
– Tawfik Jelassi
– Anriette Esterhuysen
– Guy Berger
Arguments
Digital literacy and capacity building are critical success factors alongside infrastructure and connectivity
Digital literacy programs need to connect rights education and civic education, not just device usage training
Internet universality remains foundational for AI and digital technologies, enabling people to both access and produce content and services
Summary
Speakers agree that technical access alone is insufficient and that comprehensive digital literacy, including rights awareness and civic education, is essential for people to meaningfully participate in digital society
Topics
Development | Sociocultural | Online education | Human rights
Similar viewpoints
Both speakers emphasize the critical importance of comprehensive, disaggregated data collection that includes multiple stakeholder perspectives to understand and address digital inequalities effectively
Speakers
– Fabio Senne
– Dorcas Muthoni
Arguments
Multi-stakeholder engagement improves data quality by accessing information from civil society and private sector sources
Lack of sex-disaggregated data makes it difficult to assess gender digital divide and technology adoption disparities
Topics
Development | Human rights | Gender rights online
Both speakers view ROMEX as a comprehensive tool that can support various stages of policy work while emphasizing the interconnected nature of data-driven and inclusive approaches to governance
Speakers
– Anriette Esterhuysen
– Chris Buckridge
Arguments
ROMEX framework works effectively throughout the full lifecycle of policy development, from design to monitoring and evaluation
Data-driven inclusive governance requires both comprehensive data and inclusive participation to be effective
Topics
Development | Legal and regulatory
Both speakers emphasize that internet universality and meaningful connectivity require more than just technical infrastructure – they need comprehensive capacity building to enable people to be both consumers and creators in the digital economy
Speakers
– Tawfik Jelassi
– Guy Berger
Arguments
Digital literacy and capacity building are critical success factors alongside infrastructure and connectivity
Internet universality remains foundational for AI and digital technologies, enabling people to both access and produce content and services
Topics
Development | Infrastructure | Sociocultural
Unexpected consensus
ROMEX as a policy planning and evaluation tool beyond assessment
Speakers
– Anriette Esterhuysen
– Chris Buckridge
– Tatevik Grigoryan
Arguments
The framework can assess policy instruments and national digital strategies, not just internet environments
ROMEX can foster sustainable multi-stakeholder engagement and complement national/regional internet governance initiatives
ROMEX capacity building workshops support multi-stakeholder advisory boards and wider stakeholder communities in policy implementation
Explanation
While ROMEX was originally conceived as an assessment framework, speakers discovered unexpected consensus around its utility as a comprehensive policy tool that can be used for planning, strategy evaluation, and ongoing governance processes, expanding its application beyond periodic assessments
Topics
Development | Legal and regulatory
The concept of ‘Internet universality’ may be outdated while ROMEX principles remain relevant
Speakers
– Anriette Esterhuysen
– Guy Berger
Arguments
ROMEX provides a lens that fills gaps in digital strategies, which often overlook rights, gender, and multi-stakeholder approaches
Internet universality remains foundational for AI and digital technologies, enabling people to both access and produce content and services
Explanation
There was unexpected consensus that while the term ‘Internet universality’ may be difficult for people to relate to in the AI era, the underlying ROMEX principles remain highly relevant and future-proof, suggesting a need to evolve terminology while maintaining core concepts
Topics
Development | Sociocultural
Overall assessment
Summary
The speakers demonstrated remarkably high consensus across all major aspects of ROMEX implementation and digital governance. Key areas of agreement include: the comprehensive value of the ROMEX framework for digital assessment and policy work; the critical importance of multi-stakeholder engagement; the persistence of digital divides requiring targeted interventions; and the need for holistic approaches to digital literacy and capacity building.
Consensus level
Very high consensus with no significant disagreements identified. The speakers built upon each other’s points constructively, with practical implementers (Brazil, Fiji, Kenya) validating the theoretical framework presented by UNESCO officials. This strong consensus suggests ROMEX has achieved broad acceptance among diverse stakeholders and demonstrates its practical utility across different contexts. The implications are positive for ROMEX’s continued development and adoption, as the framework appears to have successfully bridged the gap between academic theory and practical implementation needs.
Differences
Different viewpoints
Terminology preference for Internet Universality vs Digital Inclusion
Speakers
– Guy Berger
– Anriette Esterhuysen
Arguments
Internet universality remains foundational for AI and digital technologies, enabling people to both access and produce content and services
Digital literacy programs need to connect rights education and civic education, not just device usage training
Summary
Guy Berger defended the continued relevance of ‘Internet universality’ as a foundational concept, while Anriette Esterhuysen suggested that ‘digital inclusion’ is a more meaningful and relatable concept for people to understand
Topics
Development | Sociocultural
Unexpected differences
Overall assessment
Summary
The discussion showed remarkable consensus among speakers with only minor terminological preferences and approaches to implementation differing
Disagreement level
Very low level of disagreement. The speakers were largely aligned on the value and importance of the ROMEX framework, the challenges of digital divides, and the need for inclusive digital governance. The only notable disagreement was a terminological preference between ‘Internet universality’ and ‘digital inclusion,’ which does not affect the substantive policy recommendations. This high level of consensus suggests strong foundational agreement on the framework’s value and approach, which bodes well for its continued development and implementation.
Partial agreements
Takeaways
Key takeaways
ROMEX framework (Rights, Openness, Accessibility, Multi-stakeholder participation, plus cross-cutting issues) serves as an effective tool for measuring WSIS implementation and guiding evidence-based digital policymaking
The framework has demonstrated practical value across over 40 countries, with concrete policy outcomes including legislative reforms and improved data collection practices
ROMEX works throughout the full policy lifecycle – from design and planning to implementation, monitoring, and evaluation – not just as a one-time assessment tool
Digital divides persist globally with 2.6 billion people offline, and significant inequalities exist even within countries that have made digital progress, particularly affecting women, racial minorities, and rural populations
Multi-stakeholder engagement is essential for both effective policy implementation and quality data collection, but coordination challenges exist even within government departments
Data disaggregation is crucial for understanding true digital inequalities – surface-level access statistics can mask deeper connectivity and usage gaps
Internet universality remains foundational for emerging technologies like AI, requiring not just access but the ability for people to produce and create digital content and services
Digital literacy programs need comprehensive approaches that include rights education and civic engagement, not just technical device training
Resolutions and action items
UNESCO calls on governments, regulators, civil society, and stakeholders to embrace ROMEX as a strategic tool for digital transformation
Brazil will complete multi-stakeholder validation of their revised ROMEX assessment and launch the final report by September-October
Encourage national and regional Internet Governance Forums to explore using the ROMEX framework for their initiatives
Promote non-governmental implementations of ROMEX assessments to support broader stakeholder engagement
Address data gaps by turning them into policy recommendations and encouraging improved data gathering and availability
Develop collaboration between ROMEX assessments and existing national/regional initiatives to foster sustainable multi-stakeholder engagement
Unresolved issues
Lack of comprehensive sex-disaggregated data across countries makes it difficult to properly assess and address gender digital divides
Environmental impact indicators are largely overlooked in digital policies and governance frameworks
AI governance frameworks remain incomplete in many countries, with crucial aspects like transparency and rights-based safeguards still unresolved
Multi-stakeholder participation remains inconsistent across different government ministries and regulatory environments
The concept of ‘Internet Governance Forum’ is poorly understood by many stakeholders, limiting engagement in digital cooperation processes
Sustainability of ROMEX implementation and regular assessments requires ongoing commitment and resources
Digital literacy programs often remain vendor-driven or device-focused rather than comprehensive rights-based approaches
Suggested compromises
Use ‘digital inclusion’ terminology instead of ‘Internet universality’ as it is more relatable and meaningful to stakeholders
Leverage ROMEX assessments as catalysts for creating multi-stakeholder advisory committees in countries lacking existing institutions
Combine ROMEX framework with national digital strategy development to ensure comprehensive coverage of rights, openness, accessibility, and multi-stakeholder principles
Encourage both governmental and non-governmental implementations of ROMEX to broaden participation and impact
Thought provoking comments
Technology is neither good nor bad, nor is it neutral. The impact of technology is shaped by human intent, by the choices we make, the values that we want to protect, and the systems we design.
Speaker
Tawfik Jelassi
Reason
This quote from historian Melvin Kranzberg reframes the entire discussion by challenging the common assumption that technology is neutral. It emphasizes human agency and responsibility in shaping digital outcomes, which directly supports the need for frameworks like ROMEX that embed human rights and values into digital governance.
Impact
This philosophical foundation set the tone for the entire session, establishing that digital transformation requires intentional, values-based approaches rather than purely technical solutions. It provided the conceptual framework that justified all subsequent discussions about ROMEX as a tool for ensuring technology serves human development.
Even after about an eight-month period of the people developing the national digital strategy believing that they’ve consulted thoroughly, two-thirds of the government departments… did not know about the national digital strategy
Speaker
Anriette Esterhuysen
Reason
This revelation from the Fiji workshop exposed a critical gap between policy development and implementation that goes beyond technical issues to fundamental governance challenges. It highlighted how even well-intentioned consultation processes can fail dramatically.
Impact
This comment shifted the discussion from celebrating ROMEX assessments to acknowledging the complex realities of policy implementation. It led to deeper exploration of how ROMEX could serve not just as an assessment tool but as a bridge between strategy development and actual implementation, emphasizing the need for sustained multi-stakeholder engagement.
The ROMEX framework is not just suited to assessing a national internet environment. It actually works extremely well in assessing a strategy… it could work as well as a planning tool… suited to the full lifecycle of policy development and implementation from design to monitoring learning and evaluation
Speaker
Anriette Esterhuysen
Reason
This insight expanded the conceptual boundaries of ROMEX beyond its original assessment function, revealing its potential as a comprehensive policy tool. It demonstrated how frameworks can evolve beyond their initial design to serve broader purposes.
Impact
This comment fundamentally reframed how participants viewed ROMEX’s utility, moving from seeing it as a periodic assessment tool to understanding it as an integrated policy lifecycle instrument. It opened new avenues for discussion about practical applications and sparked interest from other speakers about implementation possibilities.
The concept of Internet universality was not future-proof… people found that concept of Internet universality difficult to relate to. But they did not find the principles of rights, openness, accessibility, multi-stakeholder, and the issues covered under cross-cutting difficult to relate to
Speaker
Anriette Esterhuysen
Reason
This observation challenged a core UNESCO concept while validating the ROMEX framework itself. It provided crucial feedback about how terminology and framing affect stakeholder engagement and understanding.
Impact
This comment created a moment of tension in the discussion, as it directly challenged UNESCO’s foundational concept. It prompted Guy Berger to defend the importance of ‘Internet universality’ and led to a nuanced exchange about terminology versus substance, ultimately enriching the conversation about effective communication of digital inclusion concepts.
There’s hardly any data… This is very challenging because then it means that this is one area that we are not really proactive in assessing… When you go to small businesses… you then struggle to find data that you can rely on
Speaker
Dorcas Muthoni
Reason
This comment highlighted a fundamental challenge that undermines evidence-based policymaking – the absence of disaggregated data, particularly for gender and small business impacts. It connected the technical framework discussion to real-world implementation barriers.
Impact
This intervention grounded the theoretical discussion in practical realities, leading other speakers to emphasize the importance of data disaggregation. It reinforced the value proposition of ROMEX by highlighting how it can identify and address critical data gaps that policymakers might otherwise overlook.
Data-driven inclusive governance… those two concepts are very mutually supporting of each other. Data-driven, it cannot be comprehensive unless it’s inclusive… But at the same time, inclusive governance can’t be effective, can’t be practical unless it is data-driven
Speaker
Chris Buckridge
Reason
This comment articulated a sophisticated understanding of the symbiotic relationship between evidence-based policy and participatory governance, showing how ROMEX addresses both dimensions simultaneously.
Impact
This insight helped synthesize earlier discussions about multi-stakeholder engagement and evidence-based policy, providing a theoretical framework that connected various speakers’ practical experiences. It elevated the conversation by showing how ROMEX addresses fundamental governance challenges rather than just technical assessment needs.
Overall assessment
These key comments transformed what could have been a routine presentation of ROMEX achievements into a sophisticated exploration of digital governance challenges and solutions. The discussion evolved from initial technical presentations to deeper questions about policy implementation, stakeholder engagement, and the relationship between assessment frameworks and real-world change. Anriette Esterhuysen’s insights particularly drove this evolution, challenging assumptions and expanding the conceptual scope of ROMEX’s utility. The interplay between theoretical frameworks (Jelassi’s technology neutrality quote) and practical realities (Muthoni’s data gaps, Esterhuysen’s Fiji experience) created a rich dialogue that demonstrated both the potential and limitations of current approaches to digital governance. The session successfully moved beyond advocacy for ROMEX to critical examination of how such frameworks can be more effectively integrated into the full spectrum of digital policy development and implementation.
Follow-up questions
How can non-government implementations of ROMEX assessments be conducted and what frameworks exist for this?
Speaker
Dorcas Muthoni
Explanation
She expressed interest in using ROMEX frameworks for monitoring and evaluation in her organization’s women in leadership program, indicating a need for guidance on non-governmental applications
How can National and Regional Internet Governance Forums (NRIs) integrate and utilize the ROMEX framework?
Speaker
Chris Buckridge and Anriette Esterhuysen
Explanation
Both speakers highlighted the potential for collaboration between ROMEX assessments and existing NRIs, with Anriette expressing excitement about NRIs exploring how to use the framework
How can the concept of ‘Internet Governance Forum’ be better communicated to convey its broader scope of digital cooperation and governance?
Speaker
Anriette Esterhuysen
Explanation
She noted that people found the term difficult to understand and didn’t grasp that it involves all aspects of digital cooperation, not just narrow internet governance
How can environmental sustainability indicators be better integrated into digital governance frameworks?
Speaker
Fabio Senne
Explanation
He identified that environmental issues like energy consumption, e-waste and emissions are largely overlooked in digital policies and need better integration
What strategies can address the persistent gender digital divide, particularly in leadership and decision-making roles in technology?
Speaker
Dorcas Muthoni
Explanation
She highlighted the challenge of supporting women to reach leadership positions in technology and the lack of data to assess progress in this area
How can disaggregated data collection be improved to better understand digital inequalities across different demographic groups?
Speaker
Fabio Senne and Dorcas Muthoni
Explanation
Both speakers emphasized the critical need for better disaggregated data to understand gaps in meaningful connectivity, particularly for marginalized groups like black women in Brazil
How can ROMEX be used as a planning and monitoring tool throughout the full lifecycle of policy development, not just for assessment?
Speaker
Anriette Esterhuysen
Explanation
She discovered that ROMEX could work as a planning tool and for monitoring/evaluation, suggesting this application needs further exploration and development
How can multi-stakeholder participation be made more consistent across different government ministries and regulatory environments?
Speaker
Fabio Senne
Explanation
He noted that while Brazil has good multi-stakeholder frameworks, participation remains fragmented across different government departments
How can digital literacy programs be redesigned to connect rights education, civic education, and understanding of complex digital environments?
Speaker
Anriette Esterhuysen
Explanation
She pointed out that many digital literacy programs are vendor-driven or device-focused and don’t address the broader complexity of digital citizenship
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.
Day 0 Event #59 How to Develop Trustworthy Products and Policies
Session at a glance
Summary
This discussion was a workshop session at IGF 2025 titled “How to Develop Trustworthy Products and Policies,” nicknamed “Project Manager for a Day” by Google. The session was moderated by Jim Prendergast and featured Google speakers Will Carter (AI policy expert) and Nadja Blagojevic (trust manager), who aimed to give participants insight into the role of product managers at Google and the challenges they face when launching products.
Nadja began by explaining that product managers identify problems to solve, develop vision and strategy, create roadmaps, and coordinate with teams including user experience (UX) designers and engineers. She emphasized the importance of iterative design and validation at different fidelity levels, noting that small changes in language and design can significantly impact product adoption. The speakers presented two case studies: AI Overviews, which uses generative AI to provide comprehensive responses to complex search queries with high-quality sources, and About This Image, a tool that helps users understand the context and credibility of images online, including detection of AI-generated content through SynthID watermarking.
Following the presentations, participants broke into groups to brainstorm product ideas focusing on information quality, news credibility, and privacy. The in-person groups developed concepts for flagging AI-generated or false news content in search results, while the online group, led by Hassan Al-Mahmid from Kuwait’s telecommunications authority, proposed an AI-powered system to automate domain name registration verification using document recognition and validation. All groups emphasized the need for collaboration between engineering, UX, legal teams, and subject matter experts, while considering cultural competency and building user trust. The session highlighted the complex considerations involved in product development, particularly around information quality and trustworthiness in the digital age.
Keypoints
## Major Discussion Points:
– **Product Management at Google**: Overview of how product managers identify problems, develop vision and strategy, create roadmaps, and coordinate with UX designers and engineers to deliver features that solve user needs
– **AI-powered Features and Trust**: Case studies of Google’s AI Overviews and “About This Image” feature, demonstrating how the company approaches building trustworthy AI products with quality controls, source verification, and transparency tools
– **Information Quality and News Credibility**: Multiple breakout groups focused on developing features to help users identify reliable news sources, detect AI-generated content, and provide context about information credibility through visual indicators and fact-checking partnerships
– **Domain Registration Automation**: Presentation of a real-world case study from Kuwait’s domain authority (.kw) exploring how AI tools could streamline government processes for validating commercial entity documentation and domain name registration
– **Cross-sector Collaboration Needs**: Discussion of how addressing online trust and information quality requires partnerships between private companies, government agencies, fact-checking organizations, and civil society groups
## Overall Purpose:
The discussion was designed as an interactive workshop called “Project Manager for a Day” to give participants hands-on experience with product management challenges at Google, specifically focusing on how to develop trustworthy products and policies while balancing various stakeholder needs and technical constraints.
## Overall Tone:
The tone was educational and collaborative throughout, beginning formally with structured presentations but becoming increasingly interactive and engaged during the breakout sessions. Participants showed genuine enthusiasm for tackling real-world problems, and the facilitators maintained an encouraging, supportive atmosphere while acknowledging the complexity of the challenges being discussed. The session ended on a positive note with appreciation for the collaborative dialogue between different sectors.
Speakers
– **Will Carter** – AI policy expert with extensive experience in shaping government policies and regulations on AI; currently works on leading AI policy in the knowledge and information team at Google, where he leads engagement on AI policy and regulatory standards with senior policy makers around the world; previously worked at the Center for Strategic and International Studies focusing on international technology policy issues
– **Jim Prendergast** – Works with the Galway Strategy Group; serves as moderator for the session
– **Nadja Blagojevic** – Knowledge and information trust manager at Google with over 15 years of experience in the tech industry; expert in online safety and digital literacy; based in London; has held various leadership positions at Google including leading work across Europe on family safety and content responsibility
– **Hassan Al-Mahmid** – From Kuwait, works at the Communication and Information Technology Regulatory Authority (CITRA); in charge of the .kw domain space; responsible for domain name registrations and policy making for Kuwait’s country code top-level domain
– **Audience** – Multiple audience members participated in discussions and breakout sessions
**Additional speakers:**
– **Nidhi** – Joining from India; academic doing PhD work that lies between tech and public policy in various areas of ethics
– **Abdar** – From India; works as an internet governance intern at National Internet Exchange of India, working between tech and policy
– **Oliver** – Appears to be event staff managing time and logistics (mentioned as giving time signals from the back of the room)
Full session report
# Workshop Report: “How to Develop Trustworthy Products and Policies”
## Executive Summary
This report summarizes the “Project Manager for a Day” workshop session held during IGF, titled “How to Develop Trustworthy Products and Policies.” The one-hour interactive session (9-10 AM on day zero) was designed as an educational experience led by Google representatives to give participants hands-on insight into product management challenges, particularly focusing on developing trustworthy products and policies in the digital age.
The workshop engaged both in-person and online participants in collaborative problem-solving exercises, resulting in three concrete product proposals addressing news credibility, government process automation, and information quality. The session successfully demonstrated the complexities of product development while providing practical experience in collaborative problem-solving.
## Session Structure and Participants
### Facilitators and Speakers
The session was moderated by **Jim Prendergast** from the Galway Strategy Group. The primary speakers were **Nadja Blagojevic**, Google’s Knowledge and Information Trust Manager based in London (joining remotely), and **Will Carter**, an AI policy expert from Google.
Key participants included **Hassan Al-Mahmid** from Kuwait’s Communication and Information Technology Regulatory Authority (CITRA), **Nidhi**, a PhD researcher from India working on tech and public policy ethics, and **Abdar**, an internet governance intern at the National Internet Exchange of India.
### Workshop Format
The session followed a structured approach:
1. Introductions and product management fundamentals
2. Case studies of Google’s AI-powered features
3. Collaborative breakout sessions (15-20 minutes)
4. Final presentations (2-3 minutes each)
Technical challenges with remote participation were noted, with some audio difficulties for online participants.
## Product Management Fundamentals
Nadja Blagojevic explained that product managers at Google are responsible for identifying problems to solve, developing vision and strategy, creating roadmaps, and coordinating with cross-functional teams. She emphasized the collaborative nature of product development, noting that product managers work closely with UX designers and engineers throughout the development process.
The iterative design process was highlighted as crucial, with products validated at different fidelity levels throughout development. Blagojevic noted that seemingly minor changes in language and design can significantly impact product adoption.
She distinguished between obvious improvements and less obvious innovations that solve problems users don’t realize they have, using Google Street View as an example of addressing a latent need for location visualization.
## Case Studies: Google’s AI Features
### AI Overviews
Nadja presented AI Overviews as an example of how Google approaches trustworthy AI implementation. This feature uses generative AI to provide comprehensive responses to complex search queries; the overviews appear only when they add value beyond regular search results. The feature is designed to show only information supported by high-quality results and includes safeguards against hallucination.
### About This Image
Will Carter presented “About This Image,” a tool designed to help users understand the context and credibility of images online, including detection of AI-generated content. The tool provides contextual information about image sources and authenticity.
Central to this tool is SynthID, Google’s digital watermarking technology that embeds detectable markers in AI-generated images. These watermarks remain identifiable even after alterations such as cropping or resizing. Carter noted that all images created with Google’s consumer AI tools are marked with SynthID.
## Breakout Session Outcomes
### In-Person Groups: News Credibility Solutions
The physical room was divided into two groups that focused on news credibility and information quality challenges. Their proposals included:
1. **Visual credibility indicators**: Adding flags to Google search results to indicate whether news articles are false or AI-generated
2. **News classification system**: Rating content on a spectrum from neutral to sensationalist to help users make informed decisions
The groups recognized that implementing such systems would require collaboration with cultural competency experts and appropriate legal frameworks to understand news sources across different contexts.
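As a rough illustration of the first proposal, the credibility signals discussed by the groups could be collapsed into a single visual flag attached to each search result. The sketch below is hypothetical (none of the class or field names come from the session), and it assumes upstream systems already supply the underlying signals, such as fact-checker verification or watermark detection:

```python
from dataclasses import dataclass, field
from enum import Enum

class Tone(Enum):
    NEUTRAL = "neutral"
    OPINION = "opinion"
    SENSATIONALIST = "sensationalist"

@dataclass
class CredibilityLabel:
    """Per-result credibility signals, assumed to be supplied upstream."""
    source_verified: bool   # e.g. confirmed via a fact-checking partnership
    ai_generated: bool      # e.g. a SynthID-style watermark was detected
    tone: Tone              # position on the neutral-to-sensationalist spectrum
    notes: list = field(default_factory=list)

    def flag(self) -> str:
        """Collapse the signals into one visual indicator, most urgent first."""
        if self.ai_generated:
            return "AI-generated"
        if not self.source_verified:
            return "Unverified source"
        if self.tone is Tone.SENSATIONALIST:
            return "Sensationalist"
        return "Verified"

# e.g. CredibilityLabel(True, False, Tone.NEUTRAL).flag() -> "Verified"
```

The ordering in `flag()` reflects one of the trade-offs the groups raised: when several signals apply, the interface has to decide which single indicator the user sees first.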
### Online Group: Government Process Automation
Hassan Al-Mahmid led the online group in developing a proposal for improving Kuwait’s .kw domain registration process through AI automation. Currently, the process requires manual document verification and takes 48 hours to complete. The proposed solution would use AI image recognition to validate trade licenses and match domain names to business names, potentially reducing processing time to minutes.
The system would also suggest alternative domain names when conflicts arise and could integrate with other government entities to streamline verification processes. Al-Mahmid acknowledged that implementation would require consultation with legal departments regarding confidential data handling and determining acceptable documentation standards.
The project timeline was estimated at six months, though government integration requirements might extend this timeframe.
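The name-matching step at the heart of this proposal can be sketched in a few lines. This is a hypothetical illustration, not CITRA’s actual system: it assumes OCR has already extracted the entity name from the trade licence, and the function names and similarity threshold are invented for the example:

```python
import difflib
import re

def normalize(name: str) -> str:
    """Lowercase and strip everything but letters and digits for comparison."""
    return re.sub(r"[^a-z0-9]", "", name.lower())

def check_domain_request(requested_label: str, licensed_entity_name: str,
                         threshold: float = 0.6):
    """Compare a requested .com.kw label against the entity name on the trade
    licence (as extracted by OCR). Returns (approved, suggestions)."""
    label = normalize(requested_label)
    entity = normalize(licensed_entity_name)
    similarity = difflib.SequenceMatcher(None, label, entity).ratio()
    if similarity >= threshold or label in entity:
        return True, []
    # On a conflict, don't reject outright: propose labels derived from the
    # licensed name, as described in the session.
    words = [normalize(w) for w in licensed_entity_name.split() if normalize(w)]
    suggestions = []
    if words:
        suggestions.append("".join(words))
        suggestions.append("-".join(words))
        suggestions.append(words[0])
    seen = set()  # dedupe while preserving order
    suggestions = [s for s in suggestions if not (s in seen or seen.add(s))]
    return False, suggestions

# e.g. check_domain_request("gulfcoffee", "Gulf Coffee Trading Co.") -> (True, [])
```

In the full pipeline this check would sit behind the document-fraud validation step, and the rejection branch would drive the suggestion pop-up Al-Mahmid described.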
## Key Themes and Approaches
### User Empowerment Through Transparency
Participants agreed that providing context to users represents an effective approach to information quality, rather than making unilateral content decisions. This philosophy emphasizes user empowerment through transparency, allowing individuals to make informed decisions based on comprehensive information about sources and credibility indicators.
### AI as Enhancement Tool
There was consensus on the role of AI as a tool for verification and enhancement rather than replacement of human judgment. AI was positioned as augmenting human decision-making capabilities rather than supplanting human oversight entirely.
### Multi-Stakeholder Collaboration
All speakers recognized that addressing information quality challenges requires collaboration between the public sector, private sector, academia, and civil society.
## Practical Outcomes
### Concrete Proposals
The session generated three specific product proposals:
1. **News Article Credibility System**: Visual indicators and classification systems for search results to inform users about news article reliability
2. **AI-Powered Domain Registration**: Automated system for validating commercial entity documentation in government processes
3. **Contextual Information Tools**: Systems that provide users with background information to make informed decisions about content credibility
### Commitments
Hassan Al-Mahmid agreed to present Kuwait’s domain registration AI automation project as a detailed case study. Will Carter committed to remaining available throughout IGF week for follow-up questions and discussions.
## Challenges Identified
The discussion highlighted several ongoing challenges:
– **Cultural competency**: Developing information quality systems that work across different political and cultural environments
– **Implementation complexity**: Balancing innovation with regulatory compliance, particularly in government contexts
– **Success measurement**: Establishing metrics for evaluating information quality initiatives
– **Automation oversight**: Determining appropriate balance between automated systems and human oversight
## Conclusion
The workshop successfully demonstrated the complexity of developing trustworthy products and policies while providing participants with practical experience in collaborative problem-solving. The session revealed common ground around user empowerment through transparency, multi-stakeholder collaboration, and AI as a verification enhancement tool.
The three concrete proposals developed during the workshop provide starting points for addressing information quality challenges, while the collaborative approach modeled during the session offers a framework for future multi-stakeholder engagement in digital governance challenges.
Session transcript
Jim Prendergast: Thanks for your patience as we kick off the IGF 2020. It’s always a challenge with day zero, 9 a.m., for everybody to find the room, find their way around the venue, get through security, and as you see, get rid of some of the tech gremlins that we have sometimes. My name’s Jim Prendergast. I’m with the Galway Strategy Group. I’m gonna sort of moderate this session for you. Officially, it’s titled How to Develop Trustworthy Products and Policies. But the folks at Google sort of have an internal nickname for it. It’s called Project Manager for a Day. So what we essentially wanna do is give you an overview of what it’s like to be a product manager at Google. How do you balance all the different challenges when it comes to launching a product into the marketplace? All the different factors that these folks have to take into consideration before you actually see a product and some of the different feedback cycles that it goes through and some of the challenges that, frankly, you face on a day-to-day basis. What I’m gonna do is I’m gonna introduce our two speakers. We have one speaker here in person and then one speaker online. And then they’re gonna give a quick overview, some case studies to sort of show you what they deal with on a regular basis. And they’ll discuss some of the different considerations that do go into the product development. And then next what we’ll do is we’re gonna do essentially two breakout groups. One will be the in-person participation, folks here in the room. Will’s gonna work with you through some tabletop exercises for about 20, 25 minutes. And then Nadja’s gonna, fingers crossed, work with the online participants to accomplish the same. From a technical standpoint, I think the easiest way to not hear the people talking to each other online is for all of us to take our headsets off. That seems to be the shortest way to solve that tech issue with the online and the offline participants
during remote participation, which of course is an important aspect of the IGF. So let me get going here and do some introductions. First, we have Will Carter. Will’s an AI policy expert with extensive experience shaping government policies and regulations on AI, working with product teams to develop and deploy AI responsibly in real-world applications. He currently works on leading AI policy in the knowledge and information team at Google, where he leads engagement on AI policy and regulatory standards with senior policy makers around the world. He’s advised senior leadership and C-suite executives on AI policy strategy and implementation, and developed and implemented AI policies and governance across the company. Prior to joining Google, he was with the Center for Strategic and International Studies, where he focused his research on international technology policy issues, including emerging technologies and artificial intelligence. So if you’ve got a question about AI, this is your guy. Joining us remotely is Nadja Blagojevic. She’s based in London. She is a knowledge and information trust manager at Google with over 15 years of experience in the tech industry. She’s an expert in online safety and digital literacy, and she’s held various leadership positions at Google, including leading work across Europe on family safety and content responsibility. So with that, what I’m gonna do is throw it over to Nadia to kick us off with the case studies to help set the stage for us. Nadia?
Nadja Blagojevic: Great, thanks very much, Jim, and thank you very much, everyone, for being with us here this morning. So without further ado, we will jump right in. I’m very excited to be talking with you all about what product managers do at a company like Google, and as with most jobs, there’s no one right way to do it. If you ask a hundred people, you’ll probably get a hundred different answers, but there are some common elements that we will talk about today. So you can think of a product manager as the person who’s responsible for figuring out at its core what the problem is that needs to be solved. Sometimes it’s very easy to identify what a problem is. For example, once word processors were built, it was fairly obvious that a spell checker would be an improvement. But some things can be less obvious. For example, with Google Street View, when we first launched, it wasn’t clear how valuable seeing a location before a drive, a trip, or a contemplated move could be. This feature was a less obvious addition to an online map, and it solved a problem that most people didn’t even realize that they had. So the PM focuses on identifying that problem and then building out a vision, a strategy, and a roadmap. The vision should really be informed by the problem that you’re trying to solve. It should be a stable, long-term, high-level overview of what that problem is and really how you’re going to tackle it. The strategy helps you navigate and leverage the technology and the ecosystem factors that will be playing out over the lifetime of your product. Your strategy should be relatively stable, and your roadmap is really thinking about how you sequence what you’re going to do to build your specific feature and move towards your vision. Your roadmap usually changes pretty frequently. In consumer tech, if you build a roadmap and it’s accurate for a year, you’re very lucky.
PMs partner really closely to coordinate teams, data, users, sales, and marketing, and to deliver the right features at all the right times in the product development lifecycle. And we really try to make sure that we are also the ultimate champions of our products, both inside the company and externally. And the goal is really to… to make sure that we’re building something of value so that our broader teams and stakeholders can evangelize what we build as well. As product managers, we work really closely with our colleagues in user experience, which is sometimes abbreviated as UX, to iteratively design and validate what we’re building at progressively higher levels of fidelity. It’s very expensive to change something that’s fully developed, but it’s very inexpensive to put a wireframe or a rough sketch of a product in front of someone that we want to use the product and ask questions like, would you use this? What will you use it for? What doesn’t make sense? What’s missing? It can be really amazing how these small changes in language and wording, and the insights they surface, can lead to huge impacts in adoption. And lastly, but certainly not least, our engineer counterparts. Engineers build and maintain products. They make them work reliably and quickly for users. And both UX and Eng are included when we do our roadmapping and strategy setting. We build better plans and roadmaps when we have all three functions working together from the get-go to sort of build out that roadmap and set the strategy and vision. So as Jim mentioned, we’ll go through a couple of quick case studies to give you a sense of how we approach product development, walking through a couple of features that we’ve developed here at Google. So talking now about AI overviews. Not yet. Could I just interrupt real quick? To the guys in the back, can we display the slides in the Zoom and on the screen? Is that possible? There we go. Great, thanks very much. And if we could just advance to the next slide, please.
We’ll just go right into our AI overviews case study. Great. So building on our years of innovation and leadership and search, AI overviews are part of Google’s approach to provide helpful responses to queries from people around the world. They use generative AI to provide key information about a topic or a question. And they were really designed to show up on queries where they can add additional benefit beyond what people might already be getting from search, where we have high confidence in the overall quality of the responses. So for example, if you look on the query to the right of the screen, you can see that AI overviews let you ask more complex questions. This query is asking for help on how to stand out on a first time apartment application. And you can see you get a really nuanced answer. You get corroborating links here and additional resources to dive in and learn more. And you get that kind of information and extra help in a very digestible way. You can see here the user experience elements and the design with the bullet points, for example, or the placement of the links in this response. And on the next slide, talking a little bit about that sort of bar of high quality. For AI overviews, we’ve designed it to only show information that’s supported by high quality results from across the web, meaning that generally AI overviews don’t hallucinate in the ways that other LLM experiences might. We think this is especially, this is important kind of across the board, but also especially important for queries that might be particularly sensitive for a given reason. And for these kinds of queries, whether they’re about something maybe health-related or finance-related or seeking certain types of advice, we have an even higher quality bar for showing information from reliable sources. 
We also have built into the product that for these queries, AI overviews will inform people when it’s important to seek out expert advice or to verify the information that’s being presented. And then finally here, we also have a set of links and a display panel here on the right-hand side with more additional resources for relevant web pages right within the text of the AI overviews. And we’ve seen really positive results showing these links to supporting pages directly within AI overviews is driving higher traffic to publisher sites. And because of AI overviews, we’re seeing that people are asking longer questions, diving more deeply into complex subjects, and uncovering new perspectives, which means more opportunities for people to discover content from publishers, from businesses, and from creators. I’ll hand over now to Will to talk about About This Image.
Will Carter: Thanks, Nadia, and thank you all for coming today. I’m going to talk a little bit about another feature that we launched in 2023 called About This Image. Google Search has built-in tools that really are designed to help users find high-quality information, but also to make sense of the information that they’re interacting with online. And About This Image and SynthID are designed to help users understand the context and the credibility of images they’re interacting with online, including understanding if those images have been generated by Google’s AI tools. So with Google Image Search results, you can click on the three dots above the image, and that will show you the image’s history, which includes other sites that accurately describe the original context and origin of the image. And it allows you to really understand the evidence and perspectives across a variety of sources related to the image. And finally it allows you to see the image’s metadata. So increasingly, publishers, content creators, and others are adding metadata, tags that provide additional information and context about an image that can provide a variety of information including whether or not it’s been generated, enhanced, or manipulated by AI. Which is increasingly important to understand as powerful image generation and image alteration engines are widely available. So one of the key ways that we do this is using a tool called SynthID. Which is a tool for watermarking and identifying AI generated content. Basically what this does is it embeds a digital watermark directly into the pixels of an image generated by Google’s AI image generation tools.
That’s important because even when the image has been altered, for example by cropping it or screenshotting it, or resizing or recoloring or flipping the image, those watermarks can still be detected, making it more robust to adversarial behavior. And all images made with Google’s consumer AI tools are marked with SynthID. And that means that if you encounter an image through Google search, that is generated by a Google AI tool, you will be able to see that in About This Image. So this last GIF here shows how we’ve recently integrated About This Image into one of our other products, Circle to Search. So Circle to Search allows you to select something on the screen and access additional information about it. In this case, you can circle an image and get About This Image information to get context about images that you interact with online, which can be a really powerful way, again, to really understand that context and make sure that the image that you’re interacting with is being used in the way that was intended with appropriate context and accurately. So I’ll pass back to Jim for our activity.
Jim Prendergast: Yeah, sure. So thanks, Will. So, you know, sort of just give you a high level of all the different things that product managers have to consider working with their teams, the privacy rights, some of the metadata you talked about with the image. So what we’re gonna do now, I realize it’s early, hopefully you’ve all had your coffee and are ready to be a little interactive, is we’re gonna break out into two, maybe three breakout groups. I’d figure two in the physical room and one in the online room, just based upon how many folks we have. And what we’re gonna do is we’re gonna ask you to think a little bit for about 15 minutes or so, come up with some ideas. There’ll be some instructions on the next slide that Will’s gonna walk you through. And then what we’ll do is we’ll come back and share some ideas and thoughts for the final 15 minutes or so. So Will, why don’t you show them what they’re working with?
Will Carter: All right. So basically we’re going to have you break out into groups and nominate one PM. That’s going to be the person who’s kind of leading and presenting on behalf of your group. You pick an area of focus and we have a couple of options for you, but you’re welcome to pick something else. if you prefer, but info quality, news and privacy are some of the areas that we are actively working on every day. So the idea is, come up with an idea. Come up with a feature that you think we could add to Google search to address one of these issues. Or make up your own product. Then you’ll pitch your ideas to your VPs, that’s us, and argue for resources based on what you need in order to make this real. What you think the return on investment that you could generate from this product. And that doesn’t necessarily just mean how do you make money from it, but also how do you add value for the user, address a specific problem that our users are encountering in the way that they engage with our products. And don’t forget about the various things that you’re going to need to make this a reality. So that’s that UXR and support that Nadia was talking about earlier. But also, what is your go to market strategy? What are your success metrics? What is a realistic timeline or roadmap? You’ll have about 15 or 20 minutes to do this activity and we’ll be, Nadia and I will be engaging with your groups to help you work through this exercise. So good luck and maybe, what do you think? We can, yep. Okay, maybe we can divide right about here. So in the red, right there. You to this side, everyone else to that side. We can have our two groups in the room.
Jim Prendergast: All right, and Will’s gonna come down and prime the creative engine for everybody. And then Nadia’s got the online folks as well. So we’ll come back in 15 minutes and share experiences. And I know there was a question that we had in the chat room and we’ll answer that when we come back from the breakout as well. Thanks.
Nadja Blagojevic: Great, and so for everyone online, could you please try coming off mute and saying good morning?
Hassan Al-Mahmid: Thank you. Hello and good morning, everyone. Hello. Basically, we are in Norway right now, but we arrived early in the morning, we couldn’t attend the session.
Nadja Blagojevic: Ah, I see.
Hassan Al-Mahmid: And then we attend afternoon sessions in person. We’re from Kuwait, we’re from the Communication and Information Technology Regulatory Authority, etc. My name is Hassan Al-Mahmid, and I’m in charge of the cctlz.kw.
Nadja Blagojevic: Wonderful, it’s wonderful to have you with us. Are others able to come off of mute?
Audience: Hi Nadia, can you hear me? Yes, I can hear you. Hi, this is Nidhi, and I’m joining in from India, so hello. I am an academic, and I’m doing my PhD, which lies somewhere between tech and public policy and various areas of ethics, so I’m very happy to be here. Good to see you.
Nadja Blagojevic: Wonderful, great to see you as well. All right, wonderful, it’s good to know that everyone’s able to come off mute. At this point, I’d like to ask everyone to please unmute yourself, because for the next few minutes we’ll be having a group discussion. Which I will not be leading, that will fall to you all. So as Will and Jim mentioned, for this next session in the breakout, we will be, rather you will be, brainstorming an idea as product managers. And it can be related to Google search, it can be related to another Google product, or just any technology idea that you think solves a problem. Can everyone please come off mute?
Audience: Yeah, please just confirm if you can hear me. Yes. Yeah, I’m Abdar. I’m from India. So I’m working as an internet governance intern at National Internet Exchange of India. So I work somewhere in between tech and policy.
Nadja Blagojevic: Wonderful. Yeah. And I’ll pose the question to the group. When you think about a product that you would like to build or a problem that you would like to solve, what springs to mind? And this is open to the entire group, please.
Hassan Al-Mahmid: Well, I do really have a lot of real case scenarios and like some projects undergoing right now. I can share some information with you and maybe if you guys are interested to help us develop the appropriate policies or get insights from you for the upcoming products in the .kw domain space. If you’re interested, I can pitch the idea for you guys and move with it. Or otherwise, I’m really open to work with the other team, other team members on other ideas. And then it’s all going to benefit us all on the way of how we’re going to think of building the policies and what aspects we need to consider when making strong and cohesive policies.
Nadja Blagojevic: Great. Other thoughts from the group?
Audience: I think if I heard Hassan correctly, that he has an idea and probably would like to share that with us and we can. sort of stitch that together, is that correct?
Hassan Al-Mahmid: Yes, that’s correct. I do have like some ideas from our day job, you know, I can share with you. For example, since we are in charge of the .kw domain space, we are thinking of implementing AI tools to help us make the registration process for domain names in Kuwait a faster and easier process. With the benefit of AI, we can process the domain request almost immediately without waiting for someone to look up the documents and make all the checks. So just I’ll give you a brief of how the domain space works in Kuwait. We do have two zones to register. For example, if you would like to register name.com.kw, since we have the extension .com.kw, it represents a commercial entity in Kuwait. So there is a set of requirements for that entity to register, such as having a valid trade license in Kuwait, and they have to have a representative in Kuwait, someone who is either a Kuwaiti citizen or someone with a work permit in Kuwait. So these kinds of documents are right now manually uploaded through the portal. And then it has to be checked by a person to validate all the information and make sure that the domain registration request is valid. But we are thinking of implementing right now AI tools and some sort of integration between the government entities. So to make the process seamless, and we can have the domain up and running within minutes instead of, for example, 48 hours right now. Great. And when you think about building out this AI tool, what kind of resources do you think you would need to be able to develop it? And this is sort of a question for the group. I can give them a hint, basically. Yeah. The process is gonna be similar somehow, like the client who would like to register a domain name, they will need at the moment to upload their trade license. Okay? Once this is uploaded, we can use an image recognition tool to validate the document and make sure it’s not a fraudulent document.
One of the regulations and policies we have in Kuwait is that when a domain name is registered for a commercial entity, it has to match the name of the entity on the commercial trade license. So, with that image recognition or text recognition tool, we can match the requested domain name with the name on the trade license. And if it finds a conflict, it shouldn’t just reject the request; it should pop up some suggestions for the client to pick names from. That’s one example.
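The check-and-suggest flow Hassan describes, matching the requested domain label against the trade-license name and offering compliant alternatives on a mismatch, could be sketched roughly as follows. This is an illustrative sketch only: the function names, the normalization rules, and the 0.8 similarity threshold are all assumptions, and a real registry pipeline would first extract the license name via OCR from the uploaded document.

```python
import difflib
import re

def normalize(name: str) -> str:
    """Lowercase, drop common legal suffixes and punctuation so that
    'Al-Noor Trading Co.' and 'alnoor-trading' compare as equal."""
    name = re.sub(r"\b(co|company|llc|wll|ltd)\b", "", name.lower())
    return re.sub(r"[^a-z0-9]", "", name)

def check_domain_request(requested: str, license_name: str,
                         threshold: float = 0.8) -> tuple[bool, list[str]]:
    """Approve when the requested .com.kw label closely matches the
    trade-license name; otherwise return compliant suggestions
    instead of a bare rejection, as described in the session."""
    label = requested.removesuffix(".com.kw")
    score = difflib.SequenceMatcher(
        None, normalize(label), normalize(license_name)).ratio()
    if score >= threshold:
        return True, []
    # Derive suggestions from the license name itself.
    base = re.sub(r"[^a-z0-9]+", "-", license_name.lower()).strip("-")
    return False, [f"{base}.com.kw", f"{base.replace('-', '')}.com.kw"]
```

For a matching pair such as `check_domain_request("alnoor-trading.com.kw", "Al-Noor Trading Co.")`, the sketch approves immediately; for a mismatch, it returns suggestions derived from the license name rather than rejecting outright.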
Nadja Blagojevic: And what kinds of sort of internal partnerships, which departments do you think, whether that’s UXR or engineering, would you need to work with legal departments? Who would you need to work with to be able to have the tool be able to do what you’ve just described?
Hassan Al-Mahmid: Well, at our department it’s basically a one-man show. We set the policies, and we have control of the technical aspects of the whole registration process. But we do seek help from the legal department, that’s for sure, because we have to set some sort of guideline for uploading these documents. We need to check with the legal department what kinds of documents we should accept, how to handle sensitive and confidential information, what level of confidentiality applies to the uploaded documents, how they should be handled, and whether we can share them with third parties or not.
Nadja Blagojevic: Yes, great. I mean, data privacy and data security seem like they’d be very essential for the product development process. When you think about timeline, do you have an estimated time frame for how long something like this might take to develop?
Hassan Al-Mahmid: The beauty of these sorts of tools is that there are a lot of off-the-shelf solutions ready to be picked up and integrated. So we are expecting around six months, to be honest; this is the time frame to have it done on the technical side. But since we are working with governmental entities here, and we may need some governmental integration, you know how it is with government: the time might extend to more than six months. Six months is the optimistic estimate.
Nadja Blagojevic: I like that very much. We always encourage optimism. Even though government work has a reputation for taking a lot of time, we always push for more efficiency and faster timelines.
Nadja Blagojevic: All right. I think this is great. I think maybe we have a hand raised.
Audience: Yeah, so I had an opinion on that.
Nadja Blagojevic: Sure, go ahead.
Audience: Yeah, so basically what Hassan is saying, if I’m understanding it right, is that there needs to be capacity building: making the public servants familiar with this and integrating this AI into their framework. Is that right, what I’m understanding?
Hassan Al-Mahmid: Yeah, that’s correct. Yes.
Audience: You’ll have to train the public servants on how to use these tools. Basically, there needs to be a capacity building.
Hassan Al-Mahmid: Yeah, there has to be some sort of training on how to use these tools. Yeah, that’s absolutely correct.
Nadja Blagojevic: Hey, everybody. Does anyone on the call have ideas about what we should ask our vice presidents for in terms of resources to develop this kind of capacity building?
Audience: We should tell them to be patient. I agree with that. The process takes time and you’ll have to be patient. Hassan, if you are looking into global case studies, you can look into Argentina. They have a similar program to this.
Hassan Al-Mahmid: Thank you for the insight. We have a couple of success stories in the region, mostly in the United Arab Emirates. They have implemented some AI tools, and I believe Qatar also has that sort of tool. We are in talks with them at the moment to benefit from their experience. Since we are the GCC countries in the Middle East, the Gulf countries, we share almost the same policies and we also have the same structure for domain names. So it’s much easier to draw on the experience of these countries, which are more advanced, and they have been very helpful. But we will definitely look into Argentina, and we have also looked into Australia. They have really great content on domain names, very beneficial.
Nadja Blagojevic: I think we’ll be rejoining the group in about two minutes and so when we go back into the main group, Hassan, would you like to present as the product manager?
Hassan Al-Mahmid: Yeah, definitely, I would love to.
Nadja Blagojevic: Hassan is our representative.
Nadja Blagojevic: Any final thoughts from anyone else on the call or questions or points that we think should be made as Hassan pitches this idea?
Audience: You should communicate what you’re doing to the public, because since it’s the public sector, you’ll have to communicate with them, even the failures as well. You know, to build trust.
Nadja Blagojevic: All right, Akhtar, do you have suggestions of how to do that?
Audience: No, as you’re doing, you can just give out small press briefings and something like that, even on your website.
Hassan Al-Mahmid: Yeah, definitely. We usually put out press releases and briefs whenever we enable new features in the .kw namespace. For example, last year in September, when we released the roadmap for registering second-level domain names (that means yourname.kw directly, without .com or .org), we published the roadmap for how these domain names are going to be registered and the phases in which they are going to be released. Basically, yeah, we do regular press releases whenever we have new features, and this is one of the best ways to communicate with the public, aside from social media.
Audience: Because they’re the ultimate users, you’ll also need their interaction and their feedback. If there’s no interaction, you’ll not get proper feedback.
Hassan Al-Mahmid: Yeah, and one thing that came to my mind: we are in the process of releasing a dispute resolution policy for domain names in Kuwait, a national dispute resolution policy. When we released that policy, we sought public consultation. We put the brief on the website and gave participants around 60 days to take part and give their input on the policy and on what had to be changed or improved. And we received really good feedback from the public.
Audience: That’s really nice to hear. And 60 days is a good time frame.
Hassan Al-Mahmid: Yes, and this is the approach we’re taking at CITRA in Kuwait. CITRA is basically the TRA, the regulatory authority for information and communication technology. So right now, whenever we release a new policy, we put it out for public consultation to get feedback. Then we analyze the feedback and improve, and then we release the final version.
Audience: Good to hear.
Nadja Blagojevic: Great, so it sounds like we will be rejoining the main group in just a second. And so Hassan will be our representative presenting the product idea. And we’ll also hear from the other two groups that have been workshopping their product ideas in person at IGF.
Audience: Hassan, make us proud.
Jim Prendergast: I hate to break up the creative process, especially at this hour, since it’s going so well. But we do need to come back, because they are going to throw us out at 10 o’clock, as I promised all of you.
Audience: We’re only like 10 minutes away from the forum, by the way.
Hassan Al-Mahmid: Well, it’s now raining. And then after this session, yeah, we will join you guys on the floor, inshallah.
Jim Prendergast: Okay. Hello, everybody.
Nadja Blagojevic: Great chance to meet you all in person.
Jim Prendergast: Can you all hear us in the online world?
Audience: I am not from India, so I’m not lucky.
Jim Prendergast: Okay. Nadia, can you hear us from where you are?
Nadja Blagojevic: Yes.
Jim Prendergast: Okay, great. Well, I was listening to all three groups, and I was impressed that the creative juices got flowing at this hour in particular, with all the jet lag and everything else. So congratulations to everybody who partook. Will, do you want to share some insights? Actually, let me ask you the question that came in before the break while the other groups get organized and prepare to read out to us: how do you scrape high-quality content, and what are the parameters of what you call high quality? And while Will is answering that, each group spokesperson, get ready to give us a two- to three-minute readout from your deliberations. Thanks.
Will Carter: I wish there were a simple answer to this question. This is something that we struggle with every day, and it remains an area of significant innovation and investment for Google. There are a few approaches that we are taking currently, and like I said, they continue to evolve all the time as we try to figure out how to do this better and better. One way is to work with fact-checking organizations around the world that can validate information for us and do additional research; those partnerships are really key. Another way is to identify news sources that consistently provide high-quality information, that are independent, and that are generally reliable and validated by fact-checkers. But really, at the end of the day, I think the most important thing we do is provide as much context as we can to our users about where the information they’re interacting with came from. That means providing additional links, counter-arguments, and access to metadata and additional information, because there is no one…
Jim Prendergast: We’ll take this group first, then the online group, and then the group to the right. So did you nominate a spokesperson? OK, great. There should be a mobile microphone, right? I put it on the table. There you go.
Audience: Can you hear me? OK, great. You said two minutes? OK. In our group, we discussed a feature that would be added to Google search results that include news articles. The goal of the feature is to give users information about the validity of a news article: some kind of flag or visual signal to show them whether they’re looking at something trustworthy. We specifically talked about identifying news that is known to be false or known to be generated by AI. If we are able to determine that, we would add a flag to show users that they are looking at something AI-generated. They could still view it, but it would be a visual cue. We discussed some of the ways to generate this information, using credible fact-checking organizations based in the country or location where the information is being reviewed. We also talked about some of the resources needed to do this. Of course, you need an engineering and UX team, but we also talked about cultural competency: having a group of experts who know the news sources and the cultural dialogue in different contexts, as well as the legal framework. And on the ROI of this feature, we talked about why a company like Google should incorporate it: it would increase trust in the product and give users insight into the information they’re looking at, which is something they’re seeking and a unique value that would bring them to Google Search as opposed to other search engines. Generally, increasing trust in the product and making users better able to rely on the information they’re getting would encourage usage. As for the expected roadmap, we didn’t really get that far, but this is the idea we came up with.
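The flag the group describes could be modeled as a small mapping from upstream signals (fact-check verdicts, AI-generation detection such as watermark metadata) to a visual cue attached to the result. All names, verdict strings, and labels below are hypothetical, since the group did not settle on a concrete design:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ArticleSignals:
    """Signals a pipeline might attach to a news result (illustrative names)."""
    factcheck_verdict: Optional[str]  # e.g. "false", "disputed", or None
    ai_generated: bool                # e.g. from watermark/metadata detection

def result_flag(signals: ArticleSignals) -> Optional[str]:
    """Map signals to the visual cue shown next to a search result.
    The article stays viewable; the flag only adds context."""
    if signals.factcheck_verdict == "false":
        return "Fact-checked: false"
    if signals.ai_generated:
        return "AI-generated content"
    if signals.factcheck_verdict == "disputed":
        return "Disputed by fact-checkers"
    return None  # no flag: no negative signal available
```

The ordering encodes one possible design choice: a confirmed fact-check verdict outranks an AI-generation signal, since AI-generated content is not necessarily false.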
Jim Prendergast: Great. You covered a lot of territory in a short period of time, especially with a cold start, so I appreciate that. Now, I’m not sure who was nominated to represent the online participants, but we will unmute you if you try to talk. Nadia, do you recall who was your spokesperson?
Nadja Blagojevic: Yes, that would be Hassan.
Jim Prendergast: Hassan, are you able to come off mute?
Hassan Al-Mahmid: Hello and good morning, everyone.
Jim Prendergast: Good morning.
Hassan Al-Mahmid: My name is Hassan Al-Mahmoud. I’m from CITRA in Kuwait, which is basically the TRA for the country. I represent the .kw domain space in Kuwait; I’m in charge of domain name registrations and policy making. With my colleagues in the online session, we discussed a feature that would be added to .kw domain name registrations. In the current process, we have two zones: a restricted zone for registration and an unrestricted zone. What we mean by restricted is third-level domain names such as yourname.com.kw, which represent a commercial entity. So, in order to register a domain under .com.kw, you have to fulfill some requirements: you have to be an official commercial entity in Kuwait with a valid trade license, and the domain has to be registered by someone who is actually based in Kuwait, either a Kuwaiti citizen or someone with a work permit. The process right now is semi-manual, we would say, because whoever needs to register a domain name has to upload documents such as the trade license and their civil ID. These are checked manually by one of the employees of the .kw domain space, and then we can grant the domain name registration. But we are looking into solutions that might make the process much faster and easier. We are thinking of implementing AI tools to do this sort of scrubbing and checking, because one condition, if you’d like to register a .com.kw domain name, is that the domain you select has to match your trade license or your trademark license. So, instead of doing that checking manually, we can have some sort of scrubbing that will check the name on the license or the trademark and then process the request almost immediately.
And in case, for example, whoever is registering under .com.kw selects a name that doesn’t match the trademark or the license, the AI tool would give them suggestions for appropriate domain names that can be registered.
Jim Prendergast: Great. Thanks, Hassan. We are short on time; I’m getting the clock-ticking-down sign from Oliver in the back. So, real briefly, to our folks in the room on the right.
Audience: Yeah, I’ll be very brief, seeing as we’re building on the product that was mentioned earlier, but for public news classification. To what Will was saying about creating an informed audience: right now, when you go on Google Search and a news article comes up, you have three dots that provide you some context about the news outlet. This feature isn’t currently in the news aggregator tab when you go to Google News. So we’d like to build on that to have a classification where, based on a little spectrum from neutral content to sensationalist content, we would give users the information they need to make an informed decision about what they think is credible and trustworthy, which is really hard to define internally and externally. Again, building on the other team, we would work with UX and engineering, but also leverage subject matter expertise at Google, especially the Google News Initiative team and Google News itself, to ensure they’re helping us build a framework that can then be taken to product. In terms of ROI, of course we want to drive user engagement: by providing additional context and other links within the Google ecosystem, users can keep engaging with the content that Google provides. But at the end of the day, it’s also about providing more context and building information quality online, subject to the users’ own understanding of what quality looks like in different political contexts. So yeah, I think we’re all interested in news credibility.
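One crude way to picture the proposed neutral-to-sensationalist spectrum is a heuristic score over headline features. As the speaker notes, credibility is hard to define, so a real system would rely on a trained classifier and regional subject-matter expertise; the marker list, weights, and thresholds below are invented purely for illustration.

```python
import re

# Purely illustrative clickbait markers; a production system would use a
# trained classifier reviewed by regional news experts, not a word list.
SENSATIONAL_MARKERS = ["shocking", "you won't believe", "destroyed", "slams", "miracle"]

def sensationalism_score(headline: str) -> float:
    """Score a headline on a 0.0 (neutral) to 1.0 (sensationalist) spectrum."""
    h = headline.lower()
    score = 0.3 * sum(m in h for m in SENSATIONAL_MARKERS)  # clickbait phrases
    score += 0.2 * headline.count("!")                       # exclamation marks
    caps = [w for w in re.findall(r"[A-Za-z]{3,}", headline) if w.isupper()]
    score += 0.2 * len(caps)                                 # ALL-CAPS words
    return min(score, 1.0)

def spectrum_label(headline: str) -> str:
    """Bucket the score into the labels a UI might display."""
    s = sensationalism_score(headline)
    return "sensationalist" if s >= 0.5 else "leans sensational" if s >= 0.2 else "neutral"
```

For example, `spectrum_label("Parliament passes budget bill")` stays neutral, while a headline packed with caps, exclamation marks, and clickbait phrasing lands at the sensationalist end.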
Jim Prendergast: Yeah, no, that is definitely a common theme. And this being the beginning of the IGF, I’m sure that’s a theme that will carry on for the next several days. Well, I’m impressed. I mean, some really good ideas, some really good thoughts.
Will Carter: Definitely.
Jim Prendergast: Do you want to react? And maybe between you and Nadia, can you close us up in the next 90 seconds or so?
Will Carter: Sure, I’ll keep it brief and then kick it over to Nadia. I think there’s a reason these issues are top of mind. These are things we’re all struggling with on a day-to-day basis, whether it’s companies like Google trying to solve these problems or users on the web trying to make sense of all the information inundating us every day and to understand what is and isn’t credible. You’ve come up with some really great ideas, and I think this gives you a sense of how, when you think of a problem you interact with every day, you actually start to translate it into a product vision, identify your needs, and turn it into something that can work and solve that problem day to day. This is what we do at Google; this is exactly what our workday looks like. So I’m really excited to have you all participate in this process. Nadia?
Nadja Blagojevic: Yes, I fully agree with Will. It is wonderful to be with you and hear everyone’s ideas, and these are all topics we care very deeply about internally at Google. We’re very grateful for the opportunity to be here and be in dialogue with you all: to hear your points of view, to learn from you, and to share what we’re doing, not only in terms of how we think about product development and design and how we’ve approached some of these issues within our own suite of products, but also to exchange philosophies. Ultimately, these topics will need robust collaboration between the public sector, the private sector, academia, and civil society. So thank you very much for being with us right from the very beginning of day zero, and we very much hope you enjoy the rest of your IGF.
Jim Prendergast: Great. Thanks, Nadia. And speaking of collaboration, I’m getting the hook from Oliver in the back of the room. So thanks, everybody, for participating both online and in person. Joel will be here for the rest of the week. So if you have any questions, track him down. That’s how these IGFs work if you’ve never been. So thanks, everybody, and have a great meeting. Bye-bye.
Nadja Blagojevic
Speech speed: 150 words per minute
Speech length: 1,781 words
Speech time: 709 seconds
Product managers identify problems to solve, build vision/strategy/roadmap, and coordinate teams to deliver features
Explanation
Product managers are responsible for figuring out what problems need to be solved, which can range from obvious improvements like spell checkers to less obvious features like Google Street View. They focus on building a stable long-term vision, strategy to navigate technology factors, and roadmaps that sequence feature development.
Evidence
Examples provided include spell checker as an obvious improvement to word processors, and Google Street View as a less obvious feature that solved problems people didn’t realize they had
Major discussion point
Product Management at Google
Topics
Digital business models
Product managers work closely with UX teams to iteratively design and validate products at different fidelity levels
Explanation
Product managers collaborate with user experience teams to design and validate products progressively, starting with wireframes and rough sketches before full development. This approach is cost-effective since it’s expensive to change fully developed products but inexpensive to test early concepts with users.
Evidence
Mentioned that small changes in language, wording, and insights from early testing can lead to huge impacts in adoption
Major discussion point
Product Management at Google
Topics
Digital business models
Agreed with
– Jim Prendergast
– Audience
Agreed on
Product development requires cross-functional collaboration and user-centered design
Product managers collaborate with engineers who build and maintain products, with all three functions working together from the beginning
Explanation
Engineers are responsible for building and maintaining products to work reliably and quickly for users. Both UX and engineering teams are included in roadmapping and strategy setting from the start, as better plans emerge when all three functions collaborate from the beginning.
Major discussion point
Product Management at Google
Topics
Digital business models
AI overviews use generative AI to provide key information and show up on queries where they add benefit beyond regular search results
Explanation
AI overviews are part of Google’s approach to provide helpful responses using generative AI, designed to appear on queries where they can add additional benefit beyond standard search results. They allow users to ask more complex questions and receive nuanced answers with corroborating links.
Evidence
Example provided of a query asking ‘how to stand out on a first time apartment application’ which receives a nuanced answer with bullet points, links, and additional resources
Major discussion point
AI-Powered Search Features and Quality
Topics
Digital business models | Interdisciplinary approaches
AI overviews are designed to only show information supported by high-quality results and don’t hallucinate like other LLM experiences
Explanation
AI overviews have a high quality bar and only display information supported by high-quality web results, which prevents hallucination issues common in other large language model experiences. For sensitive queries about health, finance, or advice, there’s an even higher quality standard and the system informs users when expert advice should be sought.
Evidence
Mentioned that AI overviews inform people when it’s important to seek expert advice or verify information, and show links to supporting pages that drive higher traffic to publisher sites
Major discussion point
AI-Powered Search Features and Quality
Topics
Content policy | Consumer protection
Building information quality requires robust collaboration between public sector, private sector, academia, and civil society
Explanation
Addressing information quality challenges cannot be solved by any single entity alone but requires collaborative efforts across different sectors. This multi-stakeholder approach is essential for developing effective solutions to information credibility issues.
Major discussion point
News Credibility and Information Quality Solutions
Topics
Content policy | Interdisciplinary approaches
Agreed with
– Will Carter
– Audience
Agreed on
Information quality requires collaborative approaches and providing context to users
Jim Prendergast
Speech speed: 181 words per minute
Speech length: 1,209 words
Speech time: 399 seconds
Product development involves balancing multiple challenges and considerations before launching products into the marketplace
Explanation
Product managers at Google must balance numerous different challenges and factors when launching products, including privacy rights, metadata considerations, and various feedback cycles. The session aims to show participants what it’s like to be a product manager dealing with these day-to-day challenges.
Evidence
Mentioned privacy rights and metadata considerations as examples of factors that must be balanced
Major discussion point
Product Management at Google
Topics
Digital business models | Privacy and data protection
Agreed with
– Nadja Blagojevic
– Audience
Agreed on
Product development requires cross-functional collaboration and user-centered design
Will Carter
Speech speed: 167 words per minute
Speech length: 1,148 words
Speech time: 412 seconds
There is no simple answer to identifying high-quality content – it requires partnerships with fact-checking organizations and identifying reliable news sources
Explanation
Identifying high-quality content is a complex challenge that Google struggles with daily and continues to invest in solving. The approach involves working with fact-checking organizations worldwide for validation and identifying news sources that consistently provide reliable, independent information.
Evidence
Mentioned partnerships with fact-checking organizations and identifying consistently reliable and independent news sources validated by fact-checkers
Major discussion point
AI-Powered Search Features and Quality
Topics
Content policy | Freedom of the press
Disagreed with
– Audience
Disagreed on
Approach to defining and identifying high-quality content
The most important approach is providing context to users about where information came from through additional links and metadata
Explanation
Rather than trying to be the sole arbiter of information quality, Google focuses on giving users as much context as possible about information sources. This includes providing additional links, counter arguments, and access to metadata so users can make informed decisions.
Evidence
Mentioned providing additional links, counter arguments, and access to metadata as ways to give users context
Major discussion point
AI-Powered Search Features and Quality
Topics
Content policy | Freedom of expression
Agreed with
– Nadja Blagojevic
– Audience
Agreed on
Information quality requires collaborative approaches and providing context to users
Disagreed with
– Audience
Disagreed on
Approach to defining and identifying high-quality content
About This Image helps users understand context and credibility of images online, including if they were generated by AI tools
Explanation
About This Image is a feature launched in 2023 that helps users understand the context and credibility of images they encounter online. Users can click on three dots above an image to see its history, other sites that describe its original context, and metadata that may indicate if it was AI-generated.
Evidence
Feature shows image history, sites describing original context and origin, and metadata tags that can indicate if images were generated, enhanced, or manipulated by AI
Major discussion point
Image Verification and AI-Generated Content Detection
Topics
Content policy | Digital identities
SynthID embeds digital watermarks in AI-generated images that remain detectable even after alterations like cropping or resizing
Explanation
SynthID is a watermarking tool that embeds digital watermarks directly into the pixels of images generated by Google’s AI tools. These watermarks are robust and can still be detected even when images are altered through cropping, screenshotting, resizing, recoloring, or flipping.
Evidence
Watermarks remain detectable after cropping, screenshotting, resizing, recoloring, or flipping, making them robust against adversarial behavior
Major discussion point
Image Verification and AI-Generated Content Detection
Topics
Content policy | Digital identities | Intellectual property rights
Agreed with
– Hassan Al-Mahmid
Agreed on
AI tools can significantly improve efficiency in content verification and processing
All images made with Google’s consumer AI tools are marked with SynthID for identification in search results
Explanation
Google has implemented a comprehensive approach where every image generated by their consumer AI tools receives a SynthID watermark. This means users can identify AI-generated images from Google tools when they encounter them through Google search using the About This Image feature.
Evidence
Integration with Circle to Search feature allows users to circle an image and get About This Image information for context
Major discussion point
Image Verification and AI-Generated Content Detection
Topics
Content policy | Digital identities | Consumer protection
Hassan Al-Mahmid
Speech speed: 136 words per minute
Speech length: 1,716 words
Speech time: 753 seconds
Current .kw domain registration requires manual document verification which takes 48 hours, but AI tools could process requests immediately
Explanation
The current domain registration process in Kuwait requires manual verification of documents like trade licenses and civil IDs, taking up to 48 hours for approval. By implementing AI tools and integrating with government entities, the process could be completed within minutes instead of the current lengthy timeframe.
Evidence
Current process requires manual checking of uploaded documents by employees, while proposed AI integration could make domains ‘up and running within minutes instead of 48 hours’
Major discussion point
Domain Registration Process Improvement
Topics
Capacity development | Digital access | Alternative dispute resolution
AI image recognition could validate trade licenses and match domain names to business names, suggesting alternatives when conflicts arise
Explanation
The proposed AI system would use image and text recognition to validate uploaded trade licenses and ensure domain names match the business names on official documents. When conflicts are found, instead of rejecting requests, the system would provide suggested alternative domain names that comply with regulations.
Evidence
Example given of validating that requested domain name matches the name on trade license, and providing suggestions when conflicts are found rather than outright rejection
Major discussion point
Domain Registration Process Improvement
Topics
Digital business models | Alternative dispute resolution | Intellectual property rights
Agreed with
– Will Carter
Agreed on
AI tools can significantly improve efficiency in content verification and processing
Implementation would require legal department consultation for handling confidential data and determining acceptable documents
Explanation
The AI tool implementation requires collaboration with legal departments to establish guidelines for document handling, determine acceptable document types, and address data privacy concerns. Legal consultation is essential for determining confidentiality levels and whether documents can be shared with third parties.
Evidence
Need to check with legal department about what documents to accept, how to handle sensitive/confidential data, and whether information can be shared with third parties
Major discussion point
Domain Registration Process Improvement
Topics
Privacy and data protection | Data governance | Legal and regulatory
The project timeline is optimistically six months but may extend longer due to government integration requirements
Explanation
While the technical implementation using off-the-shelf AI solutions could be completed in six months, the involvement of governmental entities and required integrations may extend the timeline significantly. The six-month estimate represents an optimistic scenario for the technical aspects alone.
Evidence
Mentioned that ‘there are a lot of off-the-shelf solutions ready to be picked up and integrated’ but ‘since we are working with governmental entities… the time might extend to more than six months’
Major discussion point
Domain Registration Process Improvement
Topics
Capacity development | Digital business models
Audience
Speech speed: 155 words per minute
Speech length: 984 words
Speech time: 379 seconds
Proposed feature would add visual flags to Google search results to indicate if news articles are false or AI-generated
Explanation
The proposed feature would provide users with visual signals or flags in Google search results to indicate the validity of news articles, specifically identifying content known to be false or generated by AI. Users could still view the content but would receive visual cues about its nature and credibility.
Evidence
Feature would use fact-checking organizations that are credible and based on country/location for validation
Major discussion point
News Credibility and Information Quality Solutions
Topics
Content policy | Freedom of the press | Consumer protection
Agreed with
– Nadja Blagojevic
– Will Carter
Agreed on
Information quality requires collaborative approaches and providing context to users
Disagreed with
– Will Carter
Disagreed on
Approach to defining and identifying high-quality content
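The proposed flag could, in its simplest form, be a lookup against verdicts from country-based fact-checking organizations. The sketch below is illustrative only: the registry contents, flag labels, and country codes are invented, and a real system would need the cultural and legal scaffolding discussed later in this session.

```python
from urllib.parse import urlparse

# Hypothetical verdicts from credible, country-based fact-checking organizations.
FACT_CHECK_REGISTRY = {
    ("ZA", "example-fake-news.com"): "false",
    ("ZA", "ai-content-farm.net"): "ai-generated",
}

def flag_for_result(url, country):
    """Return a visual-flag label for a search result, or None if no verdict exists."""
    domain = urlparse(url).netloc.lower()
    return FACT_CHECK_REGISTRY.get((country, domain))
```

As the proposal states, flagged content would remain viewable; the flag is only a visual cue attached alongside the result.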
Solution would require cultural competency experts and legal frameworks to understand news sources in different contexts
Explanation
Implementing news credibility features requires more than just technical resources – it needs cultural competency experts who understand news sources and cultural dialogue in different contexts, as well as appropriate legal frameworks. This recognizes that news credibility varies across different cultural and legal environments.
Evidence
Mentioned need for ‘cultural competency and having a group or some type of experts on knowing news sources and what kind of the cultural dialogue is in different contexts’
Major discussion point
News Credibility and Information Quality Solutions
Topics
Content policy | Cultural diversity | Legal and regulatory
Agreed with
– Nadja Blagojevic
– Jim Prendergast
Agreed on
Product development requires cross-functional collaboration and user-centered design
Proposed news classification system would rate content on a spectrum from neutral to sensationalist to help users make informed decisions
Explanation
The proposed system would classify news content on a spectrum ranging from neutral to sensationalist, building on existing Google features that provide context about news outlets. This classification would help users make informed decisions about content credibility while acknowledging that trust is difficult to define both internally and externally.
Evidence
Would build on existing three-dot feature in Google Search that provides context about news outlets, extending it to Google News aggregator tab
Major discussion point
News Credibility and Information Quality Solutions
Topics
Content policy | Freedom of the press | Consumer protection
Disagreed with
– Will Carter
Disagreed on
Approach to defining and identifying high-quality content
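The neutral-to-sensationalist spectrum described above could begin with simple surface signals before any machine-learned model is involved. A toy sketch follows; the marker list, weights, and thresholds are all illustrative assumptions, not a proposed classification methodology.

```python
# Illustrative markers of sensationalist framing (invented for this sketch).
SENSATIONAL_MARKERS = {"shocking", "destroyed", "you won't believe", "slams", "explosive"}

def sensationalism_score(headline):
    """Score a headline from 0.0 (neutral) to 1.0 (sensationalist)."""
    text = headline.lower()
    hits = sum(1 for marker in SENSATIONAL_MARKERS if marker in text)
    exclamations = headline.count("!")
    all_caps_words = sum(1 for w in headline.split() if len(w) > 3 and w.isupper())
    return min(0.3 * hits + 0.2 * exclamations + 0.2 * all_caps_words, 1.0)

def classify(headline):
    """Place a headline on a three-step neutral-to-sensationalist spectrum."""
    score = sensationalism_score(headline)
    if score < 0.25:
        return "neutral"
    if score < 0.6:
        return "moderate"
    return "sensationalist"
```

A production system would need per-language and per-context calibration, which is exactly the cultural-competency gap the breakout group identified.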
Agreements
Agreement points
Information quality requires collaborative approaches and providing context to users
Speakers
– Nadja Blagojevic
– Will Carter
– Audience
Arguments
Building information quality requires robust collaboration between public sector, private sector, academia, and civil society
The most important approach is providing context to users about where information came from through additional links and metadata
Proposed feature would add visual flags to Google search results to indicate if news articles are false or AI-generated
Summary
All speakers agreed that addressing information quality challenges requires multi-stakeholder collaboration and providing users with contextual information rather than making unilateral content decisions. This includes partnerships with fact-checking organizations and giving users tools to make informed decisions.
Topics
Content policy | Interdisciplinary approaches | Freedom of expression
AI tools can significantly improve efficiency in content verification and processing
Speakers
– Will Carter
– Hassan Al-Mahmid
Arguments
SynthID embeds digital watermarks in AI-generated images that remain detectable even after alterations like cropping or resizing
AI image recognition could validate trade licenses and match domain names to business names, suggesting alternatives when conflicts arise
Summary
Both speakers demonstrated how AI tools can automate and improve verification processes – Carter with image authenticity verification through SynthID, and Al-Mahmid with document verification for domain registration. Both emphasized AI’s ability to process and validate content more efficiently than manual methods.
Topics
Digital business models | Content policy | Digital identities
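Al-Mahmid's idea of matching requested domain names against licensed business names, and suggesting alternatives on conflict, can be sketched with ordinary string similarity. The threshold and suggestion rules below are invented for illustration and are not Kuwait's actual registration policy.

```python
import difflib
import re

def normalize(name):
    """Lowercase and keep only letters and digits."""
    return re.sub(r"[^a-z0-9]", "", name.lower())

def name_matches(domain_label, business_name, threshold=0.6):
    """Fuzzy-match a requested domain label against the licensed business name."""
    ratio = difflib.SequenceMatcher(
        None, normalize(domain_label), normalize(business_name)).ratio()
    return ratio >= threshold

def suggest_alternatives(business_name, taken):
    """Offer free labels derived from the business name when a conflict arises."""
    base = normalize(business_name)[:20]
    candidates = [base, base + "co", base + "kw"] + [f"{base}{i}" for i in range(1, 4)]
    return [c for c in candidates if c not in taken]
```

As both speakers stressed, such a check would only pre-screen applications; a human would still review borderline matches and sensitive cases.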
Product development requires cross-functional collaboration and user-centered design
Speakers
– Nadja Blagojevic
– Jim Prendergast
– Audience
Arguments
Product managers work closely with UX teams to iteratively design and validate products at different fidelity levels
Product development involves balancing multiple challenges and considerations before launching products into the marketplace
Solution would require cultural competency experts and legal frameworks to understand news sources in different contexts
Summary
All speakers recognized that successful product development requires collaboration across multiple disciplines including UX, engineering, legal, and cultural expertise. They emphasized the importance of iterative design, user validation, and considering diverse stakeholder needs.
Topics
Digital business models | Cultural diversity | Legal and regulatory
Similar viewpoints
Both emphasized the importance of providing users with contextual information and classification systems to help them evaluate content credibility, whether for images or news articles. They shared the philosophy of empowering users with information rather than making decisions for them.
Speakers
– Will Carter
– Audience
Arguments
About This Image helps users understand context and credibility of images online, including if they were generated by AI tools
Proposed news classification system would rate content on a spectrum from neutral to sensationalist to help users make informed decisions
Topics
Content policy | Consumer protection | Freedom of expression
Both recognized that technical solutions must be accompanied by appropriate legal frameworks and expertise. They understood that implementing AI-powered systems requires careful consideration of legal, cultural, and regulatory contexts.
Speakers
– Hassan Al-Mahmid
– Audience
Arguments
Implementation would require legal department consultation for handling confidential data and determining acceptable documents
Solution would require cultural competency experts and legal frameworks to understand news sources in different contexts
Topics
Legal and regulatory | Privacy and data protection | Cultural diversity
Unexpected consensus
Transparency and user empowerment over content control
Speakers
– Will Carter
– Audience
– Nadja Blagojevic
Arguments
The most important approach is providing context to users about where information came from through additional links and metadata
Proposed feature would add visual flags to Google search results to indicate if news articles are false or AI-generated
Building information quality requires robust collaboration between public sector, private sector, academia, and civil society
Explanation
It was unexpected that both Google representatives and audience members converged on the philosophy of transparency and user empowerment rather than platform-controlled content moderation. Instead of advocating for removing or blocking questionable content, all parties favored providing users with tools and context to make their own informed decisions.
Topics
Content policy | Freedom of expression | Consumer protection
AI as a tool for verification rather than replacement of human judgment
Speakers
– Will Carter
– Hassan Al-Mahmid
– Audience
Arguments
All images made with Google’s consumer AI tools are marked with SynthID for identification in search results
AI image recognition could validate trade licenses and match domain names to business names, suggesting alternatives when conflicts arise
Proposed news classification system would rate content on a spectrum from neutral to sensationalist to help users make informed decisions
Explanation
There was unexpected consensus that AI should augment rather than replace human decision-making. All speakers viewed AI as a tool for providing information and suggestions rather than making final determinations about content validity or user choices.
Topics
Digital business models | Content policy | Consumer protection
Overall assessment
Summary
The discussion revealed strong consensus around user empowerment through transparency, multi-stakeholder collaboration for information quality, and AI as a verification tool rather than decision-maker. Speakers agreed on the importance of cross-functional product development and providing contextual information to users.
Consensus level
High level of consensus with significant implications for content policy and platform governance. The agreement suggests a shift toward transparency-based approaches rather than top-down content control, emphasizing user agency and collaborative solutions to information quality challenges.
Differences
Different viewpoints
Approach to defining and identifying high-quality content
Speakers
– Will Carter
– Audience
Arguments
There is no simple answer to identifying high-quality content – it requires partnerships with fact-checking organizations and identifying reliable news sources
The most important approach is providing context to users about where information came from through additional links and metadata
Proposed feature would add visual flags to Google search results to indicate if news articles are false or AI-generated
Proposed news classification system would rate content on a spectrum from neutral to sensationalist to help users make informed decisions
Summary
Will Carter emphasizes providing context and partnerships with fact-checkers rather than making definitive quality judgments, while audience members propose more direct classification systems with visual flags and spectrum-based ratings to guide users
Topics
Content policy | Freedom of the press | Consumer protection
Unexpected differences
Overall assessment
Summary
The main area of disagreement centers on content quality assessment approaches – whether to provide context for user decision-making versus implementing direct classification systems
Disagreement level
Low to moderate disagreement with significant implications for content policy approaches. The disagreement reflects fundamental tensions between platform neutrality and active content curation, which has broader implications for how information quality challenges should be addressed in search and news platforms
Partial agreements
Partial agreements
Similar viewpoints
Both emphasized the importance of providing users with contextual information and classification systems to help them evaluate content credibility, whether for images or news articles. They shared the philosophy of empowering users with information rather than making decisions for them.
Speakers
– Will Carter
– Audience
Arguments
About This Image helps users understand context and credibility of images online, including if they were generated by AI tools
Proposed news classification system would rate content on a spectrum from neutral to sensationalist to help users make informed decisions
Topics
Content policy | Consumer protection | Freedom of expression
Both recognized that technical solutions must be accompanied by appropriate legal frameworks and expertise. They understood that implementing AI-powered systems requires careful consideration of legal, cultural, and regulatory contexts.
Speakers
– Hassan Al-Mahmid
– Audience
Arguments
Implementation would require legal department consultation for handling confidential data and determining acceptable documents
Solution would require cultural competency experts and legal frameworks to understand news sources in different contexts
Topics
Legal and regulatory | Privacy and data protection | Cultural diversity
Takeaways
Key takeaways
Product management at Google involves identifying problems, building vision/strategy/roadmap, and coordinating cross-functional teams including UX and engineering from the beginning
High-quality content identification has no simple solution and requires partnerships with fact-checking organizations, identifying reliable sources, and most importantly providing context to users through metadata and additional links
AI-powered features like AI overviews and About This Image are designed to help users understand information credibility and context, with built-in safeguards against hallucination
SynthID watermarking technology allows detection of AI-generated images even after alterations, with all Google AI-generated images being marked
Government domain registration processes can be significantly improved through AI automation, reducing processing time from 48 hours to minutes
News credibility solutions require cultural competency, legal frameworks, and classification systems to help users make informed decisions about information quality
Building trustworthy information systems requires robust collaboration between public sector, private sector, academia, and civil society
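The SynthID takeaway above rests on watermarks that survive alterations. SynthID's actual algorithm is proprietary and far more sophisticated; the toy below only illustrates the redundancy idea, embedding a repeating bit pattern into pixel low bits so that detection still succeeds after cropping (unlike SynthID, this crude scheme would not survive resizing or recompression).

```python
PAYLOAD = [1, 0, 1, 1, 0, 1, 0, 0]  # illustrative 8-bit watermark pattern

def embed(pixels):
    """Return a copy of a grayscale image with each pixel's low bit set to PAYLOAD[x % 8]."""
    return [[(p & ~1) | PAYLOAD[x % 8] for x, p in enumerate(row)] for row in pixels]

def detect(pixels, threshold=0.9):
    """True if some cyclic shift of PAYLOAD explains almost all low bits.

    Trying all 8 shifts makes detection robust to horizontal cropping,
    which only changes the phase of the repeating pattern.
    """
    total = sum(len(row) for row in pixels)
    best = max(
        sum((p & 1) == PAYLOAD[(x + shift) % 8]
            for row in pixels for x, p in enumerate(row))
        for shift in range(8)
    )
    return best / total > threshold
```

The point of the sketch is the property, not the mechanism: because the mark is repeated everywhere, no crop can remove it without destroying the image.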
Resolutions and action items
Hassan Al-Mahmid will present Kuwait’s .kw domain registration AI automation project as a case study, with an optimistic six-month timeline for implementation
Participants developed three concrete product proposals: news article credibility flags, AI-powered domain registration automation, and news classification spectrum system
Will Carter committed to being available throughout the IGF week for follow-up questions and discussions
Unresolved issues
No definitive solution provided for identifying high-quality content – remains an ongoing challenge requiring continuous innovation
Cultural competency and legal framework requirements for news credibility systems were identified but not fully addressed
Timeline uncertainties for government integration projects due to bureaucratic processes
How to balance automated AI decision-making with human oversight in sensitive areas like domain registration and news credibility
Specific metrics for measuring success of information quality initiatives were not established
Suggested compromises
Providing context and metadata to users rather than making definitive quality judgments about information
Using visual flags and classification systems that inform users rather than censoring content
Implementing AI automation while maintaining human oversight for sensitive decisions
Seeking public consultation periods (like Kuwait’s 60-day feedback process) when implementing new policies
Leveraging existing partnerships with fact-checking organizations rather than building internal validation systems from scratch
Thought provoking comments
There is no one right way to do it, if you ask a hundred people, you’ll probably get a hundred different answers, but there are some common elements… Sometimes it’s very easy to identify what a problem is. For example, once word processors were built, it was fairly obvious that a spell checker would be an improvement. But some things can be less obvious. For example, with Google Street View, when we first launched, it wasn’t clear to the degree to which seeing a location before a drive or a trip or contemplating a move could be… This feature was a less obvious addition to an online map, and it solved a problem that most people didn’t even realize that they had.
Speaker
Nadja Blagojevic
Reason
This comment is insightful because it introduces the fundamental challenge of product management – identifying problems that users don’t even know they have. It demonstrates the difference between obvious improvements and innovative solutions that create new value propositions.
Impact
This comment set the conceptual foundation for the entire discussion by establishing that product management involves both solving known problems and discovering latent needs. It primed participants to think beyond obvious solutions in their breakout exercises.
I wish there was a simple answer to this question. This is something that we struggle with every day and that remains an area of significant innovation and investment for Google… but really at the end of the day, I think the most important thing that we do is provide context to our users as much as we can about where the information that they’re interacting with came from.
Speaker
Will Carter
Reason
This comment is thought-provoking because it acknowledges the complexity and ongoing challenges in content quality assessment, while pivoting to transparency as a practical solution. It shows intellectual honesty about limitations while offering a constructive approach.
Impact
This response validated the difficulty of the problem participants were grappling with and shifted the focus from perfect solutions to transparency-based approaches. It influenced all three breakout groups to incorporate context and transparency elements in their proposed solutions.
You should communicate what you’re doing to the public because since it’s a public sector, you’ll have to communicate with them, even the failures as well. So, you know, to build trust… Because they’re the ultimate users, so you’ll also need their interaction and their feedback. So if there’s no interaction, we’ll not get proper feedback.
Speaker
Audience member (Akhtar)
Reason
This comment is insightful because it introduces the critical dimension of public accountability and transparency in government technology projects. It emphasizes that trust-building requires communicating both successes and failures, which is often overlooked in product development discussions.
Impact
This comment elevated the discussion from technical implementation to governance and public trust considerations. It prompted Hassan to elaborate on Kuwait’s public consultation processes and demonstrated how different sectors (public vs. private) have different stakeholder accountability requirements.
We are thinking of implementing AI tools to help us make the registration process for domain names in Kuwait, the faster and easy process… So these kinds of documentations are being like right now, manually uploaded throughout the portal. And then it has to be checked by a person to validate all the information… But we are thinking of implementing right now AI tools and some sort of integration between the government entities.
Speaker
Hassan Al-Mahmid
Reason
This comment is thought-provoking because it presents a real-world case study of AI implementation in government services, highlighting the practical challenges of balancing automation with regulatory compliance and fraud prevention.
Impact
This concrete example grounded the theoretical discussion in practical reality and shifted the online breakout group’s focus to a specific, implementable solution. It demonstrated how product management principles apply across different sectors and regulatory environments.
There’s a reason that these issues are top of mind. These are things that I think we’re all struggling with on a day-to-day basis, whether it’s companies like Google that are trying to solve these problems or users on the web that are trying to understand all this information that’s inundating us every day and how to make sense of it.
Speaker
Will Carter
Reason
This comment is insightful because it acknowledges the universal nature of information quality challenges, creating common ground between tech companies and users. It validates that these aren’t just corporate problems but societal challenges affecting everyone.
Impact
This comment provided validation for the participants’ concerns and created a sense of shared purpose. It reinforced that the breakout exercise wasn’t just theoretical but addressed real problems that affect all stakeholders in the information ecosystem.
Overall assessment
These key comments shaped the discussion by establishing a framework that moved from theoretical product management concepts to practical, real-world applications with societal implications. Nadja’s opening comment about solving unknown problems set an innovative mindset, while Will’s honest acknowledgment of ongoing challenges with content quality created space for nuanced solutions rather than perfect answers. The audience contributions, particularly around public accountability and the Kuwait domain registration case study, grounded the discussion in practical governance considerations and demonstrated how product management principles apply across sectors. The convergence on information credibility and transparency across all breakout groups shows how these foundational comments successfully oriented participants toward addressing fundamental trust and quality challenges in digital products. The discussion evolved from a product management tutorial into a collaborative exploration of how technology can serve public trust and information integrity.
Follow-up questions
How do you scrape high quality content and what are the parameters of what you call high quality?
Speaker
Audience member (via chat)
Explanation
This is a fundamental question about Google’s content quality assessment methods that was asked but only partially answered, indicating need for more detailed exploration of quality parameters and scraping methodologies
What kind of resources would be needed to develop AI tools for document validation and domain registration processes?
Speaker
Nadja Blagojevic
Explanation
This question was posed to help Hassan think through the practical requirements for implementing AI in government processes, but requires further detailed analysis of technical, legal, and human resources
What kinds of internal partnerships and departments would be needed for AI tool development in government settings?
Speaker
Nadja Blagojevic
Explanation
This explores the organizational structure and collaboration requirements for implementing AI in public sector, which needs more comprehensive mapping of stakeholder involvement
How to effectively communicate AI implementation progress and failures to the public in government projects?
Speaker
Audience member (Akhtar)
Explanation
This addresses the critical need for transparency and trust-building in public sector AI implementations, requiring development of communication strategies and frameworks
What are effective methods for public consultation on new technology policies?
Speaker
Hassan Al-Mahmid (implicitly through discussion of 60-day consultation periods)
Explanation
While Hassan shared their approach, this raises broader questions about best practices for engaging public input on technology policy development across different contexts
How to define and implement cultural competency in news credibility assessment across different contexts?
Speaker
First breakout group
Explanation
The group identified the need for cultural expertise in determining news credibility, but this requires deeper research into how cultural context affects information assessment
How to create effective classification systems for news content (neutral vs sensationalist) across different political contexts?
Speaker
Third breakout group
Explanation
This group proposed a news classification system but acknowledged the challenge of defining quality across different political contexts, requiring further research into objective classification methodologies
What are the best practices for capacity building and training public servants on AI tools?
Speaker
Audience member discussing Hassan’s project
Explanation
This was identified as a critical need for Hassan’s project but requires systematic research into effective training methodologies for government AI adoption
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.
Open Forum #30 High Level Review of AI Governance Including the Discussion
Session at a glance
Summary
This discussion focused on the current state and future directions of global AI governance, featuring perspectives from government officials, international organizations, and private sector representatives. The panel was moderated by Yoichi Iida, former Assistant Vice-Minister of Japan’s Ministry of Internal Affairs and Communications, who outlined the evolution of AI governance from early initiatives in 2016 through recent developments including the OECD AI principles, the Hiroshima AI process, and the UN Global Digital Compact.
Lucia Russo from the OECD emphasized three strategic pillars: moving from principles to practice, providing evidence-based policy guidance, and promoting inclusive international cooperation. She highlighted the merger of the Global Partnership on AI with the OECD, expanding membership to 44 countries including six non-OECD members. Abhishek Singh from India’s Ministry of Electronics stressed the importance of democratizing AI access, particularly for the Global South, advocating for equitable access to compute resources, inclusive datasets, and capacity building initiatives.
Juha Heikkila from the European Commission clarified that the EU AI Act regulates specific uses of AI rather than the technology itself, using a risk-based approach that affects only 15-20% of AI systems while maintaining innovation-friendly policies. Melinda Claybaugh from Meta emphasized the need to connect existing frameworks to avoid fragmentation and duplication, calling for a shift from principle development to practical implementation.
Ansgar Koene from EY highlighted the growing need for robust governance frameworks as organizations move AI from experimental to mission-critical applications. All participants agreed on the importance of moving from principles to practice, building capacity globally, and ensuring inclusive participation in AI governance discussions. The conversation concluded with recognition that while AI and internet governance share some similarities, AI governance faces unique challenges requiring specialized approaches tailored to diverse use cases and risk profiles.
Keypoints
## Major Discussion Points:
– **Evolution and Current State of Global AI Governance**: The discussion traced the development of international AI governance from early initiatives in 2016 through major frameworks like OECD AI Principles (2019), the EU AI Act (2023), and the Hiroshima AI Process, highlighting how governance has evolved to address new challenges posed by generative AI technologies.
– **Moving from Principles to Practice**: A central theme emphasized by multiple speakers was the critical need to translate established AI governance principles into concrete, actionable policies and implementation frameworks, including developing toolkits, assessment mechanisms, and practical guidance for organizations and governments.
– **Inclusivity and Global South Participation**: Significant focus on ensuring equitable access to AI technologies, compute resources, and decision-making processes for developing countries and the Global South, with emphasis on capacity building, democratizing AI access, and preventing concentration of AI power in a few companies and countries.
– **Interoperability and Avoiding Fragmentation**: Discussion of the challenge of coordinating multiple international AI governance frameworks while avoiding regulatory fragmentation, with emphasis on finding common ground, connecting existing initiatives, and streamlining efforts to prevent duplication.
– **Multi-stakeholder Collaboration and Implementation**: Examination of roles and responsibilities of different stakeholders (governments, international organizations, private companies, civil society) in implementing AI governance, with focus on transparency, accountability, and collaborative approaches to address global AI challenges.
## Overall Purpose:
The discussion aimed to assess the current landscape of global AI governance and chart a path forward for international cooperation. The panel sought to evaluate existing frameworks, identify priorities for different stakeholders, and explore how to effectively implement AI governance principles while ensuring inclusivity and avoiding regulatory fragmentation.
## Overall Tone:
The discussion maintained a collaborative and constructive tone throughout, characterized by mutual respect and shared commitment to responsible AI development. Speakers demonstrated alignment on core principles while acknowledging different approaches and challenges. The tone was professional and forward-looking, with participants building on each other’s points rather than expressing disagreement. There was a sense of urgency about moving from theoretical frameworks to practical implementation, but this was expressed through cooperative problem-solving rather than criticism of current efforts.
Speakers
**Speakers from the provided list:**
– **Yoichi Iida** – Former Assistant Vice-Minister of the Japanese Ministry of Internal Affairs and Communications, Chair of the OECD Digital Policy Committee
– **Abhishek Singh** – Additional Secretary, Indian Ministry of Electronics and Information Technology
– **Lucia Russo** – OECD Economist at AI and Digital Emerging Technologies Division
– **Ansgar Koene** – Global AI Ethics and Regulatory Leader, EY Global Public Policy
– **Melinda Claybaugh** – Director of Privacy and AI Policy from META
– **Juha Heikkila** – Advisor for International Aspects of Artificial Intelligence from European Commission
– **Audience** – Unidentified audience member who asked a question
**Additional speakers:**
– **Shinichiro Terada** – From the University of Kitakyushu, Japan (audience member who asked a question about AI governance compared to Internet governance)
Full session report
# Global AI Governance Discussion: From Principles to Practice
## Introduction and Context
This discussion examined the current state and future directions of global artificial intelligence governance, bringing together perspectives from government officials, international organisations, and private sector representatives. The panel was moderated by Yoichi Iida, former Assistant Vice-Minister of Japan’s Ministry of Internal Affairs and Communications and current Chair of the OECD Digital Policy Committee.
The conversation focused on assessing existing international cooperation mechanisms, identifying priorities for different stakeholders, and exploring pathways for translating established principles into practical implementation while ensuring global inclusivity.
## Current State of AI Governance Frameworks
### OECD’s Evolution and Approach
Lucia Russo from the OECD outlined the organisation’s strategic evolution from establishing foundational principles in 2019 to providing comprehensive policy guidance. She emphasised three strategic pillars: moving from principles to practice, providing evidence-based policy guidance through initiatives such as the AI Policy Observatory, and promoting inclusive international cooperation.
A significant development has been the merger of the Global Partnership on AI with the OECD, expanding membership to 44 countries, including six non-OECD members (India, Serbia, Senegal, Brazil, Singapore, and one other). The OECD is developing a toolkit to help countries implement AI principles, though specific details about its format and functionality were not elaborated.
### EU AI Act and Regional Implementation
Juha Heikkila from the European Commission clarified that the EU AI Act regulates specific uses of AI rather than the technology itself, employing a risk-based approach. He explained that “about 80% according to our estimate, maybe even 85% of AI systems…would be unaffected” by the legislation, addressing misconceptions about its scope.
The EU’s engagement extends beyond its own regulatory framework to include participation in G7, G20, Global Partnership on AI, and various international summits, aiming to support global coordination while maintaining compatibility with EU objectives.
### Hiroshima AI Process Progress
The discussion highlighted progress in the Hiroshima AI process, with Lucia Russo noting that 20 companies submitted reports to the OECD website on April 22nd, demonstrating industry engagement with the code of conduct and guiding principles agreed by G7 nations.
## Key Stakeholder Priorities
### Industry Perspective: Moving Beyond Principles
Melinda Claybaugh, Director of Privacy and AI Policy from Meta, stressed the importance of shifting focus from establishing additional principles to translating existing frameworks into actionable measures. She proposed three specific areas for continued work:
– Continuing to build policy toolkits
– Creating libraries of resources including evaluations and benchmarks
– Continuing the global scientific conversation
Ansgar Koene from EY emphasised the need for reliable, repeatable assessment methods for AI systems, highlighting the importance of standards development and transparency in evaluation methods.
### Government Priorities: Capacity and Implementation
Abhishek Singh, Under-Secretary from the Indian Ministry of Electronics and Information Technology, emphasised that operational implementation requires enhanced regulatory capacity for testing AI solutions and practical translation of agreed principles into concrete actions. He highlighted India’s efforts to make compute accessible at very low cost, noting that “high-end H100s, H200s are made available at a cost less than a dollar per GPU per hour.”
## Major Challenge: Democratising AI Access
### Global South Participation and Resource Access
Abhishek Singh articulated the challenge of ensuring that Global South countries become genuine stakeholders in AI decision-making processes rather than passive recipients of frameworks developed elsewhere. He emphasised the need for:
– Access to high-end compute resources
– More inclusive datasets that represent diverse global contexts
– A global repository of AI solutions, similar to digital public infrastructure models
Singh noted the current concentration of AI power “in a few companies in a few countries” and called for more democratic participation in AI governance and development.
### Infrastructure and Capacity Building
The discussion revealed significant challenges in ensuring equitable access to the technical infrastructure necessary for AI development. Singh proposed creating a global repository of AI solutions that could enable more equitable AI development across different countries and contexts, addressing issues like deepfakes and misinformation that particularly affect developing nations.
## International Cooperation and Coordination
### Managing Framework Proliferation
Participants acknowledged both the benefits and challenges of multiple AI governance initiatives. While the multiplicity of initiatives demonstrates international cooperation, there are concerns about potential fragmentation. Juha Heikkila noted that despite the apparent multiplication of efforts, there are consistent elements such as risk-based approaches across different frameworks.
Melinda Claybaugh emphasised the risk of fragmentation for companies developing global technologies, highlighting the need for approaches that respect different national contexts while maintaining sufficient consistency for global deployment.
### Role of International Organisations
The conversation highlighted the important role of international organisations in facilitating coordination. Participants discussed emerging initiatives such as the UN Scientific Panel on AI, with Juha noting it as “quite a crucial component,” and mentioned two UN resolutions, “one led by US and one led by China.”
## AI Governance versus Internet Governance
An audience question from Shinichiro Terada of the University of Kitakyushu prompted discussion about differences between AI and Internet governance. Juha Heikkila explained that AI governance differs fundamentally because AI extends beyond Internet applications to include embedded systems, robotics, and autonomous vehicles, requiring approaches tailored to AI-specific characteristics.
Despite these differences, Abhishek Singh suggested that AI governance should adopt multi-stakeholder principles from Internet governance while recognising that AI requires enhanced global partnership due to the concentration of control in fewer corporations.
## Future Directions and Commitments
### Immediate Next Steps
Several concrete commitments emerged from the discussion:
– India will host an AI Impact Summit in February, focusing on operationalising inclusive AI governance principles
– Continued development of the OECD toolkit for implementing AI principles
– Ongoing Hiroshima AI process reporting with industry participation
– Building libraries of evaluation resources and benchmarks for AI assessment
### Long-term Strategic Directions
The discussion pointed towards creating shared resources that could support more equitable AI development globally, including the proposed global repository of AI solutions. There was also emphasis on establishing capacity-building networks, as outlined in the Global Digital Compact implementation.
## Conclusion
The discussion revealed strong consensus on the urgent need to move from principle establishment to practical implementation of AI governance frameworks. While significant progress has been made in establishing international cooperation mechanisms, major challenges remain in ensuring equitable access to AI technologies and meaningful participation by developing countries.
Key areas requiring continued attention include addressing resource inequities, building regulatory capacity globally, and coordinating multiple governance frameworks to prevent fragmentation while respecting different national approaches. The path forward requires sustained commitment from all stakeholders and innovative approaches to resource sharing and capacity building that go beyond traditional models of international cooperation.
Session transcript
Yoichi Iida: Good morning everybody! And good morning, good afternoon, good evening, depending on the place where you are, to online participants. My name is Yoichi Iida, the former Assistant Vice-Minister of the Japanese Ministry of Internal Affairs and Communications, and also the Chair of the OECD Digital Policy Committee. Thank you very much for joining us. Today we are discussing the current situation and also some foresight on global AI governance. We have excellent speakers on my left side, so let me introduce them briefly before they take the floor and make their own self-introductions. From my end, first, Dr. Ansgar Koene, the Global AI Ethics and Regulatory Leader from EY Global Public Policy. Next to him, Mr. Abhishek Singh, the Under-Secretary from the Indian Ministry of Electronics and Information Technology. Thank you very much, Abhishek. Next to him, we have Ms. Lucia Russo, Economist at the OECD AI and Digital Emerging Technologies Division. Next to her, we have Ms. Melinda Claybaugh, Director of Privacy and AI Policy from Meta. Thank you very much for joining us. And last but not least, we have Dr. Juha Heikkila, Advisor for International Aspects of Artificial Intelligence from the European Commission. Thank you very much for joining us. So, AI governance. As all of you know, we are seeing rapid changes in technologies, but also in policy formulation. The Japanese government started the international discussion on AI governance as early as 2016, when we made a proposal for an international discussion on AI governance at the G7 and also the OECD. This proposal led to the agreement on the first intergovernmental AI principles, the OECD AI Principles, in 2019, and the G7 discussion also led to the launch of the Global Partnership on AI (GPAI) in 2020.
Also, UNESCO started the discussion on its ethical AI recommendation, and the European Commission started the discussion on an AI governance framework, which led to the enactment of the AI Act in the year 2023. After these years, we saw a rapid change in AI technology, in particular near the end of 2022 with the rapid rise of ChatGPT, and we saw a lot of new types of risks and challenges brought by the new AI technology. That was the background to why we started the discussion at the G7 on the Hiroshima process: we wanted to respond to the new risks and challenges brought by generative AI. Near the end of that year, the G7 agreed on the code of conduct and guiding principles of the Hiroshima AI process, and this effort led to the launch of the reporting framework for the code of conduct of the Hiroshima process in the year 2024. This year, we saw 20 reports by AI companies publicized on the OECD website on the 22nd of April. In the meantime, the UN also started the discussion on AI governance, and we saw agreement on two UN resolutions related to AI, one led by the US and one led by China. The UN also started the discussion on the Global Digital Compact, which concluded in September 2024, and we are now in the process of the GDC follow-up and also at the beginning of the discussion on the WSIS+20 review. So this was a rapid and short history of AI governance over the last several years. Against this background, I would like to discuss with these excellent speakers what the priorities and emphases in these discussions are for different stakeholders in the AI ecosystem, and what their perspectives are. So, let me begin with Lucia from the OECD. What do you think your priorities and emphasis are in promoting international or global AI governance, and what international initiatives and frameworks do you consider very significant at present and for future discussion, for countries, for international organizations, and for other stakeholders? What is your view?
Lucia Russo: Thank you, Yoichi. Good morning, and thank you to my fellow panelists for this interesting discussion. As Yoichi mentioned, we started working at the OECD, together with countries like Japan and multi-stakeholder groups, on international AI governance back in 2019, and we have continued that work throughout the years to move from the principles that were adopted by countries into policy guidance on how to put them into practice. The role of the OECD has been since then to be a convener of countries and multi-stakeholder groups and to provide policy guidance and analytical work to support an evidence-based understanding of the risks and opportunities of artificial intelligence. So, in terms of the role for the OECD, there are three main strategic pillars. The first is moving from principles to practice, and that is undertaken through several initiatives that range from a broad expert community supporting our work to providing metrics for policymakers. The second is evidence-based guidance through our OECD.AI Policy Observatory, which provides trends and data, but also a database of national AI policies that allows countries to see what others are doing and to learn from experiences across the globe. And the third is to promote inclusive international cooperation. In that regard, a major milestone was achieved in July 2024, when the Global Partnership on AI and the OECD merged and joined forces to promote safe, secure and trustworthy AI, which again broadened the geographic scope beyond OECD members. We now have 44 members of the Global Partnership on AI, and these include six countries that are not OECD members, including India, Serbia, Senegal, Brazil and Singapore. The idea is that this broader geographic scope will increase further as we proceed, and that will foster even more effective and inclusive conversations with these countries.
And in terms of priorities that we see, of course, the Hiroshima AI process was mentioned, and that is an initiative that we see as very prominent, because it allows having a common standardised framework for the principles that were proposed under the Japanese G7 presidency. But more than that, the transparency element is also very important, because it is not only about committing to these principles; it is also about demonstrating that companies are acting upon these principles and sharing in a transparent way which concrete actions they are taking to put them into practice. And this is really important not only for countries, but also for companies themselves, which can have a learning experience, share these initiatives, and learn what others are doing in practice to promote the different principles that we see in the framework. So, these are the areas where the OECD will continue working: evidence, inclusive multi-stakeholder co-operation, and guidance on policies.
Yoichi Iida: Okay. Thank you very much. Actually, the OECD AI Principles agreed in 2019 laid a robust foundation for national and international AI governance. I think that was very supportive, and we also learned quite a lot from these principles. Japan enacted a new AI law only last month, and there are a lot of reflections of the OECD AI Principles in our own AI law. So thank you very much. Now I would like to invite the two speakers from governmental bodies, and I turn first to Abhishek. Thank you very much for joining us. From the government perspective, what do you think your priorities and emphasis are in developing AI governance, and how do you evaluate the current situation?
Abhishek Singh: Thank you. Thank you, Yoichi, and thank you for highlighting this very, very important issue of AI governance and how we can work together with the global community, especially with the work which is done at the OECD and in various forums, whether it is the UN High-Level Advisory Body on AI, the G7 Hiroshima process, or the G20 initiatives in Brazil and now South Africa. So the whole world together, we are trying to address a common issue with regard to how we can leverage the power of this technology, how we can use it for larger social good, how we can use it for enabling access to services, and how it can lead to empowering people at the last mile. That has been the principal mantra of what we have been doing in India. We have a large country, and we do believe that AI can be a key enabler for empowering people and enabling access to education and healthcare in the remotest corners of the country, in various languages, with a voice interface for empowering people. To do this, we need to have a balanced, pro-innovation, inclusive approach towards development of the technology. We need to ensure that access to AI compute, the data sets, algorithms and other tools for building safe and trusted AI is democratised. Currently, the state of the technology is such that the real power of AI is concentrated in a few companies in a few countries. If we have to democratise this, if we have to ensure that the countries of the Global South become stakeholders in the conversations around AI, we need to have this principle ingrained in all the forums around the world. This principle was well ingrained in the GPAI summit that we chaired, in last year's summit in Serbia, and in the coming one in Slovakia.
The inclusive framework that we came up with for GPAI 2.0 also defines that we need to become much more inclusive, that we need to bring countries of the Global South to the decision-making tables, and towards this, the initiatives in the Global Digital Compact also define how we actually make it happen: how do we ensure that a researcher in a remote corner of a low- and middle-income country has access to similar compute as a researcher in Silicon Valley? We need to create frameworks. At the AI Action Summit that France co-chaired along with India, there was the concept of Current AI that came in, which required commitments, financial commitments, to build up an institutional framework for funding such initiatives and for adopting AI-based technology. That is something we need to continue, and as we move from the French summit to the India summit that we will be hosting next year in February, we will need to work with the entire AI community to institutionalise this. In India we are making compute accessible at a very low cost: the high-end H100s and H200s are made available at a cost of less than a dollar per GPU per hour. Can we build up a similar framework so that researchers in low- and middle-income countries also get access to something similar? Can we build up a data-sharing protocol, a protocol in which, when models are trained, the data sets are much more inclusive, with the data sets from different contexts… We have a model in the DPI ecosystem, where there is a global repository of DPI solutions. Can we build up a global repository of AI solutions which can be accessible to more countries? That is something we need to work on when we are working on global governance frameworks. And there are tools. How do we do privacy enhancement? How do we do anonymisation of data? How do we ensure that we are able to prevent the damage that deepfakes can cause?
Democracies across the world are facing this challenge of misinformation on social media, and AI sometimes becomes an enabler for that. Can we develop tools for watermarking AI content? Can we develop global frameworks so that social media companies become part of this whole ecosystem, so we can prevent the risks that democracies face? And how do we ensure that, including by building capacities across the world, we will be able to build up an AI ecosystem that is more fair, more balanced, and more equitable? So we are working with the global community towards this, and I hope that this discussion will further contribute to creating such enabling frameworks.
Yoichi Iida: Thank you very much for the very comprehensive remarks. I believe the ultimate objective of governance is to make use of AI as a technology as much as possible, but also without concern. So this is a point we need to share, and also the common objective in building up the global governance framework. Having said this, Juha, people say, you know, the AI Act may be a little bit too strict and may bring excessive regulation. What is your opinion, and what are the priorities or requirements of the EU?
Juha Heikkila: Thank you, Yoichi, and thank you very much for this invitation. So I think it is very useful to understand that the AI Act does not regulate the technology in itself; it regulates certain uses of AI. We have a risk-based approach, and it only intervenes where necessary. There are these statements that it regulates AI, but it does not actually: it regulates certain uses of AI which are considered to be either too harmful, dangerous or risky, so that there need to be some safeguards in place. In fact, it is innovation-friendly, because about 80% according to our estimate, maybe even 85%, of AI systems that we see around would be unaffected by it. And it applies equally to everyone placing AI systems on the EU market, whether they are European, Asian, American, you name it. So in that sense it creates a level playing field and it prevents fragmentation. We have uniform rules in the European Union; we do not have a patchwork of rules. It is not as if we would have no regulation without the AI Act, because the member states of the European Union would have proceeded to regulate. But regulation is just one aspect of our activities, and it is a common misconception that we only do regulation. We actually invest a lot in innovation; we have been doing that over the years and we have always done it. So in addition to trust (regulation) and excellence (innovation, research, and so on), the third pillar is international engagement. We think that because some of the challenges related to AI, or many of them, actually transcend boundaries and are global, cooperation is both necessary and useful. So we want to be involved, and we engage bilaterally and multilaterally to support the setting up of a global level playing field for trustworthy, human-centric AI. And we build coalitions with those who share these objectives. We want to have AI for the good of us all, so we want to promote the responsible stewardship and democratic governance of AI.
But we also cooperate on technical aspects, for example cooperation on AI safety and support to innovation and its take-up in some key sectors. We do this bilaterally with a number of partner countries, which is increasing, but we are also involved in all the key discussions: the G7, so the Hiroshima process, which was already mentioned, and the Hiroshima Friends; the G20; and the Global Partnership on AI. The European Union is a founding member of the Global Partnership on AI, so we have been involved in that from the very beginning, now of course in an integrated partnership with the OECD. And with the OECD, we are involved in all the key working groups which relate to AI. We are a member of the Network of AI Safety Institutes, and we have been actively involved in the summits: Bletchley, Seoul, Paris. The upcoming summit in India is, of course, also one where we will be involved. And, of course, via the member states, we are also involved in the Global Digital Compact and its implementation, which is now in a critical phase. Basically, we do this from two perspectives. On the one hand, we do it to promote our goals, which I listed, and on the other, to ensure that whatever conclusions, declarations and statements are made in the Global Digital Compact are compatible with our strategy and also with our regulation, so that we do not end up in a situation where we have international commitments which somehow conflict with our strategy in general and our regulation in particular. So this is basically the rationale for our engagement and our involvement. Thank you.
Yoichi Iida: Thank you very much for the detailed explanation. We really understand that the EU AI Act is aimed at pursuing an innovation-friendly environment across the EU region. We also discussed in the G7 that different countries and different jurisdictions have different backgrounds and different social or economic conditions, so approaches to AI governance have to differ from one another. But still, that is why we need to pursue interoperability across different jurisdictions and different frameworks. I am personally impressed by the approach of the European Commission in the discussion on the code of practice, which is very open to all stakeholders and gives our partners a lot to discuss. The private-sector people were also very much impressed when they joined the discussion and submitted their comments, which were much reflected in the current text, and we are expecting a very good result from the discussion on the code of practice as part of the AI Act. Thank you very much. Now I turn to the other stakeholders. So, Melinda, from the perspective of a big AI company, how do you evaluate the current situation of global AI governance? And what are the priorities or requirements of a private company in the governance framework, and what do you expect?
Melinda Claybaugh: Thank you so much for the question, and thank you for the opportunity to be here. As you were giving the opening remarks and listing all of the frameworks and the acronyms and all of the principles and bodies that are involved here, it is really remarkable the work that has gone on in the last couple of years in the international community on AI governance. There has been an incredible proliferation of frameworks and principles and codes and governing strategies, and I think at this moment it is really important to consider connecting the dots. We do not want to continue down the road of duplication and proliferation and continued putting down of principles. We have largely seen a similarity and a coherence of approach across the various frameworks that have been put out at a high level, and I think it is really important at this point to think about how we connect these frameworks and these principles. Because if we do not think about that, then we are at risk, as was mentioned, of fragmentation. From a private company's perspective, the challenge of developing and deploying this technology, which is global and does not have borders, as we are all familiar with, is the risk of a fragmentation of approach. So I think it is really important to think about what we have in common and how we draw connections between these principles. Another priority is really moving from principle to practice, and I have been encouraged to see this as a theme in conversations throughout these few days on AI governance. We have the principles, but how do we put them into practice? And I mean that in a few different ways. Of course, from a company's perspective, what does it mean? I am encouraged by the work of trying to translate some of these things into concrete measures.
But I think also, from a country's perspective, countries that want to implement and deploy and really roll out AI solutions to public challenges, how do they do that? What is the toolkit of measures and policies and frameworks at a domestic level that is important to have in place? Things like an energy policy, scientific and research infrastructure, data, compute power: all of those things are really important. How do countries make sure they have the right elements in place to really leverage AI? And then, from the perspective of policy institutions, there is a lot of work to do to set out toolkits and frameworks to make sure that all stakeholders have the opportunity to adopt AI. I am also encouraged, as we think about moving from principle to practice, that there seems to be a broadening of the conversation beyond the focus of some of the early principles. It is important to make sure that we are looking at maximising the benefits as well as minimising the risks, and I think the Hiroshima AI principles and process were really important in ensuring that. So what does that mean, and how do we expand the conversation beyond risks to make sure it is benefits-based? That means including a lot of stakeholders who have not been part of the conversation. So how do we do that, for example at the AI Impact Summit? How do we include as many stakeholders as possible in the conversation: civil society, everyone from the Global South?
Yoichi Iida: How do we include and expand that conversation, and how do we make sure we are moving to tangible, concrete impacts? How do we make sure that we are avoiding fragmentation and improving interoperability? And also, your second point, from principles into actions: this is very important, and that is exactly what we are now pursuing. For example, I understand the OECD is making efforts on the toolkit for the AI principles, and also the Hiroshima process. Thank you very much for those remarks. Through the Hiroshima process reporting framework, we can now see what the companies are doing inside the company when they assess the risks, take the countermeasures, and publicize what they are doing. All that information is on the website of the OECD now, and there is a lot of learning from this practical information. But still, we found those reports a little bit difficult to read and understand, so this is another challenge for practicality. I believe, though, that we are making progress. So, having listened to these answers, what is your opinion, and how do you evaluate the current situation?
Ansgar Koene: Sure, thank you very much, and thank you for the invitation to be on this panel. Reflecting on this space around AI governance, both from how we within EY are looking at this and from what we are seeing amongst our private-sector and public-sector clients whom we are helping to set up their AI transformation and the governance frameworks around it, we are seeing that, especially as more and more of these organisations move from exploring possible uses of AI in test cases towards actually building it into mission-critical use cases, where failure of the AI system will either have a significant impact directly on consumers or citizens or significant impacts on the ability of the organisation itself to operate, it is becoming very critical for organisations to have the confidence that they have a good governance framework in place. Such a framework allows them to assess, measure and understand the reliability of the AI system, the use cases for which it truly operates, the boundary conditions within which it should and should not be used, and the kind of information that people within the organisation and people outside need in order to be able to use the AI systems correctly. And so, if we reflect from that point of view, the need that organisations have for a good governance framework for the use of AI, onto these global exercises and global initiatives, I think there are effectively two dimensions in which these global initiatives are important. One is the direct one: things like the OECD AI Principles helped all organisations to have a foundation that they could reflect on as they think about the key things that need to be in their governance thinking. The G7 code of conduct has helped to elaborate that further and has helped to pinpoint in more detail what goes into questions such as what good transparency is, or how to think about inclusiveness, for instance of the people that need to be reflected on when developing these systems.
And now the Global Digital Compact also helps to provide a broader understanding of the way to think about AI governance within the broader context of good governance itself. But then there is also the indirect way, from the point of view of companies, in which these global instruments of course help to make sure that different countries have a common base from which to approach how to create either regulations or voluntary guidelines, whatever works best within their particular context. But it gives a…
Yoichi Iida: Thank you very much, exactly what you said was we need to improve interoperability and coherence across different governance frameworks and we have to admit there are differences in approaches but we need the common foundation, probably as human centricity and democratic values and including transparency or accountability or data protection or whatever. So thank you very much for the comment and so we believe our approaches and the world is proceeding in the right direction by sharing the experiences and the knowledges and try to improve coherence and interoperability. Then we have different frameworks going on, so second question, what do you think you need to do as a stakeholder, what is your role and what is your strategy in coming years and in particular what do you expect from UN Global Digital Compact which is now discussing the global AI governance. So at this time I would like to start with Abhishek. Abhishek Thakur As I mentioned our strategy for AI implementation is to ensure that we use this technology for enabling access to I am the CEO and co-founder of the Global Digital Compact. We want to make it available to all services, to all Indians, in all languages, especially through voice.
Abhishek Singh: That will really empower people globally. What do we expect from the Global Digital Compact to make this a reality? We have a lot of expectations, because we are catching up with the West in the evolution of this technology. How do we enable access? The first request that we had, especially to the U.S., because that is where the companies that own compute are, and 90% of it is controlled by one company, is to ensure that we have access to at least 50,000 GPUs in India. That becomes one practical requirement that we have. Second is to ensure that the models, which are again developed primarily in the West, and DeepSeek came from China, become more inclusive, in the sense that they are trained on datasets from across the world. So that becomes our second request. And the third, which is the most important part, is building capacities. The Global Digital Compact document also talks about a capacity building initiative and setting up a capacity building network. How do we ensure that skills and capacities in all countries are developed and enhanced, so that they are able to take advantage of the evolving technologies? And then we also need to build safeguards. The OECD principles are there for responsible AI, for ensuring safe, trustworthy development of AI. But to ensure that, one would need tools, and even regulators, especially being in the government: when we feel there is a need to regulate, how do we enhance the regulatory capacity? Even if you want to test whether a particular solution meets the standards, meets the benchmarks, do you have the regulatory capacity to test that? Enhancing that, and enhancing cooperation on that, will become very, very critical. So I would say that my asks of the Global Digital Compact and the UN process will be at the operational level.
The principles are largely agreed on. Everybody talks the same language at every forum. But how do we translate that talk into action? That would be the real requirement that we will have. And we are happy to work with the global community in making this a reality, not only for India,
Yoichi Iida: but for the entire Global South and the world community. OK, thank you very much. Inclusivity will be one of the key words in the coming months in the global AI governance discussion. There is a lot of expectation for India’s AI Impact Summit next year. So thank you very much for the comment. And now I invite Melinda for your views. Thank you so much. So under the theme of moving from principles to practice, three ideas.
Melinda Claybaugh: One is continuing to build policy toolkits, which I think the OECD is really well placed to do, for countries that want to advance their AI adoption. Two, I think, is libraries of resources along the lines of evaluations, benchmarks, and third-party testing of AI that has been done, and really putting that in one place. There are a lot of entities engaged in this, and I think building the knowledge base will be really important. And then third, I think, is really continuing the global scientific conversation. And on that point, this is where I lead into the Global Digital Compact: the UN Scientific Panel on AI as an independent scientific body to continue research and conversation, making sure that we have the best scientific voices coming together. And then the global dialogue on AI governance through UN forums; I think the convening power there is what is really important in bringing the right stakeholders. Okay. Thank you very much. Very important three points. So, Melinda mentioned the OECD toolkit. So, now I would like to invite Lucia for your comment.
Yoichi Iida: Yes. Thank you.
Lucia Russo: Indeed, we have started this project to build a toolkit to implement the OECD principles, and it comes exactly from this demand for more actionable resources that would guide countries on how to go from these agreed principles to concrete actions. It was agreed by the ministerial council meeting at the OECD just at the beginning of June. And what is this toolkit going to do, and how is it going to be built? It will be an online interactive tool that will allow users, mostly government representatives we expect, to make use of these resources by consulting and interrogating the large database that we have on national AI policies. But it will be a guided interaction that will allow countries to understand where they need to act. And that concerns both the more values-based section of the principles and the policy areas that include, as we have heard, issues around compute capacity, data availability, and research and development resources. It will guide countries through understanding their needs and what their priorities may be, and then provide suggestions: policy options that other countries at a similar level of advancement, or in the same region, have already put in place and that have been proven effective. So, on the one hand, we want to build this user experience. On the other hand, we want to enrich the repository of national policies and strategies that we already have for 72 jurisdictions in the OECD database of national strategies. And that is one of the priorities that we see we need to build further upon with this toolkit.
And the idea is to build this toolkit through co-creation with countries, so that we better understand the needs, because, as we have heard, everyone agrees on the broader actions, but when it comes to practice we need to better understand what the challenges are, and that is where we want to work with countries and where we want to put the focus. We have also been advancing work on understanding AI uptake across sectors, and again this is in view of moving from this very broad conversation into concrete applications, and of understanding better what the bottlenecks are and what the pathways are to increase adoption when it comes to agriculture, health care, or education, for instance. And perhaps just to close on that point: when it comes to the Hiroshima reporting framework, it is interesting to see that the framework does not only talk about risk identification, assessment, and mitigation. The last chapter also talks about how to use AI to advance human and global interests, and it is interesting to see that in this first reporting cycle by 20 companies there are initiatives reported on how companies are actually engaging with governments and civil society on projects that indeed foster AI adoption across these key sectors. So once again these will be priorities, and we see this as the
Yoichi Iida: key actions moving forward. Okay, thank you very much. Actually, the OECD principles, GPAI, the Hiroshima process, all those initiatives are backed up by the OECD secretariat, so we look forward to working very closely in the future. Time is rather limited, but first I invite Ansgar. So what is your point?
Ansgar Koene: Sure. Very much I would like to echo the point that was made regarding the need to move from principles to practice, as well as the point around capacity building. Within those, I would also like to highlight the work that the OECD is doing around the AI incidents database, which is really helping to build a better understanding of where real failures of AI are occurring, as opposed to hypothetical ones. But I also think it is very important for us to support and encourage broader participation in standards development in this space. Standards are often a key tool that industry uses to understand how to actually move towards implementation, and they are a good reference point, so that industry feels, yes, the wider community agrees that this is a good approach. However, for all of these things to really achieve their intended outcome, which is to provide end users with confidence and trust in these kinds of systems, we will also need reliable, repeatable assessments of how these systems are being implemented and how the governance frameworks are being implemented. And in order to have these, we need greater transparency as to what the particular assessments are intended to achieve and how they are doing it, so that we have expectation management and users understand how to interpret what an assessment has actually tested for. We need greater capacity building within the community to build an ecosystem of assessment and assurance providers in this space, and we have seen some interesting work around that happening already in some jurisdictions, such as the UK, and the OECD is helping in this space as well. Effectively, we just need the community to have clarity on what a good governance framework is, how to approach it, hence the standards, and how to assess whether it has been achieved and done in the appropriate way, through things like assessments.
Yoichi Iida: Thank you very much. The engagement of all communities, including civil society, is very, very important, and the multi-stakeholder approach is definitely essential. So we believe the role of the IGF in AI governance is increasingly important. Sorry, with only a little time remaining: Juha, what is the role of Europe, and how do you think Europe will be working with the world?
Juha Heikkila: So, we are of course very much involved in the discussions of the GDC, the Global Digital Compact, as I mentioned earlier. And to echo what Melinda said, we think that the independent scientific panel is quite a crucial component of this. I think the GDC text is very useful; what was agreed last year in that regard was very successful, and we hope that it will be translated into implementation the way that it was expressed, and in the spirit of the text. And in this regard, also for the AI governance dialogue, we think it is important that it does not duplicate existing efforts, because there are quite a lot of them, and that is why the GDC text mentions that it would be held on the margins of existing meetings. I think that would be very useful, because I think overall there is some call for streamlining in terms of the number of events, initiatives, and forums that we have in the international governance landscape in the area of AI. This kind of multiplication is not necessarily sustainable in the long run. I think we have made partial steps forward with the integrated partnership that was formed between the Global Partnership on AI and the OECD. We welcome that, because we had some overlap between the expert communities, and I think that initiative now has a better sense of purpose, backed by the structures of the OECD, which makes it more impactful from our perspective. We look forward to how that will develop further, and it will also have a role in taking these discussions to a greater audience and membership.
One thing that I wanted to mention just very briefly is that despite this multiplication of efforts and the seemingly almost chaotic nature, if you like, in some respects, to exaggerate a bit, there are some constants. One of these constants, and Melinda mentioned this as well, is that they go in the same direction. One aspect which has been included in many of them is the risk-based approach, which I mentioned as the foundation of the AI Act, but which is also, for example, reflected in the Hiroshima AI process guiding principles and the code of conduct. It is also reflected in some of the statements that have been made at the summits. So, you know, we have some common ground, but I think it would be desirable over the long run to try to seek some convergence and streamline.
Yoichi Iida: Okay, thank you very much. So there are a lot of efforts going on, and the GDC is also one of them, and maybe the U.S. is first to join it too. The role of the UN will be very important, but we need to avoid duplication, streamline, and focus our efforts in the most efficient way. So I hope, as AI governance discussions develop, the role of the IGF will be very important, and this needs to be the place where people get together to discuss not only Internet governance but also AI governance, or digital technology governance, as multistakeholders here at the IGF. So thank you very much. And I wanted to take one question, but I am not sure I am allowed. We have run out of time; we have just got one minute. Just ask. Okay, please. But maybe you need a microphone. We can hear. You can go there and ask. Oh, yeah, okay. The IGF protocol. I am sorry. Thank you very much for great discussions.
Audience: My name is Shinichiro Terada from the University of Kitakyushu, Japan. I would like to understand AI governance compared to Internet governance. When the Internet was spreading globally, there were various challenges such as supporting… Thank you very much for this complicated question, but we want to answer it.
Juha Heikkila: Okay, you have it. So it is a very complicated question. I will comment on one aspect, maybe, and of course let my fellow panelists comment. But broadly speaking, I heard this comment the day before yesterday that AI is on the Internet and therefore Internet governance, you know, is suitable for it. There is more to AI than what is on the Internet. Think of embedded AI, for example: robotics, intelligent robotics, autonomous vehicles, etc. So not all of AI is on the Internet. There may be some inspiration AI governance can take from the principles of Internet governance. But I think there are numerous issues related to AI governance which cannot be, if you like, taken over from Internet governance, which are specific to AI, and which have characteristics with no matching aspects in Internet governance. So I would personally see them as broadly different, with potentially some inspiration for AI governance taken from Internet governance.
Yoichi Iida: Thank you very much. I would broadly agree with him. The only thing that I would say is that AI and Internet are two different things.
Abhishek Singh: AI includes a lot more than the Internet, as he mentioned, in terms of use cases and also inputs, as rightly mentioned. At the same time, unlike the Internet, AI is controlled by a few corporations. So, in order to make it more equitable and bring the principles of Internet governance into AI governance, it will have to be multi-stakeholder. It will have to ensure that the way we approach AI governance is more inclusive, involving people who are technology providers as well as people who are technology users. When we are able to strike that balance, we will be able to make it more fair, more balanced, more equitable, and this will require a lot more global partnership than what Internet governance has done so far. But the frameworks, mechanisms, and protocols which the Internet Governance Forum has evolved can be a good guiding light for working on AI governance principles.
Ansgar Koene: Maybe I can just add one additional perspective, which I think links closely to what Juha mentioned as one of the themes that has been picked up across so many of the governance approaches around AI, which is the risk-based approach. Within AI, the risk very much depends on the use case, because AI is a core technology that can be used in so many different kinds of applications and application spaces, whereas the Internet in that sense is more of a uniform kind of thing. Any more?
Yoichi Iida: Okay. So, thank you very much. Time is up, but I hope you enjoyed the discussion, and please give your applause to the excellent speakers. Actually, this is too excellent to close now, but time is up. Thank you very much. You won’t believe it, you know, they gave us the questions only at midnight yesterday. And we must also acknowledge the presence of His Excellency, the President of Mauritius, who is there. Listen to him. Great. Thank you very much, His Excellency. Thank you. Okay. Thank you for watching.
Yoichi Iida
Speech speed
112 words per minute
Speech length
2037 words
Speech time
1083 seconds
Japan initiated international AI governance discussions in 2016, leading to OECD AI principles (2019), Global Partnership on AI (2020), and the Hiroshima process responding to generative AI challenges
Explanation
Japan started international discussions on AI governance at G7 and OECD in 2016, which led to the first international and intergovernmental principles. This foundation enabled subsequent developments including the Global Partnership on AI launch and the Hiroshima process to address new challenges from generative AI technologies.
Evidence
OECD AI principles agreed in 2019, Global Partnership on AI launched in 2020, G7 Hiroshima process code of conduct and guiding principles agreed by end of year, reporting framework launched in 2024 with 20 reports by AI companies published on OECD website on April 22nd
Major discussion point
Evolution and Current State of Global AI Governance
Topics
Legal and regulatory
Lucia Russo
Speech speed
131 words per minute
Speech length
1146 words
Speech time
522 seconds
OECD has evolved from establishing principles in 2019 to providing policy guidance and analytical work, with three strategic pillars: moving from principles to practice, providing metrics through AI policy observatory, and promoting inclusive international cooperation
Explanation
The OECD serves as a convener of countries and multi-stakeholder groups, providing evidence-based understanding of AI risks and opportunities. The organization has developed three main strategic approaches to support implementation of AI principles through practical guidance and international cooperation.
Evidence
OECD AI policy observatory provides trends and data plus database of national AI policies, Global Partnership on AI and OECD merged in July 2024 creating 44 members including six non-OECD countries (India, Serbia, Senegal, Brazil, Singapore), expert community supporting work
Major discussion point
Evolution and Current State of Global AI Governance
Topics
Legal and regulatory | Development
Agreed with
– Abhishek Singh
– Melinda Claybaugh
– Ansgar Koene
Agreed on
Moving from principles to practice is the critical next step in AI governance
OECD is developing an interactive toolkit to help countries implement AI principles through guided policy options based on successful practices from similar jurisdictions
Explanation
The toolkit will be an online interactive tool allowing government representatives to consult a database of national AI policies through guided interaction. It will help countries understand where they need to act and provide policy suggestions from other countries with similar advancement levels or regional contexts.
Evidence
Toolkit approved by ministerial council meeting at OECD in June, will cover both values-based principles and policy areas including compute capacity, data availability, research and development resources, database covers 72 jurisdictions on national strategies
Major discussion point
Moving from Principles to Practice
Topics
Legal and regulatory | Development
The Global Partnership on AI merger with OECD expanded membership to 44 countries including six non-OECD members, broadening geographic scope for more inclusive conversations
Explanation
The merger achieved in July 2024 was a key milestone that broadened the geographic scope beyond OECD members to include developing countries. This expansion aims to foster more effective and inclusive conversations with a broader range of stakeholders.
Evidence
44 members total with six non-OECD countries: India, Serbia, Senegal, Brazil, Singapore, with expectation that broader geographic scope will continue to increase
Major discussion point
Inclusivity and Global South Participation
Topics
Development | Legal and regulatory
Agreed with
– Abhishek Singh
– Juha Heikkila
– Melinda Claybaugh
Agreed on
Need for inclusive international cooperation and avoiding fragmentation
Abhishek Singh
Speech speed
196 words per minute
Speech length
1445 words
Speech time
441 seconds
AI democratization requires ensuring Global South countries become stakeholders in decision-making, with access to compute resources, inclusive datasets, and capacity building initiatives
Explanation
Currently, AI power is concentrated in few companies and countries, requiring democratization to make Global South countries true stakeholders. This involves providing access to compute resources, ensuring training datasets are more inclusive of global contexts, and building institutional frameworks for funding and adoption.
Evidence
90% of compute controlled by one company, need access to at least 50,000 GPUs in India, high-end H100s and H200s made available at less than $1 per GPU per hour in India, AI Action Summit’s Current AI initiative involving financial commitments, India hosting AI Impact Summit in February next year
Major discussion point
Inclusivity and Global South Participation
Topics
Development | Infrastructure
Agreed with
– Lucia Russo
– Juha Heikkila
– Melinda Claybaugh
Agreed on
Need for inclusive international cooperation and avoiding fragmentation
Operational implementation requires tools for regulators, enhanced regulatory capacity for testing AI solutions, and practical translation of agreed principles into concrete actions
Explanation
While principles are largely agreed upon globally, the challenge lies in translating these into operational actions. This requires building regulatory capacity to test AI solutions against standards and benchmarks, and developing practical tools for implementation.
Evidence
Principles agreed at every forum with same language, need for regulatory capacity to test solutions against standards and benchmarks, requirement for tools for watermarking AI content and frameworks for social media companies to prevent misinformation
Major discussion point
Moving from Principles to Practice
Topics
Legal and regulatory | Development
Agreed with
– Lucia Russo
– Melinda Claybaugh
– Ansgar Koene
Agreed on
Moving from principles to practice is the critical next step in AI governance
Global Digital Compact should focus on operational level implementation, capacity building networks, and enhanced cooperation on regulatory tools rather than just principles
Explanation
The Global Digital Compact should move beyond principle-setting to address practical operational needs. This includes establishing capacity building networks, enhancing regulatory cooperation, and creating frameworks for skill development across all countries.
Evidence
Global Digital Compact document mentions capacity building initiative and setting up capacity building network, need for skills and capacities development in all countries, requirement for enhanced cooperation on regulatory capacity building
Major discussion point
International Cooperation and Framework Coordination
Topics
Development | Legal and regulatory
India requires access to high-end compute resources, more inclusive training datasets, and global repository of AI solutions to enable equitable AI development
Explanation
India’s strategy focuses on using AI for enabling access to services for all citizens in all languages, particularly through voice interfaces. This requires practical access to compute resources, datasets that reflect global diversity, and shared AI solutions.
Evidence
Request for access to at least 50,000 GPUs, H100s and H200s available at less than $1 per GPU per hour in India, models primarily developed in West and China need training on global datasets, concept of global depository of AI solutions similar to DPI ecosystem model
Major discussion point
Technical Infrastructure and Capacity Building
Topics
Infrastructure | Development
AI governance should adopt multi-stakeholder principles from Internet governance while recognizing that AI requires more global partnership due to concentration of control in few corporations
Explanation
AI governance can learn from Internet governance frameworks and mechanisms, but requires more extensive global partnership due to the concentrated control of AI technology. The approach should be multi-stakeholder, involving both technology providers and users to achieve fairness and equity.
Evidence
AI controlled by few corporations, need for balance between technology providers and users, Internet Governance Forum protocols and mechanisms can serve as guiding light for AI governance principles
Major discussion point
AI Governance vs Internet Governance Comparison
Topics
Legal and regulatory | Development
Agreed with
– Juha Heikkila
– Ansgar Koene
Agreed on
AI governance differs significantly from Internet governance
Disagreed with
– Juha Heikkila
Disagreed on
Scope and nature of AI governance compared to Internet governance
Juha Heikkila
Speech speed
149 words per minute
Speech length
1277 words
Speech time
511 seconds
EU’s AI Act regulates specific uses of AI rather than the technology itself, using a risk-based approach that affects only 15-20% of AI systems while maintaining innovation-friendly environment
Explanation
The AI Act takes a risk-based approach, only intervening where necessary for harmful, dangerous, or risky uses of AI. This creates a level playing field for all entities placing AI systems on the EU market regardless of origin, while avoiding excessive regulation that could stifle innovation.
Evidence
About 80-85% of AI systems would be unaffected by the Act, applies equally to European, Asian, American companies, prevents fragmentation by creating uniform rules across EU instead of patchwork of member state regulations
Major discussion point
Evolution and Current State of Global AI Governance
Topics
Legal and regulatory
EU engages bilaterally and multilaterally to support global level playing field for trustworthy AI, participating in G7, G20, Global Partnership on AI, and various summits while ensuring compatibility with EU strategy
Explanation
The EU’s international engagement is built on three pillars: trust/regulation, excellence/innovation, and international cooperation. The EU participates in all key international discussions to promote responsible stewardship and democratic governance of AI while ensuring alignment with its own regulatory framework.
Evidence
Founding member of Global Partnership on AI, involved in G7 Hiroshima process, G20 initiatives, Network of AI Safety Institutes, summits at Bletchley, Seoul, Paris, upcoming India summit, Global Digital Compact participation
Major discussion point
International Cooperation and Framework Coordination
Topics
Legal and regulatory
Despite seeming multiplication of efforts, there are constants like risk-based approaches reflected across frameworks, suggesting common ground but need for convergence and streamlining
Explanation
While there appears to be a chaotic multiplication of AI governance efforts, common elements like risk-based approaches appear consistently across different frameworks. This suggests underlying agreement but highlights the need for better coordination and streamlining of efforts.
Evidence
Risk-based approach reflected in AI Act, G7 Hiroshima process guiding principles and code of conduct, and other summit statements, integrated partnership between Global Partnership on AI and OECD as example of streamlining
Major discussion point
International Cooperation and Framework Coordination
Topics
Legal and regulatory
Agreed with
– Ansgar Koene
Agreed on
Risk-based approach as a common foundation across AI governance frameworks
AI governance differs from Internet governance because AI extends beyond Internet applications to embedded systems, robotics, and autonomous vehicles, requiring different approaches for AI-specific characteristics
Explanation
While AI may take some inspiration from Internet governance principles, AI encompasses much more than what operates on the Internet. AI includes embedded systems, robotics, and autonomous vehicles that have characteristics not found in Internet governance, requiring specific approaches.
Evidence
Examples of non-Internet AI: embedded AI, robotics, intelligent robotics, autonomous vehicles, numerous AI-specific issues without matching aspects in Internet governance
Major discussion point
AI Governance vs Internet Governance Comparison
Topics
Legal and regulatory | Infrastructure
Agreed with
– Abhishek Singh
– Ansgar Koene
Agreed on
AI governance differs significantly from Internet governance
Disagreed with
– Abhishek Singh
Disagreed on
Scope and nature of AI governance compared to Internet governance
Melinda Claybaugh
Speech speed
157 words per minute
Speech length
864 words
Speech time
328 seconds
The proliferation of AI governance frameworks shows remarkable international cooperation, but now requires connecting dots and avoiding fragmentation
Explanation
There has been an incredible proliferation of frameworks, principles, and codes in AI governance showing strong international cooperation. However, the focus should now shift to connecting these frameworks rather than continuing to create new principles, to avoid the risk of fragmentation for global technology deployment.
Evidence
Similarity and coherence of approach across various high-level frameworks, challenge of running global technology across fragmented regulatory approaches
Major discussion point
Evolution and Current State of Global AI Governance
Topics
Legal and regulatory
Agreed with
– Lucia Russo
– Abhishek Singh
– Juha Heikkila
Agreed on
Need for inclusive international cooperation and avoiding fragmentation
The focus should shift from establishing more principles to translating existing frameworks into actionable measures for companies, countries, and policy institutions
Explanation
Moving from principle to practice involves translating frameworks into concrete measures for companies, helping countries implement AI solutions for public challenges, and providing policy institutions with practical toolkits. This includes ensuring countries have necessary infrastructure like energy policy, research capabilities, and compute power.
Evidence
Need for energy policy, scientific infrastructure, research infrastructure, data, compute power for countries to leverage AI, broadening conversation beyond early principles to include benefits alongside risk minimization
Major discussion point
Moving from Principles to Practice
Topics
Legal and regulatory | Infrastructure
Agreed with
– Lucia Russo
– Abhishek Singh
– Ansgar Koene
Agreed on
Moving from principles to practice is the critical next step in AI governance
Expanding conversations beyond risks to include benefits requires involving stakeholders who haven’t been part of the discussion, particularly from civil society and Global South
Explanation
The Hiroshima AI principles and process were important in ensuring focus on maximizing benefits alongside minimizing risks. This requires expanding the conversation to include more stakeholders, particularly civil society and Global South participants, to achieve tangible impacts.
Evidence
Hiroshima AI principles focus on maximizing benefits as well as minimizing risks, need to include civil society and Global South in conversations, AI Impact Summit as example of inclusive stakeholder engagement
Major discussion point
Inclusivity and Global South Participation
Topics
Development | Legal and regulatory
UN Scientific Panel on AI and global dialogue on AI governance should avoid duplicating existing efforts while providing independent scientific research and convening power
Explanation
The UN’s role should focus on providing independent scientific research through the Scientific Panel and using its convening power for global dialogue on AI governance. However, this should be done carefully to avoid duplicating existing international efforts and initiatives.
Evidence
UN Scientific Panel on AI as independent scientific body, global dialogue on AI governance through UN forums, importance of convening power for bringing right stakeholders together
Major discussion point
International Cooperation and Framework Coordination
Topics
Legal and regulatory
Building policy toolkits, libraries of evaluation resources, and continuing global scientific conversation are essential for advancing AI adoption
Explanation
Three key areas for moving from principles to practice include developing comprehensive policy toolkits for countries, creating centralized libraries of AI evaluations and benchmarks, and maintaining ongoing global scientific dialogue. These resources help countries advance their AI adoption capabilities.
Evidence
OECD well-placed to build policy toolkits, need for libraries of evaluations and benchmarks and third-party testing resources, importance of continuing global scientific conversation
Major discussion point
Technical Infrastructure and Capacity Building
Topics
Development | Legal and regulatory
Ansgar Koene
Speech speed
151 words per minute
Speech length
884 words
Speech time
349 seconds
Companies need concrete governance frameworks to assess reliability and understand boundary conditions for mission-critical AI applications, with global initiatives providing both direct guidance and indirect harmonization across jurisdictions
Explanation
As organizations move from exploring AI to implementing it in mission-critical applications, they need confidence in governance frameworks that help assess AI system reliability and understand operational boundaries. Global initiatives provide direct guidance through principles and indirect benefits by helping countries create compatible regulations.
Evidence
Organizations moving from test cases to mission-critical applications where AI failure has significant impact, need to understand boundary conditions and provide correct usage information, OECD AI principles and G7 code of conduct providing foundation for organizational governance thinking
Major discussion point
Moving from Principles to Practice
Topics
Legal and regulatory
Agreed with
– Lucia Russo
– Abhishek Singh
– Melinda Claybaugh
Agreed on
Moving from principles to practice is the critical next step in AI governance
Standards development, reliable assessments, and transparency in evaluation methods require broader community participation and capacity building for assessment providers
Explanation
Effective AI governance implementation requires supporting broader participation in standards development, creating reliable and repeatable assessments, and building an ecosystem of assessment providers. This includes providing transparency about what assessments actually test and building community capacity for evaluation.
Evidence
OECD AI Incidents database helping understand real AI failures versus hypothetical ones, interesting work in jurisdictions like the UK on building assessment ecosystems, need for expectation management so users understand assessment scope
Major discussion point
Technical Infrastructure and Capacity Building
Topics
Legal and regulatory | Digital standards
Risk-based approach in AI governance reflects use-case dependency, unlike Internet’s more uniform nature, making AI governance more complex and application-specific
Explanation
AI governance complexity stems from the fact that AI is a core technology applicable across many different use cases, where risk depends heavily on the specific application. This contrasts with Internet governance, which deals with a more uniform technology platform.
Evidence
Risk-based approach picked up across many AI governance frameworks, AI risk depends on use case while Internet is more uniform technology
Major discussion point
AI Governance vs Internet Governance Comparison
Topics
Legal and regulatory
Agreed with
– Juha Heikkila
– Abhishek Singh
Agreed on
AI governance differs significantly from Internet governance
Audience
Speech speed
122 words per minute
Speech length
51 words
Speech time
25 seconds
AI governance should learn from Internet governance experiences while recognizing the differences between the two domains
Explanation
The audience member from the University of Kitakyushu questioned how AI governance compares to Internet governance, noting that the Internet's global spread brought various challenges. This suggests interest in applying lessons learned from Internet governance to the emerging field of AI governance.
Evidence
Reference to challenges faced during global Internet expansion
Major discussion point
AI Governance vs Internet Governance Comparison
Topics
Legal and regulatory
Agreements
Agreement points
Moving from principles to practice is the critical next step in AI governance
Speakers
– Lucia Russo
– Abhishek Singh
– Melinda Claybaugh
– Ansgar Koene
Arguments
OECD has evolved from establishing principles in 2019 to providing policy guidance and analytical work, with three strategic pillars: moving from principles to practice, providing metrics through AI policy observatory, and promoting inclusive international cooperation
Operational implementation requires tools for regulators, enhanced regulatory capacity for testing AI solutions, and practical translation of agreed principles into concrete actions
The focus should shift from establishing more principles to translating existing frameworks into actionable measures for companies, countries, and policy institutions
Companies need concrete governance frameworks to assess reliability and understand boundary conditions for mission-critical AI applications, with global initiatives providing both direct guidance and indirect harmonization across jurisdictions
Summary
All speakers agree that while AI governance principles have been established across various frameworks, the urgent need now is to translate these principles into practical, actionable measures that can be implemented by companies, governments, and institutions
Topics
Legal and regulatory | Development
Need for inclusive international cooperation and avoiding fragmentation
Speakers
– Lucia Russo
– Abhishek Singh
– Juha Heikkila
– Melinda Claybaugh
Arguments
The Global Partnership on AI merger with OECD expanded membership to 44 countries including six non-OECD members, broadening geographic scope for more inclusive conversations
AI democratization requires ensuring Global South countries become stakeholders in decision-making, with access to compute resources, inclusive datasets, and capacity building initiatives
Despite seeming multiplication of efforts, there are constants like risk-based approaches reflected across frameworks, suggesting common ground but need for convergence and streamlining
The proliferation of AI governance frameworks shows remarkable international cooperation, but now requires connecting the dots and avoiding fragmentation
Summary
Speakers unanimously agree on the importance of inclusive international cooperation that brings Global South countries into decision-making processes while avoiding fragmentation through better coordination of existing frameworks
Topics
Legal and regulatory | Development
Risk-based approach as a common foundation across AI governance frameworks
Speakers
– Juha Heikkila
– Ansgar Koene
Arguments
Despite seeming multiplication of efforts, there are constants like risk-based approaches reflected across frameworks, suggesting common ground but need for convergence and streamlining
Risk-based approach in AI governance reflects use-case dependency, unlike Internet’s more uniform nature, making AI governance more complex and application-specific
Summary
Both speakers recognize that risk-based approaches have emerged as a consistent element across different AI governance frameworks, providing common ground despite the complexity of AI applications
Topics
Legal and regulatory
AI governance differs significantly from Internet governance
Speakers
– Juha Heikkila
– Abhishek Singh
– Ansgar Koene
Arguments
AI governance differs from Internet governance because AI extends beyond Internet applications to embedded systems, robotics, and autonomous vehicles, requiring different approaches for AI-specific characteristics
AI governance should adopt multi-stakeholder principles from Internet governance while recognizing that AI requires more global partnership due to concentration of control in few corporations
Risk-based approach in AI governance reflects use-case dependency, unlike Internet’s more uniform nature, making AI governance more complex and application-specific
Summary
Speakers agree that while AI governance can learn from Internet governance principles, AI presents unique challenges requiring different approaches due to its broader applications beyond the Internet and concentrated control structure
Topics
Legal and regulatory | Infrastructure
Similar viewpoints
Both speakers emphasize the critical importance of including Global South countries and underrepresented stakeholders in AI governance discussions, moving beyond risk-focused conversations to include benefits and ensuring equitable access to AI resources
Speakers
– Abhishek Singh
– Melinda Claybaugh
Arguments
AI democratization requires ensuring Global South countries become stakeholders in decision-making, with access to compute resources, inclusive datasets, and capacity building initiatives
Expanding conversations beyond risks to include benefits requires involving stakeholders who haven’t been part of the discussion, particularly from civil society and Global South
Topics
Development | Legal and regulatory
Both speakers advocate for developing comprehensive toolkits and resource libraries that provide practical guidance for implementing AI governance principles, with OECD being well-positioned to lead this effort
Speakers
– Lucia Russo
– Melinda Claybaugh
Arguments
OECD is developing an interactive toolkit to help countries implement AI principles through guided policy options based on successful practices from similar jurisdictions
Building policy toolkits, libraries of evaluation resources, and continuing global scientific conversation are essential for advancing AI adoption
Topics
Development | Legal and regulatory
Both speakers stress the need for building regulatory and assessment capacity, including tools for testing AI systems and transparent evaluation methods that can be implemented by regulatory bodies
Speakers
– Abhishek Singh
– Ansgar Koene
Arguments
Operational implementation requires tools for regulators, enhanced regulatory capacity for testing AI solutions, and practical translation of agreed principles into concrete actions
Standards development, reliable assessments, and transparency in evaluation methods require broader community participation and capacity building for assessment providers
Topics
Legal and regulatory | Digital standards
Unexpected consensus
Innovation-friendly regulation approach
Speakers
– Juha Heikkila
– Melinda Claybaugh
Arguments
EU’s AI Act regulates specific uses of AI rather than the technology itself, using a risk-based approach that affects only 15-20% of AI systems while maintaining an innovation-friendly environment
The proliferation of AI governance frameworks shows remarkable international cooperation, but now requires connecting the dots and avoiding fragmentation
Explanation
It’s unexpected that a major tech company representative (Melinda) and an EU regulator (Juha) would find such strong alignment on the innovation-friendly nature of regulation, with both emphasizing that current approaches avoid stifling innovation while providing necessary safeguards
Topics
Legal and regulatory
Streamlining and avoiding duplication of international efforts
Speakers
– Juha Heikkila
– Melinda Claybaugh
– Abhishek Singh
Arguments
Despite seeming multiplication of efforts, there are constants like risk-based approaches reflected across frameworks, suggesting common ground but need for convergence and streamlining
UN Scientific Panel on AI and global dialogue on AI governance should avoid duplicating existing efforts while providing independent scientific research and convening power
Global Digital Compact should focus on operational level implementation, capacity building networks, and enhanced cooperation on regulatory tools rather than just principles
Explanation
Unexpected consensus among representatives from different regions and sectors (an EU regulator, a US company, the Indian government) on the need to streamline international AI governance efforts rather than create more frameworks, showing pragmatic alignment across different stakeholder types
Topics
Legal and regulatory
Overall assessment
Summary
The discussion reveals strong consensus on key foundational issues: the urgent need to move from principles to practical implementation, the importance of inclusive international cooperation that brings Global South countries into decision-making, the adoption of risk-based approaches as common ground, and recognition that AI governance requires different approaches than Internet governance. There is also unexpected alignment between regulators and industry on innovation-friendly approaches and the need to streamline rather than proliferate international frameworks.
Consensus level
High level of consensus with significant implications for AI governance development. The alignment suggests that despite different stakeholder perspectives, there is substantial agreement on both the direction and methodology for advancing global AI governance. This consensus provides a strong foundation for coordinated international action, particularly in developing practical implementation tools, building inclusive frameworks, and avoiding regulatory fragmentation. The agreement spans both procedural aspects (how to govern) and substantive priorities (what to focus on), indicating mature understanding of the challenges and realistic pathways forward.
Differences
Different viewpoints
Scope and nature of AI governance compared to Internet governance
Speakers
– Juha Heikkila
– Abhishek Singh
Arguments
AI governance differs from Internet governance because AI extends beyond Internet applications to embedded systems, robotics, and autonomous vehicles, requiring different approaches for AI-specific characteristics
AI governance should adopt multi-stakeholder principles from Internet governance while recognizing that AI requires more global partnership due to concentration of control in few corporations
Summary
Juha emphasizes the fundamental differences between AI and Internet governance due to AI’s broader scope beyond Internet applications, while Abhishek focuses on adapting Internet governance principles to AI while addressing the concentration of corporate control
Topics
Legal and regulatory | Infrastructure
Unexpected differences
Limited disagreement on fundamental AI governance principles despite different jurisdictional approaches
Speakers
– All speakers
Arguments
Various arguments about implementation approaches but consistent agreement on core principles
Explanation
Surprisingly, there was minimal fundamental disagreement among speakers from different regions and sectors (EU, India, OECD, private sector) on core AI governance principles, with most differences being about implementation methods rather than underlying goals
Topics
Legal and regulatory
Overall assessment
Summary
The discussion showed remarkably low levels of fundamental disagreement, with most differences centered on implementation approaches rather than core principles. The main areas of difference were: technical approaches to capacity building, the relationship between AI and Internet governance, and specific mechanisms for Global South inclusion.
Disagreement level
Low to moderate disagreement level with high consensus on principles but varying approaches to implementation. This suggests strong foundation for international cooperation but potential challenges in coordinating diverse implementation strategies across different jurisdictions and stakeholder groups.
Takeaways
Key takeaways
Global AI governance has rapidly evolved from Japan’s 2016 initiative to multiple frameworks (OECD principles, Hiroshima process, EU AI Act, Global Digital Compact), showing remarkable international cooperation but now requiring coordination to avoid fragmentation
The critical phase is moving from establishing principles to practical implementation – companies, governments, and organizations need concrete toolkits, assessment methods, and operational guidance rather than more high-level frameworks
Inclusivity and democratization of AI are essential, particularly ensuring Global South participation through access to compute resources, inclusive datasets, capacity building, and meaningful involvement in decision-making processes
Risk-based approaches have emerged as a common foundation across different frameworks, suggesting convergence potential despite apparent multiplication of governance efforts
AI governance differs fundamentally from Internet governance due to AI’s broader applications beyond Internet-based systems, requiring specialized approaches while potentially adopting multi-stakeholder principles
International cooperation should focus on interoperability between different jurisdictional approaches while respecting diverse national contexts and regulatory frameworks
Resolutions and action items
OECD to develop an interactive online toolkit to help countries implement AI principles through guided policy options based on successful practices from similar jurisdictions
Continue the Hiroshima AI process reporting framework with companies providing transparent reports on their AI governance practices
Expand Global Partnership on AI membership beyond the current 44 countries to increase Global South representation
India to host AI Impact Summit in February focusing on operationalizing inclusive AI governance principles
Build a global repository of AI solutions accessible to more countries, similar to the DPI ecosystem model
Develop capacity building networks as outlined in the Global Digital Compact implementation
Create libraries of evaluation resources and benchmarks for AI assessment that can be shared globally
Unresolved issues
How to effectively streamline and coordinate the proliferation of AI governance frameworks without losing momentum or excluding stakeholders
Practical mechanisms for ensuring Global South access to high-end compute resources (like H100s, H200s) at affordable costs
Specific implementation details for making AI training datasets more inclusive and representative of global contexts
How to enhance regulatory capacity in developing countries to test and assess AI systems against established standards
Balancing innovation-friendly approaches with necessary safeguards across different jurisdictional frameworks
Defining the exact role and scope of UN Scientific Panel on AI to avoid duplication with existing initiatives
Addressing the concentration of AI development power in few companies and countries while maintaining technological advancement
Suggested compromises
Adopt risk-based approaches that allow different jurisdictions to implement AI governance according to their contexts while maintaining common foundational principles
Focus UN Global Digital Compact discussions on operational implementation rather than creating new principles, building on existing frameworks
Streamline international AI governance forums over time while maintaining the successful integrated partnership model between Global Partnership on AI and OECD
Balance innovation promotion with risk mitigation by focusing governance on specific high-risk AI applications rather than regulating the technology broadly
Use multi-stakeholder approaches from Internet governance while adapting to AI’s unique characteristics and broader application scope
Develop interoperable frameworks that respect different national approaches while ensuring global coordination and knowledge sharing
Thought provoking comments
Currently, the state of the technology is such that the real power of AI is concentrated in a few companies in a few countries. If you have to democratise this, if you have to kind of ensure that the country, the Global South, become a stakeholder in the conversations around, we need to have this principle ingrained in all the countries around the world.
Speaker
Abhishek Singh
Reason
This comment was particularly insightful because it shifted the conversation from abstract governance principles to concrete power dynamics and equity issues. Singh highlighted the fundamental challenge that AI governance isn’t just about creating rules, but about addressing the concentration of technological power and ensuring meaningful participation from developing nations.
Impact
This comment significantly influenced the discussion’s trajectory by introducing the theme of inclusivity and democratization that became central to subsequent speakers’ remarks. It prompted other panelists to address capacity building, resource sharing, and the need for more equitable access to AI technologies. The comment also established the Global South perspective as a critical lens through which to evaluate governance frameworks.
I think at this moment, it’s really important to consider connecting the dots. I think we don’t want to continue down the road of duplication and proliferation and continued putting down of principles… And from a private company’s perspective, the challenge of running this technology and developing and deploying this technology that is global and doesn’t have borders, as we’re all familiar with, is the risk of the fragmentation of approach.
Speaker
Melinda Claybaugh
Reason
This observation was thought-provoking because it challenged the prevailing approach of creating multiple governance frameworks. Claybaugh identified a critical problem: the proliferation of principles without sufficient focus on implementation and interoperability, which creates practical challenges for global technology deployment.
Impact
This comment catalyzed a shift in the discussion from celebrating the various governance initiatives to critically examining their effectiveness and coherence. It introduced the concept of ‘fragmentation risk’ that other speakers then built upon, leading to discussions about streamlining efforts and improving interoperability between different jurisdictions’ approaches.
The AI Act does not regulate the technology in itself, it regulates certain uses of AI. So we have a risk-based approach and it only intervenes where it’s necessary… in fact it’s innovation friendly because about 80% according to our estimate, maybe even 85% of AI systems that we see around would be unaffected by it.
Speaker
Juha Heikkila
Reason
This clarification was insightful because it directly addressed widespread misconceptions about the EU AI Act being overly restrictive. By providing specific statistics and explaining the risk-based approach, Heikkila reframed the narrative around regulation from being innovation-stifling to being targeted and proportionate.
Impact
This comment helped establish a more nuanced understanding of regulatory approaches in the discussion. It influenced subsequent conversations about balancing innovation with safety, and provided a concrete example of how governance can be both protective and innovation-friendly, which other speakers referenced when discussing their own approaches.
We are seeing that especially as more and more of these organizations are moving from exploring possible uses of AI in test cases towards actually building it into mission critical use cases where failure of the AI system will either have a significant impact directly on consumers or citizens… it is becoming very critical for organizations to have the confidence that they have a good governance framework in place.
Speaker
Ansgar Koene
Reason
This comment was particularly valuable because it connected theoretical governance discussions to practical organizational needs. Koene highlighted the evolution from experimental AI use to mission-critical applications, emphasizing why governance frameworks must be reliable and actionable rather than merely aspirational.
Impact
This observation reinforced the ‘principles to practice’ theme that became central to the discussion. It provided concrete justification for why the governance frameworks being discussed matter in real-world implementation, and supported arguments made by other speakers about the need for practical toolkits and assessment mechanisms.
There is some call for streamlining in terms of the number of events and initiatives and forums that we have in the international governance landscape in the area of AI. I think that this kind of multiplication is not necessarily sustainable in the long run.
Speaker
Juha Heikkila
Reason
This was a bold and thought-provoking statement because it challenged the assumption that more governance initiatives are inherently better. Heikkila raised questions about the sustainability and effectiveness of the current proliferation of AI governance forums and frameworks.
Impact
This comment validated and expanded upon Claybaugh’s earlier concerns about fragmentation, creating a consensus around the need for consolidation and better coordination. It influenced the moderator’s closing remarks about the role of IGF and the importance of avoiding duplication, suggesting a potential path forward for more streamlined governance approaches.
Overall assessment
These key comments fundamentally shaped the discussion by introducing three critical themes that transformed it from a routine overview of governance initiatives into a more sophisticated analysis of systemic challenges. First, Singh’s emphasis on power concentration and Global South inclusion established equity as a central concern, influencing all subsequent speakers to address inclusivity and capacity building. Second, Claybaugh’s observation about fragmentation and the need to ‘connect the dots’ created a critical lens through which other speakers evaluated existing frameworks, leading to discussions about interoperability and streamlining. Third, the collective emphasis on moving ‘from principles to practice’ – reinforced by Koene’s practical perspective and supported by others – shifted the conversation from celebrating existing frameworks to critically examining their implementation challenges. These comments created a more mature, nuanced discussion that acknowledged both the progress made in AI governance and the significant challenges that remain, ultimately pointing toward more coordinated, inclusive, and practically-oriented approaches to global AI governance.
Follow-up questions
How can we ensure researchers in low- and medium-income countries have access to similar compute resources as researchers in Silicon Valley?
Speaker
Abhishek Singh
Explanation
This addresses the digital divide and democratization of AI technology access globally, which is crucial for inclusive AI development
Can we build up a global repository of AI solutions which can be accessible to more countries?
Speaker
Abhishek Singh
Explanation
This would facilitate knowledge sharing and prevent duplication of AI development efforts across different countries
How do we develop tools for watermarking AI content and global frameworks so that social media companies become part of preventing misinformation risks?
Speaker
Abhishek Singh
Explanation
This addresses the growing concern about AI-generated misinformation and deepfakes threatening democratic processes
How do we enhance regulatory capacity for testing AI solutions against standards and benchmarks?
Speaker
Abhishek Singh
Explanation
This is critical for ensuring AI systems meet safety and trustworthiness requirements before deployment
How do we connect different AI governance frameworks to avoid fragmentation and improve interoperability?
Speaker
Melinda Claybaugh
Explanation
This addresses the proliferation of different AI governance approaches that could create compliance challenges for global AI deployment
How do we expand the conversation beyond risks to include benefits and involve more stakeholders from civil society and the Global South?
Speaker
Melinda Claybaugh
Explanation
This ensures AI governance discussions are balanced and inclusive of diverse perspectives and use cases
How do we build reliable, repeatable assessments for AI systems implementation and governance frameworks?
Speaker
Ansgar Koene
Explanation
This is essential for providing end-users with confidence and trust in AI systems through standardized evaluation methods
How do we streamline the multiplication of AI governance efforts and forums to avoid duplication?
Speaker
Juha Heikkila
Explanation
The current landscape has numerous overlapping initiatives that may not be sustainable long-term and could lead to inefficiencies
How can principles of Internet governance be applied to AI governance, considering AI includes more than just Internet-based applications?
Speaker
Shinichiro Terada (audience member)
Explanation
This explores whether existing governance models can be adapted for AI, while recognizing the unique challenges AI presents beyond Internet governance
How do we make AI governance more multi-stakeholder and inclusive like Internet governance, while addressing the concentration of AI power in few corporations?
Speaker
Abhishek Singh (in response to audience question)
Explanation
This addresses the need for more democratic and distributed approaches to AI governance to prevent monopolization
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.
