Lightning Talk #22: EuroDIG Inviting Global Stakeholders


Session at a glance

Summary

This discussion was a networking session about the 18th edition of EuroDIG (European Dialogue on Internet Governance), which took place in Strasbourg at the Council of Europe. Sandra Hoferichter welcomed participants and explained that this year’s conference returned to where EuroDIG originally began, making it particularly special. The overarching theme was “balancing innovation and regulation” with an added focus on “safeguarding human rights” given the Council of Europe venue.


Thomas Schneider, Swiss ambassador and president of the EuroDIG Support Association, presented key messages from the WSIS+20 review process. He emphasized that the European community strongly supports the multi-stakeholder approach as fundamental to solving digital issues, with regional diversity and National and Regional Internet Governance Initiatives (NRIs) being crucial for bringing diverse voices into global processes. The community advocates for making the Internet Governance Forum (IGF) a permanent institution rather than renewing its mandate every 10 years, and calls for the WSIS+20 process to be more open, transparent, and inclusive rather than conducted behind closed doors.


Frances Douglas Thomson, a YouthDIG participant, highlighted two controversial areas that emerged from youth discussions. First, regarding digital literacy, young people disagreed on how to approach generative AI in education—whether it should be seen as a positive assistant or as something that creates complacency and erodes critical thinking skills. Second, on content moderation, there was significant disagreement between those favoring strict moderation to combat misinformation and violence, and those concerned about freedom of speech and expression. The youth ultimately agreed on improving algorithm transparency so users can better understand why they receive certain content.


The session concluded with audience questions addressing broader challenges in internet governance, including the role of these dialogues during humanitarian crises and how to increase youth participation in policy discussions.


Key points

## Major Discussion Points:


– **WSIS+20 Review and Internet Governance Framework**: Discussion of EuroDIG’s input on the World Summit on the Information Society plus 20 review process, emphasizing the need for multi-stakeholder approaches, making the Internet Governance Forum (IGF) a permanent institution rather than renewing it every 10 years, and ensuring the review process remains open and inclusive rather than conducted behind closed doors.


– **Youth Participation and Controversial Digital Policy Issues**: Presentation of YouthDIG findings highlighting divisions among young Europeans on key issues like digital literacy curriculum design, the role of generative AI in education, and content moderation approaches – with some favoring strict moderation while others worried about freedom of expression implications.


– **AI Governance and Human Rights**: Focus on artificial intelligence governance challenges, particularly around privacy and data protection in environments where distinguishing private from non-private data becomes increasingly difficult, and the need to safeguard human rights in digital spaces.


– **Democratic Participation and Crisis Response**: Discussion of how internet governance forums can address humanitarian crises, misinformation during conflicts, and the broader challenge of maintaining democratic engagement when people may be losing faith in democratic institutions and seeking simpler answers from authoritarian figures.


– **Practical Youth Engagement Strategies**: Exploration of effective methods for involving young people in internet governance, including financial support for participation, bottom-up rather than top-down approaches, and creating spaces where youth voices are taken seriously in policy discussions.


## Overall Purpose:


The discussion served as a networking session and report-back from the 18th EuroDIG conference, aimed at sharing key outcomes and messages from the European internet governance community, particularly focusing on WSIS+20 review input and youth perspectives, while fostering dialogue about current digital policy challenges.


## Overall Tone:


The discussion maintained a professional yet accessible tone throughout, beginning with formal presentations but becoming increasingly interactive and engaged as audience questions introduced more complex and sometimes challenging topics. The tone remained constructive and solution-oriented even when addressing difficult issues like democratic backsliding and humanitarian crises, with speakers demonstrating both realism about current challenges and optimism about the potential for continued progress through multi-stakeholder dialogue.


Speakers

**Speakers from the provided list:**


– **Sandra Hoferichter** – Session moderator/facilitator for EuroDIG networking session


– **Thomas Schneider** – Swiss ambassador, President of the EuroDIG Support Association


– **Frances Douglas Thomson** – YouthDIG participant, youth representative


– **Hans Seeuws** – Business operations manager of .eu (operates the top-level domain on behalf of the European Commission)


– **Chetan Sharma** – Representative from Data Mission Foundation Trust India


– **Janice Richardson** – Expert to the Council of Europe


– **Audience** – Various unidentified audience members who asked questions


**Additional speakers:**


– **Thomas** (different from Thomas Schneider) – Former youth delegate/youth digger from the previous year


– **Audience member from Geneva** – Representative from Geneva Macro Labs, University of Geneva, active with the programme committee


Full session report

# EuroDIG 18th Edition: Networking Session Summary


## Introduction and Context


The 18th edition of EuroDIG (European Dialogue on Internet Governance) took place at the Council of Europe in Strasbourg, marking a symbolic return to where the initiative originally began in 2008. Sandra Hoferichter, serving as session moderator, welcomed participants to this 30-minute networking session during the lunch break, designed to share key outcomes and messages from the conference. As Sandra noted, this return to “where it all started” made the event particularly special.


The overarching theme focused on “balancing innovation and regulation” with particular emphasis on “safeguarding human rights,” reflecting the significance of hosting the event at the Council of Europe. Although Council of Europe representatives were unable to attend due to the Deputy Secretary General having another mission, the session proceeded with its planned agenda.


This networking session served multiple purposes: reporting back on conference outcomes, sharing key messages for the WSIS+20 review process, presenting youth perspectives through YouthDIG findings, and fostering dialogue about contemporary digital governance challenges. The discussion brought together diverse stakeholders including diplomats, youth representatives, business operators, and civil society experts.


## WSIS+20 Review Process and European Messages


Thomas Schneider, Swiss ambassador and President of the EuroDIG Support Association, presented the European community’s key messages for the World Summit on the Information Society plus 20 (WSIS+20) review process. These represent “messages from Strasbourg number two” – the first being from 2008 – continuing EuroDIG’s tradition of producing consolidated input from the European internet governance community. Notably, EuroDIG invented the concept of “messages” that has since been adopted by the global IGF and other local IGFs.


The input gathering process was facilitated by Mark Harvell, a former UK government member, and the messages will be delivered to Suela Janina, the Albanian permanent representative who serves as European co-facilitator for WSIS+20.


The European community strongly advocates for maintaining and strengthening the multi-stakeholder approach as fundamental to solving digital issues. Schneider emphasized that regional diversity and National and Regional Internet Governance Initiatives (NRIs) play crucial roles in bringing diverse voices into global processes, ensuring that internet governance remains inclusive rather than dominated by any single stakeholder group.


A significant recommendation concerns the Internet Governance Forum’s institutional status. The European community calls for making the IGF a permanent institution rather than continuing the current practice of renewing its mandate every ten years. This change would provide greater stability and enable better long-term planning. Schneider noted that the IGF has evolved beyond its original focus on “critical internet resources and domain names” and now serves a “decision-shaping” rather than “decision-making” role.


Regarding the WSIS+20 review process itself, there was strong emphasis on ensuring it remains open, transparent, and inclusive rather than being conducted behind closed doors among diplomats. The messages also address contemporary challenges including AI governance, human rights protection in digital spaces, and data protection challenges.


## Youth Perspectives and Digital Policy Controversies


Frances Douglas Thomson, representing YouthDIG participants, presented excerpts from the youth messages, focusing on two particularly controversial areas that emerged from their discussions. The YouthDIG programme brought together young participants who met online the month before EuroDIG, then worked together for three days during the conference to develop their positions.


Thomson noted that the level of disagreement in these areas surprised the youth participants themselves, challenging common assumptions about youth consensus on digital issues and highlighting the complexity of generational perspectives on digital governance.


### Digital Literacy Curriculum Debates


The first controversial area concerned digital literacy curriculum development, specifically “who decides on the syllabus for digital literacy and how do these issues get presented,” particularly regarding generative AI in educational contexts. Youth participants were divided between those who view AI as a beneficial educational assistant that can enhance learning experiences and those who worry about creating dependency and eroding critical thinking skills.


This disagreement reflects broader questions about educational governance and how emerging technologies should be framed for young learners, suggesting that educational approaches may need to acknowledge multiple perspectives rather than presenting singular narratives.


### Content Moderation: Balancing Safety and Freedom


The second major area involved content moderation approaches, where youth participants were fundamentally split between strict safety measures and freedom of expression concerns. One group supported comprehensive content moderation including warnings, banning, and flagging of harmful content, while another group viewed such measures as threats to free speech.


However, youth participants found common ground in supporting algorithmic transparency. Their compromise solution focused on increased user understanding of “why is the algorithm feeding me this” with options to access “the other side” or “other sources.” This approach emphasizes user empowerment through information rather than content restriction.


### Social Media Regulation Approaches


When asked by a former youth delegate whether the goal should be to “lessen time spent online for youth” or ensure they see the “right type of stuff,” Frances emphasized focusing on content quality rather than quantity restrictions. She argued that time-based restrictions could limit access to valuable democratic spaces where young people engage with political and social issues, advocating instead for approaches that recognize social media’s potential as venues for civic engagement.


## Business and Technical Stakeholder Perspectives


Hans Seeuws, business operations manager of .eu (operating the top-level domain on behalf of the European Commission), provided insights from the technical and business stakeholder perspective. His participation highlighted the importance of involving infrastructure operators in internet governance discussions, as these stakeholders bring practical implementation experience.


Seeuws expressed particular interest in seeing EuroDIG messages influence decision-makers, especially regarding AI governance, human rights protection, data privacy, and youth involvement in policy processes, emphasizing the need for translating policy discussions into practical implementation.


## Crisis Response and Humanitarian Challenges


Chetan Sharma from the Data Mission Foundation Trust India raised challenging questions about internet governance platforms’ effectiveness in addressing humanitarian crises. He argued that these forums had missed opportunities to build advocacy roles during recent crises, suggesting they could contribute more meaningfully to raising awareness about urgent humanitarian situations.


Frances Douglas Thomson responded by highlighting that EuroDIG does address conflict-related issues through discussions on autonomous weapons, internet weaponization, and misinformation in warfare, bringing together diverse stakeholders to examine how digital technologies are used in conflicts.


Thomas Schneider acknowledged the challenges facing multilateral approaches while suggesting that crises can create opportunities for rethinking governance approaches. He noted that despite government knowledge gaps and engagement difficulties, other stakeholders can continue advancing important policy discussions.


## Democratic Participation and Governance Challenges


An audience member from Geneva raised fundamental questions about fixing internet policy when governments lack technical expertise and multilateral approaches appear insufficient. This prompted reflection on traditional governance limitations in addressing rapidly evolving technological challenges.


Schneider provided context suggesting that current challenges may reflect consequences of previous success, where people may be losing appreciation for the effort required to maintain democratic institutions. He emphasized that maintaining democracy and quality information requires continuous effort, particularly during crises when people may seek simple answers to complex problems.


## Youth Engagement Strategies and Implementation


The session explored practical strategies for expanding youth participation beyond current programs. Sandra Hoferichter emphasized the importance of promoting programs like YouthDIG through schools and social media to increase awareness among young people who might not otherwise encounter internet governance discussions.


A significant challenge identified was providing structural support, including financial assistance for travel and accommodation. Sandra made a specific appeal for support for participants from distant countries such as Armenia, Azerbaijan, and Georgia, noting the expensive flight costs that create barriers to participation.


Frances emphasized the importance of bottom-up rather than top-down approaches to youth engagement, ensuring young people feel taken seriously in policy spaces rather than being treated as token participants. She highlighted how these dialogues “impact when people go away from these dialogues how they perceive this issue.”


## Regional Diversity and Global Coordination


Throughout the discussion, speakers emphasized maintaining regional diversity within global internet governance processes. The European approach recognizes that different regions may have different priorities and approaches while still requiring coordination on global issues.


This perspective acknowledges that internet governance challenges manifest differently across regions due to varying legal frameworks, cultural contexts, and technological infrastructure. Regional initiatives like EuroDIG provide opportunities for developing contextually appropriate approaches while contributing to global policy discussions.


## Conclusions and Future Directions


The networking session demonstrated both achievements and ongoing challenges of multi-stakeholder internet governance approaches. While there was strong consensus on fundamental principles such as inclusivity, transparency, and multi-stakeholder participation, significant disagreements remain on specific policy approaches.


The revelation of disagreements within the youth community challenged assumptions about generational consensus and highlighted the need for more nuanced approaches to youth engagement that acknowledge diverse perspectives rather than treating young people as a monolithic group.


Sandra concluded the session by encouraging participants to visit the joint booth with other NRIs and to reach out for deeper discussions on the topics raised. The session reinforced that internet governance remains a work in progress, requiring continuous adaptation to address emerging challenges while maintaining commitment to inclusive, transparent, and participatory approaches.


The diversity of perspectives represented – from youth participants to business operators to diplomatic representatives – demonstrated both the complexity of achieving consensus and the value of maintaining spaces for constructive dialogue across different stakeholder communities as the internet governance community prepares for the WSIS+20 review process.


Session transcript

Sandra Hoferichter: So, welcome to this session, nice to see you. I appreciate that you interrupt your lunch break in order to have a short networking session about EuroDIG this year. It was the 18th edition of EuroDIG, the 18th in a row; we have been to a different European country every year, but this year we came back to where it all started from, and that made it a very special session. Unfortunately, the Council of Europe will not join us today. They were supposed to be with us here, but the Deputy Secretary General had to go on another mission and had to be accompanied by those who had agreed to be here on stage today. So, unfortunately, but I shall pass on their best greetings to you. You will see over there, and also at our booth, that we have our annual messages, and as I said already, these are messages from Strasbourg number two, because messages from Strasbourg number one were drafted in 2008. Just to remind us all, EuroDIG was the forum that invented the concept of messages, which was later on adopted by the global IGF and by many other local IGFs as well. If you want to have a hard copy, get one after the session over there; if you want to have two, get two or three, whatever. The overarching theme this year was not super creative, because we recycled the overarching theme from last year, which was so good: balancing innovation and regulation. And because we were at the Council of Europe, we added safeguarding human rights. We found that was a good idea, and safeguarding human rights is possibly something that is of great importance in these times.
Of course, the WSIS plus 20 review was one of the focuses in this year’s EuroDIG, as it is here at the IGF, and of course we have prepared our messages on the WSIS plus 20 review, which we would like to briefly share with you, though possibly not going too much into detail, because this is the subject of many other sessions here at the IGF. And then we would also like to highlight one element of EuroDIG which is very close to our heart, which is youth participation. Frances Douglas Thomson, got it right, was one of our YouthDIG participants this year, and she has prepared an excerpt of the YouthDIG messages from this year’s YouthDIG. They met already the month before online and prepared for their participation in EuroDIG and YouthDIG. They had three days of hard work, discussion, but also fun, I hope, at EuroDIG, and then EuroDIG as the conference was of course also included. And then I hope we have about ten minutes, it’s only a 30-minute session, ten minutes of exchange with you on either what our two speakers, Thomas, our president, and Frances, our YouthDIG participant, are telling us, or we have also prepared some guiding questions, let’s put it that way. But if you would like to engage with us on any other topic, we are very flexible here. And without further ado, I hand over to Thomas Schneider, Swiss ambassador, well known to most of you and president of the EuroDIG Support Association, to share the WSIS plus 20 review messages. Thomas, I think you should stand up and I take your seat for the moment. I don’t have a seat, so it’s possibly better.


Thomas Schneider: So we need, you see, we are working on very low resources, so we share almost everything. Okay, so yes, Sandra has already said it. In addition to thematic sessions, of course, the EuroDIG community also cares about the architectural issues of global digital governance. Already before this, the community had gathered voices from Europe, not just the EU or the governments or single stakeholders: for the IGF improvement discussion and for the GDC, EuroDIG collected input. It was facilitated by a former UK government member, Mark Harvell, who is very good at this. And also with him, we spent some time gathering information and views from European stakeholders on the ongoing WSIS plus 20 review process. And of course, the outcomes are absolutely surprising in the sense that, first of all, everyone thinks that the multi-stakeholder approach is fundamental to successfully finding solutions for all kinds of digital issues, and that regional diversity, and also the NRIs, are key to actually bringing as many voices as possible into global processes. So we need global processes, regional processes, national processes that then are able to distill everything so that the variety of voices is able to be heard. Of course, the EuroDIG community thinks that the IGF is a fundamentally important platform for global multi-stakeholder dialogue. It’s not a decision-making platform, but it is a decision-shaping platform. We have had many instances where new issues have come up at the IGF or EuroDIG or elsewhere, and have then been picked up by decision-making bodies like intergovernmental institutions, but also by businesses and so on. And of course, the request, what a surprise, is that we should now move away from renewing the IGF mandate only on a 10-year basis, which always makes it difficult to plan towards the end of the 10-year phase.
So the IGF should become a permanent institution that you don’t have to renew every 10 years, like it is now. And of course, another key element is that the WSIS plus 20 process should not be happening behind closed doors in New York among some diplomats, but should be as open, as transparent, as inclusive, and also as diverse as possible. So this is a very clear message, and things are getting better, but there is still some room for improvement. Another point is that the IGF is not just about critical internet resources and domain names like it used to be, mainly, not only, but that was the focus 20 years ago when it was created. The IGF has moved on, has taken on all the new relevant issues from AI to data governance to other things, and this should continue: the IGF should be the place where new and emerging issues are discussed and where first ideas and first perspectives on common opportunities and challenges are developed. And just to say that, of course, we had an exchange with the European co-facilitator for the WSIS plus 20 process, Suela Janina, the Albanian perm rep in New York, and she is also around here, so if you happen to see her, you may of course also, again, pass on European messages to her. Thank you very much.


Sandra Hoferichter: Thank you very much, and I would suggest we invite one intervention, one reply to what Thomas just said, or a question, whatever you like. I come down with a microphone. You have a microphone? Fantastic. So, please, signal to that lady with a microphone if you would like to speak up. This is your chance now. Yeah, thank you, Thomas. You said that you’re like… Please introduce yourself.


Hans Seeuws: I’ll introduce myself, Sandra. My name is Hans Seeuws. I’m the business operations manager of .eu, so we operate a top-level domain on behalf of the European Commission. You mentioned earlier that you would like messages to be picked up, in a way, by decision-makers and intergovernmental bodies. What, then, is one of the key topics that you would like to see them pick up in the very near future?


Thomas Schneider: From this year’s EuroDIG, it’s not just one, it’s several. One, of course, that was all over the place was AI governance, the human rights aspects, data, how to deal with something like privacy in an environment where it’s very difficult to actually say what is private data and what is not private data. Another big one, but we’ll get to that, is how to involve young people, not just in a dialogue, but also in decision-making. You get the key points on a few pages here in this leaflet, of course also online, so I recommend you read it. I must have forgotten all the other eight or ten important points, so forgive me if I didn’t mention the ones that you care most about. But yeah, it basically ranges from human rights to economic angles. Geopolitical discussions, of course, were also there, so it covered the whole range. Thank you.


Sandra Hoferichter: Thank you. And as I said, only one intervention, we give another chance after the contribution from Frances. Frances, please, go ahead.


Frances Douglas Thomson: Okay. So, yes. As Sandra rightly said, I am a youth digger, which means that I attended both the sessions online and also the sessions in person. We had three days. They were very youth-led. At all points, it was the youth who were saying what mattered to us, what was pertinent to us, and what our stances on these issues were. And I think in this short amount of time, what I really would like to focus on is where the controversies lie. I think it’s very easy as a young person to think that everyone in your network in Europe has similar ideas about things like content moderation, digital literacy, the role of generative AI, and whether these things are good or bad and how we should regulate or not regulate against these issues. And so this is what I am hoping to present today, and then get your advice or opinions or perspectives on these issues. We had five broad messages; I’m going to focus on two today.
The first one is about digital literacy. So, broadly speaking, a lot of digital literacy operates through a syllabus, right? So who decides on the syllabus for digital literacy, and how do these issues get presented? For example, do we see generative AI as something that’s necessarily a positive and wholesale good for society and education? Is it an assistant, or is it actually something that creates complacency within young students and means the skill sets they leave education with are not nearly as good as they used to be? Does it mean that young people aren’t aware of the sources of information like they used to be, and that the kind of skills you got in things like history, about learning where resources and sources come from, are now eroded? So we had different ideas about this, and we didn’t come to any clear agreement on what the syllabus looks like, but just that the syllabus is incredibly important, because the digital space is increasing in how much it’s going to be part of young people’s lives, and you can’t just not be in the digital space. And so you must know how to engage with it in a way that’s sensible and be aware of the benefits and the risks. So yeah, this is the first thing.
The second thing is on content moderation, and this is what surprised me the most. Myself and another sector, or contingent, of the group of youth diggers, when we were coming to discuss in our discussion and debate spaces before the EuroDIG, some of us said content moderation is great: we think that things like warnings, bannings, and flagging of information that is clearly misinformation, is clearly deepfakes, or is clearly inciting violence or harmful to us, should be very strictly dealt with. And then another group said: this is very worrying for us, and this idea of very harsh, stringent content moderation is worrying for freedom of speech, freedom of expression, and the capacity for young people, and all people, to be engaging with information of all different types. So this was very surprising to me. And so what we did decide on was something that could be very good: if we increased how users in places like social media can better understand where what they’re being given comes from. So you could press, maybe, an information button on the corner of your post and see: why is the algorithm feeding me this? So that instead of passively being led into echo chambers on social media, you therefore have more of a conscious awareness as to why you’re being fed the things you’re being fed, and then you can also say: actually, I don’t want to see more of this, or I do want to see more of this, or even, I want to see the other side of this if it’s a political issue; or, if it’s a fact and you doubt it, can I see other sources that are also reporting on this topic? So that’s what we all agreed was good. But yes, we didn’t agree exactly what should be strictly banned and taken off, and whether anything should be strictly taken off these online sites. Is the very point of social media and these online spaces to have anyone saying whatever they want, and is that good if you can have a critical awareness? And then you also get into: is it censorship, and who decides what gets taken off and why, and is there bias there? So yeah, that was another point that I think was very important. The last thing I’d say is that I very much agree with Thomas when he says that the European dialogue is a place where policy issues are informed, right? So decisions are not necessarily made, but they are very much informed. And the kind of content that I saw at EuroDIG was so varied and so informative: things that I wouldn’t have been able to engage with if I hadn’t gone, and topics that I wouldn’t have been able to conceptualize in the way I can now if I hadn’t been able to attend these panel discussions and be taken very seriously in the process. So yeah, I’m very grateful for that, and that’s all I would say. Thank you.


Sandra Hoferichter: Thank you very much, Frances. I think we all agree we could feel the enthusiasm that came out of this year’s YouthDIG cohort, and we are very happy that we can offer this element in our program. I said only one intervention already, but as I said, if you would like to have a longer conversation on what Frances just said, flag us and we will change our plans and continue on this one. Please introduce yourself. Is it on? It’s on.


Audience: Okay, thank you. My name is Thomas. I’m a former youth delegate, or youth digger, from last year, and I thought what you said about how some of the conversations that you end up in there are quite interesting, and how you didn’t realize that there are so many perspectives. You know, there’s always a counterpoint to your point, and that is the whole point, right? You’re in a space where you get to debate these things with other engaged youth. And then, on a tangential point to what you said about the feed for youth online: is the overall policy goal to lessen time spent online for youth, or do you think it is more about making sure that people, or younger people, get to see the right or wrong type of stuff online? Do you want to make a clarification point on that?


Frances Douglas Thomson: I think that’s a very good question So I guess you could rephrase it to be like is it quality or quantity? I think that the there was a session yesterday about social media bands, and this is also something we talked a lot about At the Euro dig and youth dig should you just get children off social media completely so that it’s zero time spent on these apps I think when we think about information pollution and What content you are consuming obviously the quality matters? That’s the well yeah the first thing, but what matters more. I mean I also think there’s a lot of youth voices I don’t necessarily. I think I’d be quite happy completely off social media to be honest and I Have very limited interaction with social media, but when we were talking with the youth voices What was so good about the youth dig was that you also met a lot of people who said that’s a crazy thing for much older Politicians and policymakers to be deciding on my behalf that suddenly I as a 15 or 16 year old Now I’m not allowed into these spaces which are meant to be open accessible and free democratic spaces and even if you know the content Isn’t always good some of the good content is life-changing for some people so of course you have this argument I think that was it’s a very good example of an issue that is yeah So multifaceted and has people who feel very strongly who are young about this issue And so yeah, I think It’s difficult to just say people can’t go on it at all I think time spent on it does matter because it’s an opportunity cost you’re not spending time doing other positive things like being outside


Sandra Hoferichter: Thank you. Any other intervention or question? In the back, please introduce yourself as well.


Audience: I'm from Geneva Macro Labs and the University of Geneva, and I'm also active with the program committee. In Geneva, people say multilateralism is dead, and at the same time we see that governments often don't have a clue. So how can we fix this if we leave internet policy and technology regulation to governments that don't want to talk to each other, nor to us? How can we fix the issues that we have at hand?


Sandra Hoferichter: Thomas, that's possibly something you could take on.


Thomas Schneider: Yes. First of all, I need to say that I've been working for the Swiss government for 22, 23 years now, something like that, and in all these years, whenever we talked with the foreign ministry about Geneva, the UN has been in a crisis. It has never not been in a crisis. But of course, the crisis now is slightly more serious than earlier perceived crises. And then, it may sound silly, but every crisis can also be a chance. It may force you to rethink a few things, to try and do things differently. So crisis, yes, but I wouldn't say multilateralism is dead or crippled or something, and things may change quickly. Also, if a few people that are powerful in the world change, sometimes things can change. But I think it is now also the time, and this is part of the UN80 discussions around the 80th anniversary, to ask: what does not work, and how can we try and make things work? Of course, if member states don't agree, then they will not let the UN or the ITU or UNESCO do anything. And this is why it is even more important that in these institutions you do not just have the governments, but also the other stakeholders: civil society, business, academia, because they want things done. And if their own government is maybe not constructive, for whatever reason, businesses and people are often more constructive than governments that have their own interests. I say this as a strong defender of direct democracy, because we have not made such bad experiences with it in my country. And of course, politicians are not, on average, the most knowledgeable about digital issues, but neither are they on space issues, or on how a car works, and so on and so forth. What is important is that if they are not experts themselves, they have experts that they listen to, and that society discusses these things. I hope that answers at least partially what you asked.


Sandra Hoferichter: Are there any other questions? Fantastic.


Janice Richardson: Yes, Janice Richardson, expert to the Council of Europe, at the Council of Europe we believe that active participation especially of young people is really important. Do you have any great ideas to get a lot more young people involved in the type of work that you’re now involved in?


Frances Douglas Thomson: That’s a very good question. I think Youthdig is an incredible example of youth participation and achieving. I would say that something like Youthdig takes people who might not necessarily have a background in something like internet governance, provides the opportunity for them to be involved because it provides financial assistance for travel, for accommodation, it provides a space for open dialogue that isn’t top-down. And I think that’s important because if you want youth to be able to clearly express what they feel and their opinions and their perspectives, then it can’t be top-down, it can’t be someone dictating to you what you ought to think or how you ought to feel about this or how you should conceive of these ideas. It should be bottom-up basically. And I think as well another way to involve youth is not only through something like Youthdig, but also just making sure that young people feel like they’re taken seriously in these spaces. And that’s what I found so surprising about Eurodig and Youthdig was that at all times our questions were very much always answered, taken seriously, even outside of the conferences and panel discussions, conversations with older experts were encouraged, and it all felt very, very nice. So I think schemes like Youthdig, making sure that it’s feasible in terms of economics, and then also making sure that this gets out there, right? So that people who have gone post it on LinkedIn so other people can press on it, sharing it through schools, just making sure that there’s an awareness there of these kind of schemes. And maybe also encouraging young people and giving them spaces in things like the European Parliament, so that they can understand how the Parliament works, how commissions work, how councils operate, and therefore know that these kind of topics are being talked about and have a seat at the table.


Sandra Hoferichter: Thank you. There’s a gentleman in the back.


Chetan Sharma: Hello. Am I audible? This is Chetan Sharma from Data Mission Foundation Trust India. I'd like to apologize at the outset; some of the questions that I'm asking are very frank. In this age and time, when we're seeing one humanitarian crisis unfolding after another, one crisis just right next door from where we are all assembled, in Ukraine, what has been the role and contribution of the Internet policy dialogues or the governance platforms? I'd like to know. Even some of the mainstream media have played a humongous role in sensitizing communities and raising the deep humanitarian issues. We had this opportunity to build advocacy. We had an opportunity to build opinions, because this is a much closer community. Sadly, it has not happened. And if this was not enough, we seem to be not even receiving proper information, or information that is transparent and accountable. We have said, no, no, this seems to be fake, AI generated, et cetera, et cetera. This is happening even on many of the social media and Internet platforms. I'd like to ask, sir, what is your view on this, and how can we prevent further degradation, which seems to be exacerbating the humanitarian crises we are already in? Thank you.


Sandra Hoferichter: Is that question directed to someone specific, to Frances or to Thomas? To Thomas, or to both?


Thomas Schneider: Yes, it's quite complex, the development that you describe, if I understand it correctly, and the answer is probably also not an easy one. In the end, yeah, we do have tendencies. I was actually talking to Wolfgang Kleinwächter at the IGF in Riyadh. He is an old German friend of mine, and he said that he is losing faith in the German people, in the American people, in democracy, because you elect people that do not take the truth seriously, that say A today and B tomorrow, and so on. To some extent, we may also be a little bit victims of our own success, in particular here in Europe, where we had some decades of peace, and after the fall of the Berlin Wall we maybe thought that now everything would be fine. But then people realized that working in a free, competitive society is also hard. You have to struggle every day to pay your bills, to compete with other people, and so on. And you have lots of information, lots of media, and they all struggle and compete to survive economically. So maybe at some point people get tired of the freedom, because on the Internet you could check everything, where it comes from, but this takes time, and if you work hard and are tired in the evening, you don't want to be a good democrat and spend three hours informing yourself. So to some extent, it is a period where people maybe start forgetting the value of democracy, but also what it means, the work, the effort that it takes to keep a democracy, and to keep free, quality media alive. And then they are sometimes inclined to follow people that tell them: just follow me, you don't have to worry, I will sort everything out for you, now it's your turn. But I'm also a historian, so maybe I hope, and it has happened in the past: when people start believing the simple answers from people that give them simple answers, things normally get worse.
Unfortunately, some people suffer in these periods, but at some point people realize that the simple answers may not be the right ones. And then they will start fighting again for democracy, for freedom of expression, for quality information. So things come and go in waves, and I think gatherings like this are part of that: we just need to keep on fighting. We need to keep on fighting for good governance, for fairness in societies, for public debate. Other people fight for other things. We should together fight for a society which is fair and balanced and free. So the fight will never stop. Those that think that once you have achieved something you can stop fighting will lose it, because there are others that fight against it. So that's a little bit of my reflection.


Sandra Hoferichter: You are not audible, gentleman, without a microphone. And I must also say we are in the red on the timer now. Possibly you don't see the timer, but I see it. We have a couple of seconds. Yes, so we give the last word to you. How are you perceiving the world, and how do we turn it into a good one again? That would be interesting to hear.


Frances Douglas Thomson: Well, I just wanted to say, on the topic you raised about humanitarian crises and even war and conflict: at EuroDIG there were quite a few sessions about conflict, and they were very important for policymakers, the private sector, and also NGOs and governments. We talked about drones and autonomous weapon systems. We had perspectives from the private sector about why they thought these should be used, and we also had NGOs explaining why there are serious ethical concerns about the use of these drones in contemporary warfare. That was very important, because then you have people from all the different stakeholder groups discussing these issues and sharing why they are so concerned, and the reasons for that. Then we had one about taking away a country's Internet and using that as a form of attack: weaponizing the Internet, or the lack of access to it, how that affects people, and how that can create very bad humanitarian crises. And we also had one about, as you say, the proliferation of misinformation about war and humanitarian crises, how we can deal with that, how we can put in place regulation, and different ideas about how people can come to know that an image is fake, that it is not actually a real picture from the scene but something that has been generated. So I think these dialogues are very important for addressing issues in humanitarian crises and in conflict zones. And I don't think this is just a discussion. I think it actually shapes how people perceive these issues when they go away from these dialogues. So I think it does make a real change.


Sandra Hoferichter: Okay, thank you very much. I feel it's time to wrap up. We graciously got two more minutes, of which one is already over. And since I have the mic, I have to say that examples like Frances, and there are many others, are really the ones that make us proud that we are actually going in the right direction in engaging youth. Just to let you know, our call for youth applications received 428 or 429 applications. We really struggled this year to invite them and to pay the travel costs, in particular for youth participants from countries that are a bit further away from Strasbourg, like Armenia, Azerbaijan and Georgia, because the flight tickets are just so expensive. So if there is someone in the audience who has deep pockets, the money is well invested here. It is well invested in the next generation. So please help us to continue bringing up programs, not just EuroDIG, but in particular also YouthDIG. Reach out to us. We have a joint booth with the other NRIs if you would like to have a deeper discussion on this. Thank you very much. Thank you.


S

Sandra Hoferichter

Speech speed

151 words per minute

Speech length

1074 words

Speech time

426 seconds

EuroDIG was the 18th edition held at the Council of Europe in Strasbourg, focusing on balancing innovation and regulation while safeguarding human rights

Explanation

Sandra explains that this year’s EuroDIG returned to where it all started in Strasbourg, making it special. The overarching theme was recycled from the previous year – balancing innovation and regulation – with the addition of safeguarding human rights due to the Council of Europe venue.


Evidence

EuroDIG has been held in different European countries for 18 consecutive years, and this was a return to the original location. The theme was enhanced specifically because they were at the Council of Europe.


Major discussion point

EuroDIG Overview and WSIS+20 Review


Topics

Human rights | Legal and regulatory


Agreed with

– Frances Douglas Thomson
– Janice Richardson

Agreed on

Youth participation requires proper support and inclusive approaches


T

Thomas Schneider

Speech speed

164 words per minute

Speech length

1567 words

Speech time

571 seconds

Multi-stakeholder approach is fundamental for solving digital issues, with regional diversity and NRIs being key to bringing various voices into global processes

Explanation

Thomas argues that successful solutions for digital issues require input from multiple stakeholders, not just governments or single stakeholder groups. He emphasizes that regional and national processes are essential to ensure diverse voices are heard in global governance.


Evidence

The EuroDIG community gathered voices from across Europe, including not just EU governments but all stakeholders. The process was facilitated by Mark Carvell, a former UK government official.


Major discussion point

EuroDIG Overview and WSIS+20 Review


Topics

Legal and regulatory | Development


Agreed with

– Frances Douglas Thomson

Agreed on

Multi-stakeholder approach is essential for digital governance


IGF should become a permanent institution rather than being renewed every 10 years, and should continue addressing emerging issues like AI and data governance

Explanation

Thomas advocates for making the IGF a permanent institution instead of the current 10-year renewal cycle, which creates planning difficulties. He also emphasizes that the IGF has evolved beyond its original focus on critical internet resources to address new issues like AI and data governance.


Evidence

The current 10-year renewal process makes planning difficult. The IGF has moved from mainly focusing on domain names and critical internet resources 20 years ago to now covering AI, data governance, and other emerging issues.


Major discussion point

EuroDIG Overview and WSIS+20 Review


Topics

Legal and regulatory | Infrastructure


Agreed with

– Frances Douglas Thomson

Agreed on

IGF and similar platforms serve as important decision-shaping forums


WSIS+20 process should be open, transparent, inclusive and diverse rather than happening behind closed doors among diplomats

Explanation

Thomas criticizes the current WSIS+20 process for being too closed and limited to diplomatic circles in New York. He argues for a more open and inclusive approach that involves broader stakeholder participation.


Evidence

There was an exchange with the European co-facilitator for the WSIS+20 process, Suela Janina, the Albanian permanent representative in New York, who was present at the event.


Major discussion point

EuroDIG Overview and WSIS+20 Review


Topics

Legal and regulatory | Development


Despite challenges with multilateralism and government knowledge gaps, crises can create opportunities for rethinking governance approaches

Explanation

Thomas acknowledges that multilateral institutions like the UN have always been perceived as being in crisis, but argues that current crises can force rethinking and trying different approaches. He suggests that when governments are not constructive, other stakeholders like businesses and civil society can be more constructive.


Evidence

Thomas has worked for the Swiss government for 22-23 years and notes that the UN has always been described as being in crisis. He references direct democracy experiences in Switzerland as an example of alternative approaches.


Major discussion point

Internet Governance and Crisis Response


Topics

Legal and regulatory | Development


Maintaining democracy and quality information requires continuous effort and fighting, as simple answers from authoritarian figures often lead to worse outcomes

Explanation

Thomas argues that democracy requires constant work and effort from citizens, including spending time to verify information sources. He warns that when people get tired of this effort, they may follow leaders who promise simple solutions, which historically leads to worse outcomes.


Evidence

He references conversations with Wolfgang Kleinwächter about losing faith in democratic processes when people elect leaders who don’t take truth seriously. He also draws on his background as a historian to note that simple answers typically make things worse.


Major discussion point

Internet Governance and Crisis Response


Topics

Human rights | Sociocultural


F

Frances Douglas Thomson

Speech speed

190 words per minute

Speech length

1819 words

Speech time

574 seconds

YouthDIG provides youth-led discussions where young people determine what matters to them and express their stances on digital issues

Explanation

Frances emphasizes that YouthDIG sessions were completely youth-led, with young people deciding what issues were important to them and determining their own positions. She highlights that this approach revealed controversies and different perspectives among youth that might not be apparent in more homogeneous networks.


Evidence

YouthDIG participants met online a month before, had three days of intensive discussions, and the sessions were structured so that ‘at all points, it was the youth who were saying what mattered to us, what was pertinent to us, and what were our stances on these issues.’


Major discussion point

Youth Participation and Digital Literacy


Topics

Sociocultural | Development


Digital literacy curriculum is crucial as digital spaces increasingly become part of young people’s lives, but there’s disagreement on how to present issues like generative AI

Explanation

Frances argues that digital literacy education is essential because young people cannot avoid participating in digital spaces. However, she notes significant disagreement about how to frame technologies like generative AI – whether as positive assistants or as tools that create complacency and erode critical thinking skills.


Evidence

The youth group had different ideas about whether generative AI helps education or creates complacency and reduces skills like source verification that students traditionally learned in subjects like history. They couldn’t reach clear agreement on curriculum content.


Major discussion point

Youth Participation and Digital Literacy


Topics

Sociocultural | Human rights


Disagreed with

Disagreed on

Generative AI in education – beneficial tool vs. harmful dependency


There are divided opinions among youth on content moderation – some favor strict measures against misinformation and harmful content, while others worry about freedom of speech implications

Explanation

Frances describes being surprised by the division among youth participants regarding content moderation. One group supported strict measures against misinformation, deepfakes, and violent content, while another group expressed concerns about censorship and freedom of expression.


Evidence

Frances personally supported strict content moderation for clearly harmful content, but another contingent worried about impacts on freedom of speech and expression. This division was unexpected to her.


Major discussion point

Content Moderation and Information Quality


Topics

Human rights | Sociocultural


Disagreed with

Disagreed on

Content moderation approach – strict vs. freedom-focused


Users should have better understanding of algorithmic content delivery through transparency features that explain why certain content is being shown

Explanation

Frances proposes that social media platforms should provide transparency features that allow users to understand why algorithms are showing them specific content. This would enable more conscious engagement rather than passive consumption and help users actively choose to see different perspectives.


Evidence

She suggests features like an information button on posts that explains algorithmic choices, allowing users to request more or less of certain content types, or to see other sources reporting on the same topic.


Major discussion point

Content Moderation and Information Quality


Topics

Human rights | Legal and regulatory


The focus should be on content quality rather than just time spent online, as complete social media bans may restrict access to valuable democratic spaces

Explanation

Frances argues that while time spent online matters due to opportunity costs, completely banning young people from social media is problematic because these platforms can be valuable democratic spaces with life-changing content. She emphasizes that the quality of content consumed is more important than quantity of time spent.


Evidence

She references discussions about social media bans and notes that some youth felt it was inappropriate for older politicians to decide on their behalf to restrict access to spaces meant to be ‘open accessible and free democratic spaces.’


Major discussion point

Content Moderation and Information Quality


Topics

Human rights | Sociocultural


Agreed with

– Thomas Schneider

Agreed on

IGF and similar platforms serve as important decision-shaping forums


Disagreed with

Disagreed on

Social media regulation approach – time limits vs. quality focus


Youth participation requires financial assistance, bottom-up approaches, and ensuring young people feel taken seriously in policy spaces

Explanation

Frances outlines key requirements for effective youth engagement: providing financial support for travel and accommodation, ensuring discussions are bottom-up rather than top-down, and creating environments where young people feel their contributions are valued and taken seriously.


Evidence

YouthDIG provided financial assistance and created spaces for open dialogue. Frances noted that throughout EuroDIG, youth questions were always answered and taken seriously, and conversations with older experts were encouraged.


Major discussion point

Youth Participation and Digital Literacy


Topics

Development | Sociocultural


Agreed with

– Janice Richardson
– Sandra Hoferichter

Agreed on

Youth participation requires proper support and inclusive approaches


EuroDIG addresses conflict-related issues including autonomous weapons, drones, internet weaponization, and misinformation in warfare, bringing together diverse stakeholders

Explanation

Frances responds to concerns about internet governance platforms’ role in humanitarian crises by highlighting specific EuroDIG sessions that addressed warfare and conflict issues. She emphasizes that these discussions bring together different stakeholders with varying perspectives on ethical and practical concerns.


Evidence

EuroDIG had sessions on drones and autonomous weapon systems with private sector perspectives on usage and NGO concerns about ethics, discussions about weaponizing internet access, and sessions on misinformation about war and humanitarian crises including fake imagery.


Major discussion point

Internet Governance and Crisis Response


Topics

Cybersecurity | Human rights


Agreed with

– Thomas Schneider

Agreed on

Multi-stakeholder approach is essential for digital governance


Disagreed with

– Chetan Sharma

Disagreed on

Role of internet governance in humanitarian crises


J

Janice Richardson

Speech speed

136 words per minute

Speech length

50 words

Speech time

22 seconds

Active participation of young people is important, and programs like YouthDIG should be promoted through schools and social media to increase awareness

Explanation

Janice, representing the Council of Europe, asks for ideas to increase youth involvement in internet governance work. This reflects the Council of Europe’s belief in the importance of active youth participation in policy discussions.


Evidence

She identifies herself as an expert to the Council of Europe and states that ‘at the Council of Europe we believe that active participation especially of young people is really important.’


Major discussion point

Youth Participation and Digital Literacy


Topics

Development | Sociocultural


Agreed with

– Frances Douglas Thomson
– Sandra Hoferichter

Agreed on

Youth participation requires proper support and inclusive approaches


H

Hans Seeuws

Speech speed

152 words per minute

Speech length

71 words

Speech time

28 seconds

There’s interest in seeing key topics from EuroDIG messages picked up by decision-makers, particularly around AI governance, human rights, data privacy, and youth involvement

Explanation

Hans asks about which key topics from EuroDIG should be prioritized for uptake by decision-makers and intergovernmental bodies. This reflects the practical interest in translating dialogue outcomes into policy action.


Evidence

Thomas responds that key topics include AI governance, human rights aspects of data, privacy challenges, and youth involvement in decision-making, noting these are covered in the EuroDIG messages leaflet.


Major discussion point

Technical and Operational Perspectives


Topics

Legal and regulatory | Human rights


The .eu domain operation represents business stakeholder engagement in European internet governance discussions

Explanation

Hans introduces himself as the business operations manager of .eu, operating the top-level domain on behalf of the European Commission. This represents the technical and business stakeholder perspective in European internet governance.


Evidence

He specifically identifies his role as managing .eu domain operations for the European Commission.


Major discussion point

Technical and Operational Perspectives


Topics

Infrastructure | Economic


C

Chetan Sharma

Speech speed

131 words per minute

Speech length

221 words

Speech time

101 seconds

Internet governance platforms should play a stronger advocacy role during humanitarian crises, similar to mainstream media’s contribution to raising awareness

Explanation

Chetan criticizes internet governance platforms for not playing a sufficient advocacy role during humanitarian crises like the conflict in Ukraine. He argues that these platforms have missed opportunities to build advocacy and shape opinions, unlike mainstream media which has been more effective in raising awareness.


Evidence

He points to the ongoing crisis in Ukraine and notes that mainstream media has played a ‘humongous role in sensitizing communities and raising up the deep humanitarian issues’ while internet governance communities have not seized similar opportunities.


Major discussion point

Internet Governance and Crisis Response


Topics

Human rights | Sociocultural


Disagreed with

– Frances Douglas Thomson

Disagreed on

Role of internet governance in humanitarian crises


A

Audience

Speech speed

159 words per minute

Speech length

227 words

Speech time

85 seconds

Youth participation in internet governance provides valuable perspectives on policy issues and helps young people understand different viewpoints through debate

Explanation

A former youth delegate acknowledges the value of youth participation in internet governance, noting how engaging with other youth in debate helps participants realize there are multiple perspectives on issues. He asks about whether policy goals should focus on reducing time spent online or improving content quality.


Evidence

The speaker identifies as a former youth delegate or youth digger from the previous year and references conversations about counterpoints and different perspectives in youth engagement spaces.


Major discussion point

Youth Participation and Digital Literacy


Topics

Development | Sociocultural


Multilateralism faces serious challenges with governments lacking knowledge and unwillingness to engage, requiring alternative approaches to fix internet policy and technology regulation issues

Explanation

An audience member from Geneva expresses concern that multilateralism is perceived as dead while governments often lack understanding of technology issues and are unwilling to engage with each other or stakeholders. He questions how to address internet policy and technology regulation under these circumstances.


Evidence

The speaker identifies as being from Geneva Macro Labs and the University of Geneva and as active with the program committee, referencing the common saying that multilateralism is dead.


Major discussion point

Internet Governance and Crisis Response


Topics

Legal and regulatory | Development


Internet governance platforms should play a stronger advocacy role during humanitarian crises, similar to mainstream media’s contribution to raising awareness

Explanation

An audience member criticizes internet governance platforms for not playing a sufficient advocacy role during humanitarian crises like the conflict in Ukraine. He argues that these platforms have missed opportunities to build advocacy and shape opinions, unlike mainstream media which has been more effective in raising awareness.


Evidence

He points to the ongoing crisis in Ukraine and notes that mainstream media has played a ‘humongous role in sensitizing communities and raising up the deep humanitarian issues’ while internet governance communities have not seized similar opportunities.


Major discussion point

Internet Governance and Crisis Response


Topics

Human rights | Sociocultural


Disagreed with

– Chetan Sharma
– Frances Douglas Thomson

Disagreed on

Role of internet governance in humanitarian crises


Information transparency and accountability are being compromised by AI-generated content and misinformation on social media platforms, exacerbating humanitarian crises

Explanation

An audience member expresses concern about the degradation of information quality due to AI-generated content and misinformation on internet platforms. He argues this is making humanitarian crises worse by preventing people from receiving proper, transparent, and accountable information.


Evidence

He references how content is being labeled as fake or AI-generated on social media platforms, and connects this to the broader issue of information pollution during humanitarian crises.


Major discussion point

Content Moderation and Information Quality


Topics

Human rights | Sociocultural


Active participation of young people in internet governance requires better promotion and outreach through various channels including schools and social media

Explanation

An audience member representing the Council of Europe asks for ideas to increase youth involvement in internet governance work. This reflects institutional interest in expanding youth participation beyond current programs like YouthDIG.


Evidence

She identifies herself as an expert to the Council of Europe and states that ‘at the Council of Europe we believe that active participation especially of young people is really important.’


Major discussion point

Youth Participation and Digital Literacy


Topics

Development | Sociocultural


Agreements

Agreement points

Multi-stakeholder approach is essential for digital governance

Speakers

– Thomas Schneider
– Frances Douglas Thomson

Arguments

Multi-stakeholder approach is fundamental for solving digital issues, with regional diversity and NRIs being key to bringing various voices into global processes


EuroDIG addresses conflict-related issues including autonomous weapons, drones, internet weaponization, and misinformation in warfare, bringing together diverse stakeholders


Summary

Both speakers emphasize the importance of involving multiple stakeholders (governments, civil society, business, academia) in digital governance discussions rather than limiting participation to single stakeholder groups


Topics

Legal and regulatory | Development | Human rights


Youth participation requires proper support and inclusive approaches

Speakers

– Frances Douglas Thomson
– Janice Richardson
– Sandra Hoferichter

Arguments

Youth participation requires financial assistance, bottom-up approaches, and ensuring young people feel taken seriously in policy spaces


Active participation of young people is important, and programs like YouthDIG should be promoted through schools and social media to increase awareness


EuroDIG was the 18th edition held at the Council of Europe in Strasbourg, focusing on balancing innovation and regulation while safeguarding human rights


Summary

All speakers agree that meaningful youth engagement in internet governance requires structural support including financial assistance, proper promotion, and creating environments where young people are taken seriously


Topics

Development | Sociocultural


IGF and similar platforms serve as important decision-shaping forums

Speakers

– Thomas Schneider
– Frances Douglas Thomson

Arguments

IGF should become a permanent institution rather than being renewed every 10 years, and should continue addressing emerging issues like AI and data governance


The focus should be on content quality rather than just time spent online, as complete social media bans may restrict access to valuable democratic spaces


Summary

Both speakers view internet governance platforms as valuable spaces for shaping policy discussions and decisions, even if they don’t make formal decisions themselves


Topics

Legal and regulatory | Human rights


Similar viewpoints

Both speakers advocate for inclusive, transparent, and participatory approaches to internet governance that go beyond traditional diplomatic or top-down processes

Speakers

– Thomas Schneider
– Frances Douglas Thomson

Arguments

WSIS+20 process should be open, transparent, inclusive and diverse rather than happening behind closed doors among diplomats


YouthDIG provides youth-led discussions where young people determine what matters to them and express their stances on digital issues


Topics

Legal and regulatory | Development | Sociocultural


Both speakers see internet governance platforms as spaces that can constructively address contemporary challenges including conflicts and crises through multi-stakeholder dialogue

Speakers

– Thomas Schneider
– Frances Douglas Thomson

Arguments

Despite challenges with multilateralism and government knowledge gaps, crises can create opportunities for rethinking governance approaches


EuroDIG addresses conflict-related issues including autonomous weapons, drones, internet weaponization, and misinformation in warfare, bringing together diverse stakeholders


Topics

Legal and regulatory | Human rights | Cybersecurity


Unexpected consensus

Content moderation complexity among youth

Speakers

– Frances Douglas Thomson

Arguments

There are divided opinions among youth on content moderation – some favor strict measures against misinformation and harmful content, while others worry about freedom of speech implications


Explanation

Frances expressed surprise at discovering significant disagreement among youth participants about content moderation, challenging assumptions that young Europeans would have similar views on digital rights issues. This unexpected division led to consensus on algorithmic transparency as a compromise solution


Topics

Human rights | Sociocultural


Democratic resilience requires continuous effort

Speakers

– Thomas Schneider

Arguments

Maintaining democracy and quality information requires continuous effort and fighting, as simple answers from authoritarian figures often lead to worse outcomes


Explanation

Thomas’s historical perspective that democracy and quality information require constant vigilance and effort represents an unexpected consensus point about the fragility of democratic institutions in the digital age


Topics

Human rights | Sociocultural


Overall assessment

Summary

The speakers demonstrated strong consensus on the importance of multi-stakeholder approaches, meaningful youth participation, and the value of inclusive internet governance platforms. There was also agreement on the need for transparency and openness in governance processes, and recognition that digital governance platforms can address contemporary challenges including conflicts and humanitarian crises.


Consensus level

High level of consensus on fundamental principles of internet governance, with particular strength around inclusivity, transparency, and multi-stakeholder participation. The implications suggest a mature understanding of internet governance challenges and broad agreement on democratic approaches to addressing them, though specific implementation details may still require further discussion.


Differences

Different viewpoints

Content moderation approach – strict vs. freedom-focused

Speakers

– Frances Douglas Thomson

Arguments

There are divided opinions among youth on content moderation – some favor strict measures against misinformation and harmful content, while others worry about freedom of speech implications


Summary

Within the youth community, there’s a fundamental split between those who support strict content moderation (warnings, banning, flagging of misinformation, deepfakes, and violent content) and those who view such measures as threats to freedom of speech and expression


Topics

Human rights | Sociocultural


Generative AI in education – beneficial tool vs. harmful dependency

Speakers

– Frances Douglas Thomson

Arguments

Digital literacy curriculum is crucial as digital spaces increasingly become part of young people’s lives, but there’s disagreement on how to present issues like generative AI


Summary

Youth participants disagreed on whether generative AI should be presented as a positive educational assistant or as something that creates complacency and erodes critical thinking skills like source verification


Topics

Sociocultural | Human rights


Social media regulation approach – time limits vs. quality focus

Speakers

– Frances Douglas Thomson

Arguments

The focus should be on content quality rather than just time spent online, as complete social media bans may restrict access to valuable democratic spaces


Summary

There’s disagreement between those who support complete social media bans for youth (zero time approach) and those who argue this restricts access to valuable democratic spaces, with the latter favoring quality over quantity approaches


Topics

Human rights | Sociocultural


Role of internet governance in humanitarian crises

Speakers

– Chetan Sharma
– Frances Douglas Thomson

Arguments

Internet governance platforms should play a stronger advocacy role during humanitarian crises, similar to mainstream media’s contribution to raising awareness


EuroDIG addresses conflict-related issues including autonomous weapons, drones, internet weaponization, and misinformation in warfare, bringing together diverse stakeholders


Summary

Chetan argues that internet governance platforms have failed to play adequate advocacy roles during crises like Ukraine, while Frances counters that EuroDIG does address conflict issues through stakeholder discussions on weapons, internet weaponization, and misinformation


Topics

Human rights | Sociocultural


Unexpected differences

Youth division on content moderation

Speakers

– Frances Douglas Thomson

Arguments

There are divided opinions among youth on content moderation – some favor strict measures against misinformation and harmful content, while others worry about freedom of speech implications


Explanation

Frances explicitly states this was ‘what surprised me the most’ – she expected youth to have similar views on content moderation but discovered fundamental disagreements between those favoring strict measures and those prioritizing freedom of expression


Topics

Human rights | Sociocultural


Diverse youth perspectives on digital issues

Speakers

– Frances Douglas Thomson

Arguments

YouthDIG provides youth-led discussions where young people determine what matters to them and express their stances on digital issues


Explanation

Frances found it surprising that youth in her European network had such different ideas about fundamental issues like content moderation and digital literacy, challenging assumptions about generational consensus on digital issues


Topics

Sociocultural | Development


Overall assessment

Summary

The main areas of disagreement center around content moderation approaches, the role of AI in education, social media regulation strategies, and the effectiveness of internet governance in addressing humanitarian crises. Most significantly, there are fundamental divisions within the youth community itself on key digital rights issues.


Disagreement level

Moderate to high disagreement with significant implications. The disagreements reveal deep philosophical divisions about balancing freedom versus safety online, the role of technology in education, and the responsibilities of governance platforms during crises. The unexpected youth divisions suggest that generational assumptions about digital consensus may be incorrect, requiring more nuanced policy approaches that account for diverse youth perspectives rather than treating them as a monolithic group.




Takeaways

Key takeaways

EuroDIG successfully demonstrated the value of multi-stakeholder dialogue in addressing digital governance issues, with the 18th edition focusing on balancing innovation and regulation while safeguarding human rights


The IGF should transition from 10-year mandate renewals to becoming a permanent institution to ensure continuity and better long-term planning


Youth participation in digital governance is crucial and effective when properly supported through programs like YouthDIG, which received 428–429 applications, demonstrating high interest


Digital literacy curriculum development is critical as digital spaces become increasingly central to young people’s lives, though approaches to teaching about technologies like AI remain contested


Content moderation presents a fundamental tension between protecting users from harmful content and preserving freedom of expression, with transparency in algorithmic content delivery emerging as a potential middle ground


Internet governance forums like EuroDIG serve as important ‘decision-shaping’ platforms that influence policy even when they don’t make formal decisions


The WSIS+20 review process needs to be more open, transparent, and inclusive rather than conducted behind closed doors among diplomats


Maintaining democratic governance and quality information requires continuous effort and vigilance, especially during periods of crisis and political polarization


Resolutions and action items

EuroDIG community formally recommends that IGF become a permanent institution rather than requiring 10-year mandate renewals


Messages from EuroDIG 2024 have been compiled and made available both in hard copy and online for stakeholders to reference and act upon


Continued promotion of YouthDIG program through schools, social media, and professional networks to increase youth awareness and participation


Seek additional funding sources to support travel costs for youth participants from distant countries like Armenia, Azerbaijan, and Georgia


Unresolved issues

How to determine appropriate digital literacy curriculum content, particularly regarding the presentation of generative AI as beneficial versus potentially harmful


Where to draw the line on content moderation between protecting users and preserving freedom of expression


How to effectively engage governments that lack technical expertise in digital issues while maintaining multi-stakeholder participation


How internet governance platforms can play a more effective advocacy role during humanitarian crises and conflicts


How to address the broader crisis in multilateralism and democratic engagement that affects digital governance


Who should decide what constitutes misinformation or harmful content, and how to avoid bias in content moderation decisions


How to balance quality versus quantity of online engagement for youth


Suggested compromises

Implement transparency features that allow users to understand why algorithmic systems show them specific content, enabling more conscious engagement rather than passive consumption


Focus on improving content quality and user awareness rather than implementing blanket restrictions on social media access for youth


Combine strict content moderation for clearly harmful content (violence, misinformation) with enhanced user tools for understanding and controlling their information environment


Maintain both global and regional governance processes to ensure diverse voices can be heard while still enabling effective coordination


Thought provoking comments

Yeah, I think it’s very easy as a young person to think that everyone in your network in Europe has similar ideas about things like content moderation, digital literacy… And so this is what I am hoping to present today and then get your advice or opinions or perspectives on these issues.

Speaker

Frances Douglas Thomson


Reason

This comment is insightful because it challenges the assumption of generational homogeneity on digital issues. Frances reveals that even among European youth, there are significant disagreements on fundamental digital governance questions, particularly around content moderation and AI regulation. This breaks the stereotype that young people have uniform views on technology.


Impact

This comment shifted the discussion from presenting consensus youth positions to exploring nuanced disagreements within the youth community. It set up the framework for discussing controversial topics and invited audience engagement on complex issues rather than simple advocacy.


Some of us said content moderation is great… and then another group said this is very worrying for us, and this idea of very harsh, stringent content moderation is worrying for freedom of speech, freedom of expression

Speaker

Frances Douglas Thomson


Reason

This reveals a fundamental tension within youth perspectives on one of the most critical digital governance issues. It’s thought-provoking because it shows that the traditional left-right political divide on content moderation exists even among digital natives, challenging assumptions about youth consensus on platform governance.


Impact

This comment deepened the conversation by introducing genuine complexity and controversy. It moved the discussion away from simple policy recommendations toward exploring the underlying tensions between safety and freedom that define modern digital governance debates.


In this age and time when we’re seeing one humanitarian crisis unfolding after another… the Internet policy dialogues or the governance platforms have, what has been their role and contribution?… We had this opportunity to build an advocacy… Sadly, it has not happened.

Speaker

Chetan Sharma


Reason

This is a provocative challenge to the entire internet governance community, questioning whether these forums are actually making a meaningful difference in addressing real-world crises. It forces participants to confront the potential gap between policy discussions and tangible humanitarian impact.


Impact

This comment created a pivotal moment in the discussion, forcing both Thomas and Frances to defend and contextualize the value of internet governance forums. It shifted the conversation from internal process discussions to fundamental questions about relevance and impact, leading to more substantive responses about how these dialogues address conflict and humanitarian issues.


To some extent, we may be a little bit also victims of a success in particular here in Europe where we had some decades of peace… people maybe start forgetting the value of democracy, but also what it means, the work, the effort that it needs to keep a democracy

Speaker

Thomas Schneider


Reason

This comment provides a profound historical and sociological analysis of current democratic backsliding, connecting it to complacency born from success. It’s insightful because it reframes current challenges not as failures but as consequences of previous achievements, offering a nuanced perspective on democratic fragility.


Impact

This response elevated the discussion to a higher analytical level, moving from immediate policy concerns to broader questions about democratic sustainability and civic engagement. It provided a framework for understanding current challenges that went beyond simple technological solutions.


Is the overall policy goal to lessen time spent online for youth, or do you think it is more about making sure that younger people get to see the right or wrong type of stuff online?

Speaker

Thomas (former youth delegate)


Reason

This question cuts to the heart of youth digital policy by forcing a choice between two fundamentally different approaches: time-based restrictions versus content-based curation. It’s thought-provoking because it reveals the underlying philosophical divide in how we approach youth protection online.


Impact

This question prompted Frances to articulate the complexity of youth perspectives on social media bans and parental controls, leading to a more nuanced discussion about agency, access, and the diversity of youth opinions on their own digital rights.


Overall assessment

These key comments transformed what could have been a routine policy presentation into a substantive exploration of fundamental tensions in digital governance. Frances’s revelation of youth disagreement on core issues challenged assumptions and invited deeper engagement. Chetan’s provocative questioning forced participants to justify the relevance of their work, while Thomas’s historical analysis provided broader context for understanding current challenges. The former youth delegate’s question about online time versus content quality highlighted practical policy dilemmas. Together, these comments elevated the discussion from procedural updates to philosophical debates about democracy, youth agency, content governance, and the real-world impact of policy dialogues, creating a more intellectually rigorous and practically relevant conversation.


Follow-up questions

What is one of the key topics that you would like them to see picked up in the very near future?

Speaker

Hans Seeuws


Explanation

This question seeks to identify priority areas from EuroDIG messages that should be addressed by decision-makers and intergovernmental bodies


Who decides on the syllabus for digital literacy and how do these issues get presented?

Speaker

Frances Douglas Thomson


Explanation

This addresses the governance and content of digital literacy education, particularly regarding how emerging technologies like AI are framed in educational contexts


Is the overall policy goal to lessen time spent online for youth or do you think it is more about making sure that people or younger people get to see the right or wrong type of stuff online?

Speaker

Thomas (former youth delegate)


Explanation

This question explores whether youth online safety policies should focus on quantity (time limits) versus quality (content curation) of online engagement


How can we fix internet policy and technology regulation when governments don’t have a clue and multilateralism appears to be failing?

Speaker

Audience member from Geneva Macro labs University


Explanation

This addresses the fundamental challenge of effective governance in digital policy when traditional multilateral approaches seem inadequate


Do you have any great ideas to get a lot more young people involved in the type of work that you’re now involved in?

Speaker

Janice Richardson


Explanation

This seeks practical strategies for expanding youth participation in internet governance and policy discussions beyond current programs


What has been the role and contribution of Internet policy dialogues and governance platforms in addressing humanitarian crises, and how can we prevent further degradation of information quality that exacerbates these crises?

Speaker

Chetan Sharma


Explanation

This questions the effectiveness of internet governance forums in addressing real-world humanitarian issues and the challenge of misinformation during crises


How to involve young people not just in dialogue, but also in decision-making processes?

Speaker

Thomas Schneider


Explanation

This addresses the gap between youth participation in discussions versus actual influence in policy decisions


How to deal with privacy in an environment where it’s very difficult to say what is private data versus non-private data?

Speaker

Thomas Schneider


Explanation

This highlights the technical and policy challenges of defining and protecting privacy in the context of AI and big data


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Launch / Award Event #159 Book Launch Netmundial+10 Statement in the 6 UN Languages


Session at a glance

Summary

This discussion centered on the launch of “NetMundial Plus 10,” a book containing multistakeholder guidelines that has been translated into the six official UN languages plus Portuguese. Rafael Evangelista from Brazil’s Internet Steering Committee opened the session by explaining that NetMundial Plus 10 represents a continuation of the original NetMundial process from 2014, resulting in the São Paulo Multistakeholder Guidelines designed to strengthen internet governance and digital policy processes.


Multiple speakers emphasized that NetMundial serves as a shining example of how the multistakeholder model can deliver concrete results in a timely manner, contrasting it with other forums that may deliberate for years without reaching consensus. Pierre Bonis from AFNIC highlighted how NetMundial addressed critical moments in internet governance history, including the post-Snowden era and the recent Global Digital Compact discussions. The collaborative editing process was particularly noteworthy, with participants describing intense online sessions where committee members from around the world simultaneously edited documents in real time.


The translation process itself proved to be more than mere linguistic conversion. Speakers noted that translating these governance concepts into different languages required deep understanding of the field and collaboration with community experts rather than just professional translators. Jennifer Chung discussed the complexity of Chinese translation, involving coordination between different script systems and regional terminology variations. Valeria Betancourt emphasized that translation carries political dimensions and increases community appropriation of the guidelines by making them accessible in native languages.


Several participants stressed the practical application of these guidelines, particularly in the context of the upcoming WSIS Plus 20 review process. The book launch represents not just a publication milestone but a call to action for implementing these multistakeholder principles across various governance levels, from local to global contexts.


Keypoints

## Major Discussion Points:


– **Book Launch and Multilingual Publication**: The primary focus was launching the “NetMundial Plus 10” book, which contains multistakeholder guidelines and has been translated into the six official UN languages plus Portuguese, making it accessible to a global audience.


– **Success of the Multistakeholder Model**: Participants emphasized how NetMundial and NetMundial Plus 10 serve as exemplary cases of effective multistakeholder governance that can deliver concrete results in a timely manner, countering criticisms that such processes are slow or ineffective.


– **Translation as Political and Cultural Process**: Speakers highlighted that translation goes beyond literal conversion, involving cultural adaptation and political considerations. The process required collaboration within language communities (e.g., different Chinese dialects and scripts) and helps increase local appropriation of governance principles.


– **Practical Implementation and Future Applications**: Discussion centered on putting the São Paulo Multistakeholder Guidelines into practice across various levels (local, national, regional, global) and using them to inform ongoing processes like the WSIS+20 review and national IGF initiatives.


– **Collaborative Process and Community Engagement**: Participants reflected on the intensive collaborative editing process that created the guidelines and emphasized the ongoing nature of the work, including invitations for additional translations and continued community involvement.


## Overall Purpose:


The discussion aimed to officially launch the multilingual publication of the NetMundial Plus 10 statement and São Paulo Multistakeholder Guidelines, celebrate the collaborative achievement, and encourage broader adoption and implementation of these governance principles across different linguistic and cultural contexts.


## Overall Tone:


The tone was consistently celebratory, appreciative, and forward-looking throughout the session. Participants expressed genuine enthusiasm and pride in the collaborative achievement, with frequent expressions of gratitude and mutual recognition. The atmosphere remained positive and constructive, with speakers building on each other’s comments and emphasizing the collective nature of the work. There was no notable shift in tone – it maintained an upbeat, collegial spirit from beginning to end.


Speakers

**Speakers from the provided list:**


– **Rafael de Almeida Evangelista** – Board member of CGI.br (Brazilian Internet Steering Committee), session moderator


– **Pierre Bonis** – Representative from AFNIC (French registry)


**Jennifer Chung** – Works for .asia (registry operator for the generic top-level domain .asia)


– **Valeria Betancourt** – From Association of Progressive Communication


– **Renata Mielli** – Member of CGI.br, involved in NetMundial Plus 10 organization


– **Everton Rodrigues** – Part of NIC.br team involved in the NetMundial Plus 10 process


– **Jorge Cancio** – From the Swiss government, involved in translation process


**Akinori Maemura** – Member of the HLAC (High-Level Advisory Committee) for NetMundial Plus 10; also served on the NetMundial 2014 EMC (Executive Multistakeholder Committee)


– **Vinicius W. O. Santos** – Part of the secretariat/NIC.br team for NetMundial Plus 10


– **Audience** – Identified as James Ndolufuye from Africa


**Additional speakers:**


None – all speakers who spoke during the session were included in the provided speakers names list.


Full session report

# NetMundial Plus 10: Multilingual Book Launch and Implementation Discussion


## Executive Summary


This session focused on the launch of “NetMundial Plus 10,” a publication containing multistakeholder guidelines translated into the six official UN languages plus Portuguese. Moderated by Rafael de Almeida Evangelista from Brazil’s Internet Steering Committee, the discussion brought together international stakeholders to celebrate this achievement and examine its implications for internet governance.


Participants emphasized NetMundial’s effectiveness as a multistakeholder process that delivers concrete results efficiently. The conversation highlighted how translation involves more than linguistic conversion, requiring cultural adaptation and community collaboration. Speakers stressed the importance of implementing these guidelines in practice rather than treating them as static documents.


## Background and Context


Rafael Evangelista explained that NetMundial Plus 10 continues the original NetMundial process from 2014, which produced the São Paulo Multistakeholder Guidelines for strengthening internet governance and digital policy processes. The publication launch occurs one year after NetMundial Plus 10, with printed copies available for free and digital versions accessible at cgi.br.


The timing coincides with ongoing discussions around the Digital Global Compact and preparations for the WSIS Plus 20 review process. Pierre Bonis from AFNIC noted the post-Snowden context that influenced NetMundial’s development.


## Multistakeholder Model Effectiveness


Speakers consistently praised NetMundial as proof that multistakeholder approaches work. Pierre Bonis stated: “Sometimes people look at our model, at the multi-stakeholder model, saying, okay, that’s nice, but it doesn’t deliver anything. And NetMundial and NetMundial Plus10… you see that in four days, you get a consensus that is a solid one, that can last four years, as in other fora, you may talk for 10 years without having anything like that.”


Jennifer Chung from .asia described NetMundial as “a shining example that multi-stakeholder process works and produces tangible results,” emphasizing how the guidelines provide practical tools for improving multilateral processes. James Ndolufuye from Africa noted that “NetMundial process has become the conscience of information society regarding multi-stakeholder engagement.”


## Translation as Cultural Process


The discussion revealed sophisticated understanding of translation challenges. Valeria Betancourt from the Association for Progressive Communications explained: “Translation is not just a mere transposition, a literal transposition of specific content from one language to another. When doing the translation we are actually also expressing different views of reality, different perspectives… It’s not only a communication strategy… but for me it’s actually mostly an attempt to respond to the contextual realities of people, if we really want to be inclusive.”


Jennifer Chung detailed Chinese translation complexities, explaining how the team had to collaborate within the Chinese community because “certain terms are translated in different ways” and decisions were needed between traditional and simplified scripts, as well as different regional variations.


Pierre Bonis noted that “Internet governance concepts are better translated by field experts than general UN translators.” Vinicius W. O. Santos from NIC.br described their collaborative review process involving “multiple reviewers and field experts” working with professional translators.


The translation effort involved specific contributors including Lucien Castex (French), Manao, Christina Arida, Zena (Arabic), and Leonid Todorov (Russian). Renata Mielli from CGI.br observed that reading different language versions reveals “meaningful differences” and that “different language versions have distinct impacts and make more sense in specific contexts.”


## Collaborative Editing Process


Akinori Maemura, a member of the High-Level Advisory Committee (HLAC) for NetMundial Plus 10, described the “dynamic online editing process with global HLAC members” as a “remarkable collaborative experience” involving committee members from around the world simultaneously editing documents.


Jorge Cancio from the Swiss government characterized the process as “intensive and emotionally heavy but productive,” highlighting both the challenges and rewards of such collaboration.


## Implementation Focus


Jorge Cancio emphasized implementation responsibility: “It’s really in our hands to put them to work… if it’s just a piece of paper it will get forgotten in some years of time but if we put them to use… they will really gain some reality, they will inform what we do on a multi-stakeholder fashion at the different levels.”


He provided concrete examples, describing how the Swiss IGF is implementing the São Paulo guidelines and how his office uses the guidelines in German and French for internal discussions to ensure “bottom-up, open, inclusive” processes that “take into account asymmetries of power and influence.”


Valeria Betancourt connected the guidelines to the WSIS Plus 20 review, explaining how they “provide practical steps for making inclusion reality” and can increase representation of underrepresented communities. Jennifer Chung emphasized their relevance for capacity building in local communities.


## Ongoing Expansion


Everton Rodrigues from NIC.br emphasized the continuing nature of the project: “We had ongoing efforts in other languages as well, so please feel invited to send us those new translations to contact us also to make those translations available… the future of NetMundial is still ongoing.”


This prompted James Ndolufuye to offer translations into Swahili, Yoruba, and Hausa to “scale the approach across Africa.” The NetMundial Plus 10 website will credit new translations and make them available to the community.


## Conclusion


The session demonstrated strong support for NetMundial’s multistakeholder approach while emphasizing the need for practical implementation. The multilingual publication represents both an achievement in inclusive governance and a tool for ongoing work. Speakers committed to using the guidelines in their respective contexts, from local IGFs to international processes, ensuring the NetMundial principles continue to evolve through practical application.


The discussion highlighted how effective multistakeholder governance requires both inclusive processes and concrete outcomes, with NetMundial serving as a model for addressing complex governance challenges efficiently while maintaining broad stakeholder participation.


Session transcript

Rafael de Almeida Evangelista: Hello, good afternoon, everyone. It is a pleasure to welcome you to this launch session at the Open Stage. My name is Rafael Evangelista. I am a board member of CGI.br, the Brazilian Internet Steering Committee. This session is about the book launch, the NetMundial Plus 10 statement in the six UN languages. So I have here with me, helping me with the book launch, Pierre Bonis from AFNIC. Pierre, can you join us? Okay, Pierre. Thank you. Valeria Betancourt from the Association for Progressive Communications. Valeria, she’s not here yet. Okay. Jennifer Chung from .asia. Hi Jennifer, how are you? And Renata Mielli will join us later after her session ends. NetMundial provided an inclusive platform where relevant actors from all stakeholder groups and regions were able to convene, deliberate and reach consensus on a new set of guidelines and recommendations. These were designed to address the most pressing challenges facing governance processes in our time. The result of an extensive and collaborative process, NetMundial Plus 10 culminated in the document entitled NetMundial Plus 10 Multistakeholder Statement: Strengthening Internet Governance and Digital Policy Processes. This document is now being published within the CGI book series and it is available in the six official languages of the United Nations, in addition to Portuguese. This is the book that I’m showing here. One of the central outcomes of this process was the formulation of the São Paulo Multistakeholder Guidelines. These guidelines present a structured yet adaptable set of steps aimed at enhancing multistakeholder practices across various digital governance decision-making contexts. By doing so, the NetMundial Plus 10 outcomes contribute to fostering a more effective, inclusive and results-oriented model of multistakeholder collaboration at the global level.
The São Paulo Guidelines seek to strengthen governance processes by offering a flexible, inclusive and actionable framework grounded in principles of transparency, participation and adaptability, thereby reinforcing the legitimacy and effectiveness of multistakeholder approaches. In light of the importance of disseminating these outcomes widely, the statement has been translated into multiple languages and compiled into a book available in both digital and print formats. Launching this publication at the Internet Governance Forum is a meaningful milestone, one that allows us to promote and discuss the NetMundial Plus 10 results with a broader global community. This moment is particularly significant given that NetMundial Plus 10 recognizes the IGF as a forum that ought to be further strengthened and as a central hub for Internet governance decisions. Today’s launch also serves as a timely opportunity for reflection and dialogue, marking one year since the NetMundial Plus 10 event. Our session will feature contributions from key stakeholders involved in the process, who will discuss the São Paulo Guidelines, the multistakeholder statement and the challenges encountered in producing a multilingual publication of this nature. We are grateful for your presence here and engagement in this session. Printed copies of the book are available. Please feel free to request one if you have not received one yet. The digital version can also be accessed online at cgi.br. Thank you. I’ll hand over to our speakers. Perhaps Pierre can be the first one. Can you, please?


Pierre Bonis: Thank you. Obrigado. Thank you very much for welcoming us here. So it’s a little bit ironic because I’m going to speak in English. Because while we are presenting this work, that I will talk about a little bit later, in the UN official languages and Portuguese, this open space, which is a very cool one, is the one where you don’t have translation. So I will keep on speaking English. Just to share the fact that, first of all, having been invited by CGI.br and NIC.br is always a pleasure. We at AFNIC, the French registry, see the work that has been done by NIC.br and CGI.br for years as a real example of a multi-stakeholder approach. And an inspiring one, really. I just want to share, of course we are happy to have been able to contribute to translating this document into French. But I want to testify to something that is very important to us: that NetMundial and NetMundial Plus 10 are, I think, the best example of a multi-stakeholder approach that can deliver in a timely manner. And the timely manner is a very important point here. So we can keep talking, and it’s always good to talk, to exchange, to dig more and more into some subjects. But sometimes people look at our model, at the multi-stakeholder model, saying, okay, that’s nice, but it doesn’t deliver anything. And NetMundial and NetMundial Plus 10 came at two very important moments of Internet governance: the first one, we remember it, was after the Snowden thing, and just before the transition of the IANA function and the move of the American government; and NetMundial Plus 10, a few weeks before the adoption of the Global Digital Compact. In urgency, we had to talk and to see if together, at a global level, we had something to say, to add to what we said 10 years before. The decision to bring all the stakeholders together came late, and CGI.br organized an online platform, very demanding, very efficient.
And when you look at these three or four days of meetings in Sao Paulo that reached that point, you see that in four days, you get a consensus that is a solid one, that can last four years, as in other fora, you may talk for 10 years without having anything like that. So that’s why, and I will finish with that, that’s why we say, and that’s not just to be polite, even if we try to be polite, but when we say that your work is inspiring, it’s really inspiring, this way of delivering things has to be applauded, so thank you very much.


Rafael de Almeida Evangelista: Thank you Pierre for such nice words, and please, Jennifer.


Jennifer Chung: Thank you very much. Well, that is a very hard act to follow, Pierre. I feel like we can just applaud and that’s it, we can just celebrate. My name is Jennifer Chung, I work for .Asia, and .Asia is of course the registry operator for the generic top level domain .Asia. We have a mandate to promote internet adoption and development and capacity building in Asia Pacific, so obviously when we’re talking about translation, we’re doing the Asian languages, specifically Chinese. But taking a half step back, and really echoing a lot of things that Pierre has mentioned, NetMundial and NetMundial Plus 10 are really held up as such a shining example of what works, what actually can be done right, with the multi-stakeholder process, with the multi-stakeholder model. And the fact that the entire community comes together and produces something so beautiful is testament to the fact that it actually works. Every time there are naysayers and criticisms about, oh, you know, the multi-stakeholder process is messy, it’s slow, it doesn’t produce results: this is a result, it’s beautiful, it’s a book, six languages, amazing. So taking another step back to look at the multistakeholder guidelines, I think they give us a way to measure, to look at things with a critical eye, to actually make sense of things. When we’re talking about the multistakeholder model, we’re not talking about one model, we’re not talking about a monolith. There are different ways to look at it and measure it, and another important part is how we can improve the multilateral process, and this actually tells us very much how we can do that. In fact, sitting here in the UN IGF: the UN itself is multilateral, but the IGF is multistakeholder, and that is the beauty of it, how we are able to work together in this kind of environment. And this is one of the things that really allows us to look at it with a very critical eye.
I guess I’m going to turn a little bit to the translation part, which is why I’m here. The interesting thing about my mother tongue, Chinese, is that I actually have two, because I speak Cantonese, as I am from Hong Kong, but I also speak Mandarin, which is of course the official Chinese. The script itself is also quite interesting. We have the traditional script, which looks a little more complicated, because when you look at a character, oh, there are a lot more lines, and of course there’s the official script, which is simplified. So when we looked into the translation process, we really had to, even within the Chinese community, do a lot of collaboration, because we found that certain terms are translated in different ways, and to be as inclusive as possible, we had to consult with and coordinate and cooperate with many, many different friends. So it really isn’t just .Asia doing everything, it absolutely isn’t. It’s .Asia, there are friends from the mainland, there are CNNIC friends, there are also friends who understand the different terminologies that are used in the Chinese-speaking community around the world. So I guess I’ll stop here, with a little bit of flavor and context of the translation, and hand it right back to you.


Rafael de Almeida Evangelista: Thank you, Jennifer. We have here Valeria, she could make it. Thank you for joining us, and please, the floor is yours.


Valeria Betancourt: Apologies for being late, it’s just running from one session to another. I really value the opportunity to talk about my experience of assisting with the translation into Spanish, but before I dig into the specific issue, what I want to say, and I have been saying in different panels, is that when we look back at the processes we engage in in relation to shaping Internet governance and the governance of digital technologies in general, we not only remember the issues that were important for us, the people that we met, the collaborations that we established; we really remember how things happened, how the process was conducted. And I think this is at the core of the multi-stakeholder guidelines. These principles and these guidelines really help us to build different ways of running processes when it has to do with the application of the multi-stakeholder approach, not only to the conversations, but actually to the collaborations and to finding solutions. This, I think, helps us to address specific problems: acknowledging that there are different interests on the table, acknowledging that the dynamic of stakeholder engagement is not easy, but actually helping us to move beyond the simple dialogue and conversation, which is essential and important, to specific solutions that address the different and critical challenges we face, in order to crystallize the vision that many of us envisage of having governance that is inclusive, democratic, transparent, and accountable. So all of that is at the center of these guidelines, as is the possibility to honor the principles that were stated 20 years ago in relation to inclusivity in the information society: not only to seat the different stakeholders at the table, but actually to be able to meet their interests and to respond to the realities that people face, acknowledging that the way in which tech is embedded in our lives is really affecting us in many different ways.
So in relation to the translation, the translation is not just a mere transposition, a literal transposition of specific content from one language to another. When doing the translation we are actually also expressing different views of reality, different perspectives, so in that sense the translation in Spanish is not only an exercise and an attempt and an effort to expand the reach of the multi-stakeholder guidelines, but I think it is a very important effort to actually respond to the contextual realities that are needed in order to be able to make progress in democratic governance. So that’s the importance for me of having the multi-stakeholder guidelines, the Sao Paulo multi-stakeholder guidelines in different languages has to do with that. It’s not only a communication strategy, it’s not only the possibility for different people to be able to relate with the principles and the guidelines in their own language, but for me it’s actually mostly an attempt to respond to the contextual realities of people, if we really want to be inclusive, if we really want to be able to make progress in terms of democratic governance. So let me leave it like that for now.


Rafael de Almeida Evangelista: Thank you very much, Valeria. We have time for questions, yes?


Audience: Thank you. Greetings, everybody. The process that took place about 11 years ago, the first NetMundial, for those that can remember, was like a conscience on the whole information ecosystem. So the NetMundial process has become the conscience of the information society vis-à-vis multi-stakeholder engagement. So the process was great. Before I forget, I’m James Ndolufuye from Africa. The process involved is really a case in point, and I’m so happy that this has been translated so that others can benefit from it. The scoping, the engagement, and the inclusivity across the region is exemplary, and we’ve been talking about this a lot in Nigeria, and government people have adopted it; it’s like the reference point, and we are scaling it across Africa. And so we also offer to at least translate it into Swahili, into Yoruba, into Hausa. So we will proceed with that initiative as well. Thank you very much.


Rafael de Almeida Evangelista: Thank you. Any more comments? Everton, please. I should say that this was possible because of the great team that we have at NIC.br, all those persons who were able to do it in a timely manner, and Everton was one of those, and Vinicius was the other, and many other people. So, all the NIC.br team. Thank you, Rafa.


Everton Rodrigues: Very quickly, just as a follow-up to what Jimson was mentioning: the success of NetMundial depends a lot on the future of what we did one year ago. So it’s a living experience. The process didn’t stop on those two days that we gathered in São Paulo. Part of that is also that the process of translation is still ongoing, so we are open to keep receiving translations in other languages. The six UN languages are published here with the book, but we have ongoing efforts in other languages as well, so please feel invited to send us those new translations, and to contact us, so we can also make those translations available. They will be credited on the NetMundial Plus 10 website, so join the effort, because the future of NetMundial is still ongoing. Thank you very much.


Rafael de Almeida Evangelista: Thank you, Everton. We have here with us Jorge Cancio, who was also involved in the process of translation, and we will have a few words from him. Please, Jorge. Anything you want.


Jorge Cancio: Hello, everyone. I’m Jorge Cancio from the Swiss government. I’m very sorry that I was late. We had an open forum just discussing, amongst other things, how we can use NetMundial Plus 10 and the São Paulo multi-stakeholder guidelines to inspire the different processes we have. It’s really in our hands, as it is with the translations. We did one into Italian, and then our colleague, Titi Casa, really went on with the document, and especially the São Paulo multi-stakeholder guidelines, which are very close to my heart, not only because of the product but because of the process we had before São Paulo and in São Paulo last year, which was really amazing, very intense, very heavy also in emotions, but very productive. It’s really in our hands to put them to work. I think that’s the most important part, because if it’s just a piece of paper it will get forgotten in some years’ time, but if we put them to use, be it here in the dynamic coalitions, be it in the best practice forums, in the policy networks, or be it in national and regional IGFs, they will really gain some reality, they will inform what we do in a multi-stakeholder fashion at the different levels. So I really invite you, wherever your responsibility lies, be it at the local, at the national, at the regional or at the global level, to really use them, to put them to work, and to bring back your feedback on what works, what doesn’t work, where we have to improve them. And here in the IGF, which is like the caretaker of the São Paulo multi-stakeholder guidelines thanks to the invitation of NetMundial Plus 10, this is the place to then keep on evolving them. And just a very small commercial on my side in that sense.
We are trying to implement them for the Swiss IGF. We have a role in the Swiss IGF, we are part of a multi-stakeholder community there, and now we are looking into how we use the São Paulo multi-stakeholder guidelines to make sure that the processes that affect the Swiss IGF are really bottom-up, open, inclusive and take into account also the asymmetries of power and influence of the different stakeholders, which is the first guideline from São Paulo. So I’ll leave it at that. Thank you.


Rafael de Almeida Evangelista: Thank you, Jorge. I’ll hand the microphone to Renata Mielli, who was able to join us from another session. Please, Renata.


Renata Mielli: Well, I’m sorry, I was in another session. First of all, thank you everybody for being here for our book launching. And what I’m going to say is that I’m very, very, very happy to see the work we’ve done in NetMundial Plus 10. It was hard work. We joined some marvelous people in our HLAC, the high-level committee that organized NetMundial and helped to delineate all the discussions we made in NetMundial, and some are here with us now and in the IGF. And it’s very, very wonderful, there are no words to say how amazing it is to see, when we are in a workshop, people from all over the world making reference to the NetMundial multistakeholder guidelines. So, what we tried to do when we started this discussion at CGI.br was: well, let’s do another NetMundial. I think we achieved our goal, and I hope these guidelines can impact the future of the governance of the Internet and the governance of the digital world in a way that the multistakeholder approach can be improved and applied everywhere. So thank you very much. I’m very happy with the result of our collective job, because only in the collective can we achieve things that have a real impact on society. Thank you very much.


Rafael de Almeida Evangelista: Thank you, Renata. Any more comments? I see people here who were involved in the process. Anyone want to comment on anything? No? Jennifer, please.


Jennifer Chung: Just a quick comment, actually. I was gonna go straight to Akinori. I wanted to acknowledge that he is also on the HLAC, and I’m sure he’s gonna talk about the translation.


Akinori Maemura: Okay. All right. Okay. Thank you very much. My name is Akinori Maemura, one of the members of the HLAC for NetMundial Plus 10, and I was actually involved in the EMC for NetMundial 2014. So this is my second fabulous experience joining this party. I’d like to share two things. One is that, as was said, the process was great. I was totally astonished by the editing process. Can you imagine that all the HLAC members are in a Zoom call and, from all over the world, editing the same single Google document to move ahead? So a lot of cursors are here and there, editing, and someone is doing some editing a little bit ahead, on the forthcoming sentences. So it was a really dynamic process to edit, and my brain was really sizzling to catch up with the discussion. That’s one thing. And then another thing is that I actually contributed to the Japanese translation of the NetMundial Plus 10 statement. It is not only a contribution to the Japanese community, for a better understanding of what the NetMundial Plus 10 statement and the guidelines are; for me it is very important that the statement is not only a statement, but that it gets more valuable when it is utilized, made use of, in their own processes. So I’d like to have it utilized in some multi-stakeholder process in the Japanese community. If it is utilized like that, then the value of this document becomes more and more tangible and substantial. So those are my two points. Thank you very much.


Rafael de Almeida Evangelista: Thank you. Vinicius, please.


Vinicius W. O. Santos: Thank you. I’m Vinicius. I was part of the secretariat, as they mentioned. I won’t repeat many of the great sentences that you all already brought to us here in this group, but just to add some small comments related to the translation process itself. We could only reach this result here, this book, because we had a lot of help. We needed and we had a lot of help from many people, from these people here but also from other people who are not here. For example, we had Manao, so just recognizing here Manao, Christina Arida, and also Zena, who were responsible for reviewing the translation to Arabic. We had the same for Russian: we had a collaborator as well, Leonid Todorov, who was responsible for reviewing that translation for us. It was really a collaborative process. So we had some translations that were made by people here with us, and also some translations made by professionals hired for that, but all the translations were reviewed by people of the field, of the internet governance community, people who were involved with the NetMundial process and could review the work and help us out with all the details, because there are a lot of details, and you know that. So that’s it, just to recognize these people and all this work, because it’s a very complex and very meticulous work, and we really needed a lot of help to reach this kind of result. Thank you very much. That’s it.


Valeria Betancourt: Thank you. Sure, a very important comment that I want to make, well, I hope it is important. As many of you know, we are going through the Plus 20 review of the World Summit on the Information Society, and I think this instrument that we have here has very practical and pragmatic steps and recommendations on how to make inclusion a reality. And I mention this because the opportunity that the Plus 20 review of the WSIS presents us is precisely to increase the representation of underrepresented views, perspectives, communities, and stakeholders. And I think if we really want to achieve the WSIS vision and to make progress in terms of a digital future with dignity and justice, it cannot happen if we don’t really, in practice, at all levels, national, regional, local, global, increase the representation of perspectives that have been overlooked or that, for different reasons, are not present here as they should be. So I think this is a powerful instrument for us to use in order to make sure that those voices, perspectives, and realities are taken into consideration as part of the review, so we actually can move towards a future that is truly more inclusive. Thank you.


Rafael de Almeida Evangelista: Pierre?


Pierre Bonis: Yeah, thank you. If I can add a few words, because I was so thrilled by the process that everyone talked about that I almost talked about that only. But on the translation process in French: first of all, I want to thank Lucien Castex, because it has been a work to review the translation that we asked of a professional translator. And in that process, I think we see in this document that the concepts of Internet governance are better translated than, for instance, in the official translations that we can find even in the UN system. Because at the WSIS, for instance, the translation of the Geneva Declaration and the Tunis Agenda was made by very competent translators from the UN system, but not from the field. So now if you look at the words, at least in French, there is a new translation of IG concepts that is closer to the field than it was before. So maybe it gives an idea for the next WSIS Plus 20 process: to ask some of the multi-stakeholder community to read the official translations of the UN before they are published. Maybe they will be more understandable later. But anyway, thank you for having given us this opportunity to do it. And once again, thank you to Lucien, who has done that brilliantly in French.


Rafael de Almeida Evangelista: Thank you. Thank you. Jennifer, please.


Jennifer Chung: Thank you so much. Just very briefly, because of what Akinori and Valeria said about going through the WSIS Plus 20 process: this is actually a good time to use and, kind of, enshrine these principles, these guidelines, in the input. And I wanted to take up specifically on translation, because of language. When we live and breathe internet governance and multi-stakeholderism, we live and dream it, we are able to actually translate it into our native languages, and this actually helps us with capacity building in our own communities, for them to understand these concepts so they too can be leveled up to be able to substantively input into the WSIS Plus 20 process. So this is such a valuable undertaking, and thank you again CGI, WPR, Renata and all of those, I’m looking around, I don’t see them anymore, Everton and Vinicius, oh there you are, for actually doing this, for enabling us to have this resource so we can uplift our own communities. Thank you very much. Thank you.


Rafael de Almeida Evangelista: We thank you, Jennifer. People are telling me that I have to say that copies of this book are free, so please pick one up. We have some at our CGI.br booth as well, but I think there are many on the tables and chairs. Does anyone want to add more comments, from the panel or from the audience? We still have five minutes, so I want to ask you a question, if you allow me. Is there anything in the process of translation that made some of the principles in this book clearer or more important to you? I know that translation is always a process of rethinking, of reflecting on the text. So if you have something on your mind. Jorge told us about power imbalances, which is very much addressed in this book. So please.


Valeria Betancourt: I can say something. Obviously, as I mentioned earlier, translation is also permeated by politics: your politics, your understanding of processes, of existence, of issues, of problems, of your own reality. All of that is also translated into the exercise of translating content. So in that sense, I think the fact that we have the NetMundial principles in different languages also increases the level of appropriation. And whatever the instruments we have at the moment are, whether they are frameworks of principles, guidelines, or sets of policies or practices, the possibility to relate to those instruments in your own language increases the possibility of appropriating them in a way that is meaningful. And only by being meaningful can we engage with governance arrangements that respond to the challenges we face in a way that really means something for the different groups and realities. So that is what I want to highlight in terms of translation: it is not exempt from those politics and the political dimension that these processes have, particularly when they have to do with the governance of digital technology. Thank you.


Rafael de Almeida Evangelista: Renata?


Renata Mielli: I want to make a comment, not about the translation process, but about the importance of having the guidelines in different languages. When I discuss the NetMundial plus 10 guidelines with Spanish speakers, I don’t read the guidelines in English or in Portuguese; I look for the guidelines in Spanish, because I can see the difference in the impact that the specific language has. So for us, it’s amazing to see that. When I talk in English, I go to the English version, because it’s different from the Portuguese. And the other day I held a meeting where I talked about NetMundial plus 10 with Brazilian people, and I hadn’t read the Portuguese version until that moment. And I saw differences that make more sense in Portuguese. So for us, it’s also important to have this book in all these languages. It’s very interesting, and an education process for me.


Rafael de Almeida Evangelista: Jorge?


Jorge Cancio: Yeah, perhaps a similar example. We were having a discussion in the management board of my office about a process, a procedure that was rather internal, but that had some elements needing inclusion and openness. And it was super handy for me, because I could refer to the text, to the process steps, both in German and in French, which are normally the languages we work and discuss in. It has a completely different impact for my colleagues who are not used to working in English, so I could show them, point them to the text I was referring to, in German or in French.


Rafael de Almeida Evangelista: Thank you. Thank you. Any more comments? I think our time is over. So please, Renata, do you want to say something?


Renata Mielli: No. I want to say, don’t go. Let’s take a picture with everybody who is here from the HLAC before closing the session.


Rafael de Almeida Evangelista: Okay. So let’s take a picture. But thank you all for attending this presentation, and please pick up a copy of the book. Thank you. Thank you.


R

Rafael de Almeida Evangelista

Speech speed

107 words per minute

Speech length

925 words

Speech time

515 seconds

Introduction of multilingual publication in six UN languages plus Portuguese

Explanation

Rafael introduces the NetMundial Plus 10 book launch, emphasizing that the document has been published in the six official UN languages in addition to Portuguese. This multilingual approach ensures broader global accessibility and understanding of the guidelines.


Evidence

Shows the physical book during presentation and mentions it’s available both in digital and print formats at cgi.br


Major discussion point

Multilingual accessibility of governance guidelines


Topics

Sociocultural


Announcement of free book distribution and digital availability

Explanation

Rafael announces that printed copies of the book are available for free distribution to attendees and that digital versions can be accessed online. This ensures maximum accessibility and dissemination of the guidelines.


Evidence

States ‘Printed copies of the book are available. Please feel free to request one if you have not received one yet. The digital version can also be accessed online at cgi.br’


Major discussion point

Open access to governance resources


Topics

Development


Recognition of NIC.br team’s contribution to timely publication

Explanation

Rafael acknowledges the great team at NIC.br, specifically mentioning Everton and Vinicius among others, for their ability to deliver the publication in a timely manner. This recognition highlights the collaborative effort behind the successful publication.


Evidence

Specifically names Everton and Vinicius as key contributors and mentions ‘all the NIC.br team’


Major discussion point

Collaborative team effort in governance initiatives


Topics

Development


P

Pierre Bonis

Speech speed

110 words per minute

Speech length

674 words

Speech time

364 seconds

NetMundial demonstrates multi-stakeholder approach can deliver results in timely manner

Explanation

Pierre argues that NetMundial and NetMundial Plus 10 are the best examples of multi-stakeholder approaches that can deliver concrete results quickly. He contrasts this with other forums that may take years without producing similar outcomes.


Evidence

Cites that NetMundial achieved consensus in 3-4 days that can last for years, while other forums may talk for 10 years without similar results. References the timing around the Snowden revelations and the IANA transition for the first NetMundial, and the Global Digital Compact for Plus 10


Major discussion point

Effectiveness of multi-stakeholder governance models


Topics

Legal and regulatory


Agreed with

– Jennifer Chung
– Audience
– Renata Mielli

Agreed on

NetMundial demonstrates effectiveness of multi-stakeholder governance


AFNIC contributed French translation as part of collaborative effort

Explanation

Pierre explains that AFNIC, the French registry, was happy to contribute to the French translation of the document. He emphasizes this as part of a broader collaborative effort in the multi-stakeholder community.


Evidence

States ‘we are happy to have been able to contribute to translate in French this document’ and thanks Lucien Castex for reviewing the professional translation


Major discussion point

International collaboration in translation efforts


Topics

Sociocultural


Internet governance concepts are better translated by field experts than general UN translators

Explanation

Pierre argues that having translations reviewed by experts from the internet governance field results in better, more accurate translations than those done solely by UN system translators. This leads to concepts that are closer to the field and more understandable.


Evidence

Compares the NetMundial translations favorably to official UN translations from WSIS, noting that UN translators were competent but ‘not from the field’


Major discussion point

Quality and accuracy in technical translation


Topics

Sociocultural


Agreed with

– Jennifer Chung
– Vinicius W. O. Santos

Agreed on

Translation requires collaborative effort from field experts


Disagreed with

– Vinicius W. O. Santos

Disagreed on

Translation approach – professional vs community-based


J

Jennifer Chung

Speech speed

169 words per minute

Speech length

767 words

Speech time

271 seconds

NetMundial serves as shining example that multi-stakeholder process works and produces tangible results

Explanation

Jennifer argues that NetMundial Plus 10 serves as a powerful counter-argument to critics who claim multi-stakeholder processes are messy, slow, and don’t produce results. The beautiful book in six languages is concrete proof that the process works.


Evidence

Points to the physical book as tangible evidence, stating ‘this is a result, it’s beautiful, it’s a book, six languages, amazing’


Major discussion point

Tangible outcomes of multi-stakeholder governance


Topics

Legal and regulatory


Agreed with

– Pierre Bonis
– Audience
– Renata Mielli

Agreed on

NetMundial demonstrates effectiveness of multi-stakeholder governance


Multi-stakeholder guidelines help measure and improve multilateral processes

Explanation

Jennifer explains that the guidelines provide a framework for critically examining and measuring different multi-stakeholder models, recognizing that there isn’t just one monolithic approach. They offer ways to improve multilateral processes.


Evidence

References the setting of UN IGF as an example where multilateral (UN) and multi-stakeholder (IGF) approaches work together


Major discussion point

Framework for evaluating governance processes


Topics

Legal and regulatory


Chinese translation required coordination across different Chinese-speaking communities and scripts

Explanation

Jennifer describes the complexity of Chinese translation, involving coordination between different Chinese-speaking communities, scripts (traditional vs simplified), and regional variations. This required extensive collaboration beyond just .Asia’s efforts.


Evidence

Explains her bilingual background (Cantonese and Mandarin), different scripts (traditional and simplified), and mentions collaboration with friends from mainland China, Sinic friends, and others understanding different terminologies


Major discussion point

Complexity of multilingual translation efforts


Topics

Sociocultural


Agreed with

– Pierre Bonis
– Vinicius W. O. Santos

Agreed on

Translation requires collaborative effort from field experts


Multilingual availability enables capacity building in local communities

Explanation

Jennifer argues that having resources in native languages helps with capacity building in local communities, enabling them to understand concepts and participate more substantively in processes like WSIS Plus 20. This levels up community participation.


Evidence

Specifically mentions how native language resources help ‘level up’ communities to ‘substantively input into the WSIS plus 20 process’


Major discussion point

Language accessibility for community empowerment


Topics

Development


Agreed with

– Valeria Betancourt
– Jorge Cancio
– Renata Mielli

Agreed on

Multilingual resources enable meaningful community engagement


V

Valeria Betancourt

Speech speed

131 words per minute

Speech length

932 words

Speech time

424 seconds

Translation expresses different views of reality and responds to contextual realities

Explanation

Valeria argues that translation is not merely literal transposition but expresses different perspectives and worldviews. Having guidelines in multiple languages responds to contextual realities needed for democratic governance progress.


Evidence

States ‘translation is not just a mere transposition, a literal transposition of specific content from one language to another. When doing the translation we are actually also expressing different views of reality’


Major discussion point

Cultural and contextual dimensions of translation


Topics

Sociocultural


Having guidelines in native languages increases appropriation and meaningful engagement

Explanation

Valeria contends that translation is permeated by politics and personal understanding, and having instruments in one’s native language increases the possibility of meaningful appropriation. This is essential for engaging with governance arrangements that respond to real challenges.


Evidence

Explains that translation includes ‘your politics, your understanding of processes of existence, of issues, of problems, of your own reality’ and that native language access ‘increases the possibility or appropriate them in a way that is meaningful’


Major discussion point

Political and personal dimensions of language access


Topics

Sociocultural


Agreed with

– Jennifer Chung
– Jorge Cancio
– Renata Mielli

Agreed on

Multilingual resources enable meaningful community engagement


Guidelines provide practical steps for making inclusion reality in WSIS Plus 20 review

Explanation

Valeria sees the guidelines as a powerful practical instrument for the WSIS Plus 20 review process, providing concrete steps to increase representation of underrepresented views and communities. This can help achieve the WSIS vision of digital future with dignity and justice.


Evidence

References the ongoing WSIS Plus 20 review and emphasizes the need to ‘increase the representation of underrepresented views, perspectives, communities, and stakeholders’


Major discussion point

Practical application in global governance processes


Topics

Legal and regulatory


Agreed with

– Jorge Cancio
– Akinori Maemura

Agreed on

Guidelines must be actively implemented to remain valuable


Guidelines can be used to increase representation of underrepresented communities

Explanation

Valeria argues that the guidelines offer practical mechanisms to ensure that voices, perspectives, and realities that have been overlooked are taken into consideration in governance processes. This is essential for moving toward a truly inclusive future.


Evidence

Emphasizes the need to include ‘perspectives that have been overlooked or because of different reasons are not present here as they should be’


Major discussion point

Inclusive representation in governance


Topics

Human rights


A

Audience

Speech speed

116 words per minute

Speech length

160 words

Speech time

82 seconds

NetMundial process has become the conscience of information society regarding multi-stakeholder engagement

Explanation

The audience member (James Ndolufuye from Africa) argues that the NetMundial process has become the moral compass or conscience of the information society when it comes to multi-stakeholder engagement. The process is exemplary and serves as a reference point.


Evidence

States it’s ‘like the reference point’ and mentions they are ‘scaling it across Africa’ and that ‘government people have a dozen of it’ in Nigeria


Major discussion point

NetMundial as a governance reference model


Topics

Legal and regulatory


Agreed with

– Pierre Bonis
– Jennifer Chung
– Renata Mielli

Agreed on

NetMundial demonstrates effectiveness of multi-stakeholder governance


E

Everton Rodrigues

Speech speed

129 words per minute

Speech length

137 words

Speech time

63 seconds

Translation efforts are ongoing and open to additional languages beyond UN official languages

Explanation

Everton explains that the translation process didn’t stop with the six UN languages published in the book, but continues to be open to translations in other languages. The community is invited to contribute additional translations that will be credited on the NetMundial Plus 10 website.


Evidence

States ‘we are open to keep receiving those translations in other languages’ and mentions they will be ‘credited on NetMundial Plus 10 website’


Major discussion point

Ongoing collaborative translation efforts


Topics

Sociocultural


J

Jorge Cancio

Speech speed

136 words per minute

Speech length

519 words

Speech time

228 seconds

Guidelines must be put to work in various contexts to gain reality and avoid being forgotten

Explanation

Jorge emphasizes that the guidelines will only remain relevant if they are actively used in practice across different levels and contexts – from local to global. Without practical application, they risk becoming forgotten documents.


Evidence

Mentions applications in ‘dynamic coalitions’, ‘best practice forums’, ‘policy networks’, and ‘national and regional IGFs’, and warns they will ‘get forgotten in some years of time’ if not used


Major discussion point

Practical implementation of governance guidelines


Topics

Legal and regulatory


Agreed with

– Akinori Maemura
– Valeria Betancourt

Agreed on

Guidelines must be actively implemented to remain valuable


Swiss IGF is implementing Sao Paulo guidelines to ensure bottom-up, inclusive processes

Explanation

Jorge provides a concrete example of implementation by describing how they are using the Sao Paulo guidelines in the Swiss IGF to ensure processes are bottom-up, open, and inclusive, particularly addressing power and influence asymmetries among stakeholders.


Evidence

Specifically mentions they are ‘looking into how we use the Sao Paulo multi-stakeholder guidelines’ and references ‘the asymmetries of power and influence of the different stakeholders which is the first guideline from Sao Paulo’


Major discussion point

National-level implementation of global guidelines


Topics

Legal and regulatory


Native language resources help colleagues who don’t work in English understand concepts better

Explanation

Jorge provides a practical example of how having guidelines in German and French helped him explain concepts to colleagues in his office who don’t typically work in English. This demonstrates the concrete impact of multilingual resources in professional settings.


Evidence

Describes a specific meeting where he could ‘show them, indicate them the text I was referring to in German or in French’ which had ‘a completely different impact for my colleagues who are not used to work in English’


Major discussion point

Workplace application of multilingual resources


Topics

Sociocultural


Agreed with

– Valeria Betancourt
– Jennifer Chung
– Renata Mielli

Agreed on

Multilingual resources enable meaningful community engagement


R

Renata Mielli

Speech speed

119 words per minute

Speech length

425 words

Speech time

213 seconds

Collective work is essential for achieving real societal impact

Explanation

Renata emphasizes that meaningful impact on society can only be achieved through collective effort. She expresses satisfaction that the NetMundial guidelines are being referenced globally and hopes they will improve multi-stakeholder approaches in internet and digital governance.


Evidence

States ‘only in the collective we can achieve things that can have a real impact in the society’ and mentions seeing people ‘in all the world making reference to the net mundial stakeholder guidelines’


Major discussion point

Collective action for governance impact


Topics

Legal and regulatory


Agreed with

– Pierre Bonis
– Jennifer Chung
– Audience

Agreed on

NetMundial demonstrates effectiveness of multi-stakeholder governance


Different language versions have distinct impacts and make more sense in specific contexts

Explanation

Renata explains that she deliberately chooses different language versions of the guidelines depending on her audience, as each version has a different impact and makes more sense in specific linguistic contexts. This demonstrates the value of having multiple language versions.


Evidence

Provides specific examples: uses Spanish version when talking to Spanish speakers, English version for English discussions, and discovered meaningful differences when reading the Portuguese version for Brazilian audiences


Major discussion point

Contextual effectiveness of multilingual resources


Topics

Sociocultural


Agreed with

– Valeria Betancourt
– Jennifer Chung
– Jorge Cancio

Agreed on

Multilingual resources enable meaningful community engagement


A

Akinori Maemura

Speech speed

150 words per minute

Speech length

337 words

Speech time

134 seconds

Dynamic online editing process with global HLAC members was remarkable collaborative experience

Explanation

Akinori describes the intensive collaborative editing process where HLAC members from around the world simultaneously edited the same Google document during Zoom calls. This dynamic process was both challenging and remarkable, with multiple people editing different parts simultaneously.


Evidence

Describes the specific process: all HLAC members were in a Zoom call, editing the same single Google document from all over the world, with ‘a lot of cursors here and there’ editing, making his ‘brain really sizzling to catch up the discussion’


Major discussion point

Innovative collaborative editing methodology


Topics

Legal and regulatory


Japanese translation contributes to better understanding in Japanese community

Explanation

Akinori explains his contribution to the Japanese translation and emphasizes that the real value comes when the guidelines are utilized in actual Japanese multi-stakeholder processes. The translation becomes more valuable when it’s actively used rather than just available.


Evidence

States he ‘contributed for the Japanese translation’ and emphasizes ‘if it is utilized like that, then the value of this document is more and more tangible and substantial’


Major discussion point

Community-specific translation value through usage


Topics

Sociocultural


Agreed with

– Jorge Cancio
– Valeria Betancourt

Agreed on

Guidelines must be actively implemented to remain valuable


V

Vinicius W. O. Santos

Speech speed

162 words per minute

Speech length

272 words

Speech time

100 seconds

Translation process was highly collaborative involving multiple reviewers and field experts

Explanation

Vinicius explains that the successful translation outcome was only possible due to extensive collaboration involving many people, including specific reviewers for different languages and both professional translators and community volunteers. All translations were reviewed by people from the internet governance field.


Evidence

Names specific contributors, such as ‘Manao, Christina Arida, also Zena’, who were responsible for reviewing the Arabic translation, and ‘Leonid Todorov’, who was responsible for reviewing the Russian translation


Major discussion point

Collaborative quality assurance in translation


Topics

Sociocultural


Agreed with

– Pierre Bonis
– Jennifer Chung

Agreed on

Translation requires collaborative effort from field experts


Disagreed with

– Pierre Bonis

Disagreed on

Translation approach – professional vs community-based


Agreements

Agreement points

NetMundial demonstrates effectiveness of multi-stakeholder governance

Speakers

– Pierre Bonis
– Jennifer Chung
– Audience
– Renata Mielli

Arguments

NetMundial demonstrates multi-stakeholder approach can deliver results in timely manner


NetMundial serves as shining example that multi-stakeholder process works and produces tangible results


NetMundial process has become the conscience of information society regarding multi-stakeholder engagement


Collective work is essential for achieving real societal impact


Summary

All speakers agree that NetMundial represents a successful model of multi-stakeholder governance that delivers concrete results quickly and serves as an inspiring example for the global community


Topics

Legal and regulatory


Translation requires collaborative effort from field experts

Speakers

– Pierre Bonis
– Jennifer Chung
– Vinicius W. O. Santos

Arguments

Internet governance concepts are better translated by field experts than general UN translators


Chinese translation required coordination across different Chinese-speaking communities and scripts


Translation process was highly collaborative involving multiple reviewers and field experts


Summary

Speakers unanimously agree that high-quality translation of technical governance documents requires collaboration among experts from the internet governance field rather than relying solely on professional translators


Topics

Sociocultural


Multilingual resources enable meaningful community engagement

Speakers

– Valeria Betancourt
– Jennifer Chung
– Jorge Cancio
– Renata Mielli

Arguments

Having guidelines in native languages increases appropriation and meaningful engagement


Multilingual availability enables capacity building in local communities


Native language resources help colleagues who don’t work in English understand concepts better


Different language versions have distinct impacts and make more sense in specific contexts


Summary

All speakers agree that providing resources in native languages significantly enhances understanding, appropriation, and meaningful participation in governance processes


Topics

Sociocultural


Guidelines must be actively implemented to remain valuable

Speakers

– Jorge Cancio
– Akinori Maemura
– Valeria Betancourt

Arguments

Guidelines must be put to work in various contexts to gain reality and avoid being forgotten


Japanese translation contributes to better understanding in Japanese community


Guidelines provide practical steps for making inclusion reality in WSIS Plus 20 review


Summary

Speakers agree that the true value of the guidelines lies in their practical implementation across various governance contexts rather than their mere existence as documents


Topics

Legal and regulatory


Similar viewpoints

Both speakers emphasize that translation is not merely technical but involves cultural and contextual adaptation that makes content more meaningful and impactful for specific audiences

Speakers

– Valeria Betancourt
– Renata Mielli

Arguments

Translation expresses different views of reality and responds to contextual realities


Different language versions have distinct impacts and make more sense in specific contexts


Topics

Sociocultural


Both speakers view NetMundial as definitive proof that multi-stakeholder processes can be effective and produce concrete outcomes, countering critics who claim such processes are ineffective

Speakers

– Pierre Bonis
– Jennifer Chung

Arguments

NetMundial demonstrates multi-stakeholder approach can deliver results in timely manner


NetMundial serves as shining example that multi-stakeholder process works and produces tangible results


Topics

Legal and regulatory


Both speakers see multilingual resources as tools for empowerment and inclusion, enabling broader participation in governance processes

Speakers

– Jennifer Chung
– Valeria Betancourt

Arguments

Multilingual availability enables capacity building in local communities


Guidelines can be used to increase representation of underrepresented communities


Topics

Development | Human rights


Unexpected consensus

Technical complexity of collaborative online editing

Speakers

– Akinori Maemura

Arguments

Dynamic online editing process with global HLAC members was remarkable collaborative experience


Explanation

While not directly contradicted by others, the detailed appreciation for the technical and logistical complexity of simultaneous global editing represents an unexpected focus on process innovation that wasn’t emphasized by other speakers


Topics

Legal and regulatory


Ongoing nature of translation efforts beyond official publication

Speakers

– Everton Rodrigues

Arguments

Translation efforts are ongoing and open to additional languages beyond UN official languages


Explanation

The commitment to continue accepting translations beyond the six UN languages represents an unexpected expansion of the project’s scope that goes beyond typical publication cycles


Topics

Sociocultural


Overall assessment

Summary

The discussion shows remarkable consensus across all speakers on the value of NetMundial as a governance model, the importance of collaborative translation by field experts, the necessity of multilingual resources for inclusive participation, and the need for practical implementation of guidelines. There were no significant disagreements or conflicting viewpoints expressed.


Consensus level

Very high consensus with strong mutual reinforcement of key themes. This unanimous support suggests the NetMundial Plus 10 process has successfully built broad stakeholder buy-in and demonstrates the maturity of the multi-stakeholder governance community in recognizing effective practices. The consensus has significant implications for legitimizing and scaling multi-stakeholder approaches globally.


Differences

Different viewpoints

Translation approach – professional vs community-based

Speakers

– Pierre Bonis
– Vinicius W. O. Santos

Arguments

Internet governance concepts are better translated by field experts than general UN translators


Translation process was highly collaborative involving multiple reviewers and field experts


Summary

Pierre emphasizes the superiority of field expert translations over professional UN translators, while Vinicius describes a hybrid approach using both professional translators and community reviewers. They differ on the optimal translation methodology.


Topics

Sociocultural


Unexpected differences

None identified
Overall assessment

Summary

The discussion showed remarkably high consensus among speakers with only minor methodological differences regarding translation approaches


Disagreement level

Very low disagreement level. The speakers demonstrated strong alignment on core principles of multi-stakeholder governance, the value of multilingual accessibility, and the importance of practical implementation. The minimal disagreements were constructive and focused on methodology rather than fundamental principles, suggesting a mature and collaborative community working toward shared goals.


Partial agreements



Takeaways

Key takeaways

NetMundial Plus 10 successfully demonstrates that multi-stakeholder processes can deliver concrete, timely results and produce tangible outcomes


The Sao Paulo Multi-stakeholder Guidelines provide a practical, actionable framework for improving governance processes with principles of transparency, participation, and adaptability


Translation into multiple languages is not merely linguistic conversion but enables cultural appropriation and contextual understanding of governance concepts


Collaborative translation processes involving field experts produce better results than standard institutional translations


The guidelines serve as a living document that gains value through practical implementation rather than remaining theoretical


Multi-language availability significantly increases community capacity building and enables broader participation in governance processes


The NetMundial process represents a successful model for inclusive, bottom-up decision-making that can be replicated at various levels


Resolutions and action items

Continue accepting and publishing translations in additional languages beyond the six UN languages


Implement the Sao Paulo Guidelines in various contexts including national IGFs (specifically Swiss IGF mentioned)


Use the guidelines as input for the WSIS Plus 20 review process to increase representation of underrepresented communities


Apply the guidelines in dynamic coalitions, best practice forums, and policy networks


Provide feedback on implementation experiences to evolve and improve the guidelines


Utilize the IGF as the platform to continue developing and refining the multi-stakeholder guidelines


Unresolved issues

How to effectively scale the NetMundial model to other regions and contexts beyond the examples mentioned


Specific mechanisms for collecting and incorporating feedback from implementation experiences


How to ensure sustained engagement and prevent the guidelines from being forgotten over time


Methods for measuring the effectiveness of the guidelines in addressing power imbalances and asymmetries


Coordination mechanisms for ongoing translation efforts in additional languages


Suggested compromises

None identified


Thought provoking comments

Sometimes people look at our model, at the multi-stakeholder model, saying, okay, that’s nice, but it doesn’t deliver anything. And NetMundial and NetMundial Plus10… you see that in four days, you get a consensus that is a solid one, that can last four years, as in other fora, you may talk for 10 years without having anything like that.

Speaker

Pierre Bonis


Reason

This comment directly addresses a fundamental criticism of multi-stakeholder governance – that it’s inefficient and doesn’t produce concrete results. By contrasting NetMundial’s 4-day consensus with other forums that ‘talk for 10 years without having anything,’ Pierre provides a powerful counter-narrative that reframes the discussion from theoretical benefits to practical effectiveness.


Impact

This comment established a defensive yet confident tone for the entire discussion, positioning NetMundial as proof-of-concept for multi-stakeholder effectiveness. It influenced subsequent speakers to emphasize concrete outcomes and practical applications rather than just theoretical principles.


Translation is not just a mere transposition, a literal transposition of specific content from one language to another. When doing the translation we are actually also expressing different views of reality, different perspectives… It’s not only a communication strategy… but for me it’s actually mostly an attempt to respond to the contextual realities of people, if we really want to be inclusive.

Speaker

Valeria Betancourt


Reason

This insight transforms the discussion from viewing translation as a technical process to understanding it as a political and cultural act. Valeria introduces the concept that language carries worldviews and that true inclusivity requires acknowledging these different realities, not just linguistic accessibility.


Impact

This comment elevated the entire conversation about translation, leading other speakers to reflect more deeply on their translation experiences. It prompted Renata to later observe how different language versions had different impacts on audiences, and influenced the discussion toward viewing translation as a tool for democratic governance rather than just communication.


It’s really in our hands to put them to work… if it’s just a piece of paper it will get forgotten in some years of time but if we put them to use… they will really gain some reality, they will inform what we do on a multi-stakeholder fashion at the different levels.

Speaker

Jorge Cancio


Reason

This comment shifts the focus from celebrating the achievement to acknowledging the responsibility for implementation. Jorge introduces urgency and agency, warning against the common fate of policy documents that remain unused, and emphasizes the need for active application across different governance levels.


Impact

This comment created a turning point in the discussion, moving from retrospective celebration to forward-looking action. It prompted speakers to discuss practical applications and inspired concrete examples of implementation, such as Jorge’s own work with the Swiss IGF and references to using the guidelines in WSIS+20 processes.


The interesting part about my mother tongue, Chinese, is I actually have two because I speak Cantonese… The script itself is also quite interesting. We have the traditional script… So when we looked into the translation process, we really had to, even within the Chinese community, do a lot of collaboration because we find that certain terms are translated in different ways.

Speaker

Jennifer Chung


Reason

This comment reveals the complexity hidden within seemingly simple categories like ‘Chinese translation.’ Jennifer exposes how even within a single language community, there are multiple variants, scripts, and terminologies that require extensive collaboration and negotiation, demonstrating that inclusivity requires attention to intra-community diversity.


Impact

This comment deepened the discussion about translation complexity and reinforced Valeria’s earlier point about translation as a collaborative, political process. It led to greater appreciation for the collaborative nature of the entire translation effort and influenced later comments about the importance of community involvement in translation work.


We had ongoing efforts in other languages as well, so please feel invited to send us those new translations to contact us also to make those translations available… the future of NetMundial is still ongoing.

Speaker

Everton Rodrigues


Reason

This comment reframes NetMundial from a completed project to an ongoing, living process. By opening the door for additional translations and emphasizing continuity, Everton transforms the book launch from an endpoint into a milestone in a continuing journey, underscoring the dynamic nature of the initiative.


Impact

This comment energized the discussion and led to immediate offers of additional translations (James Ndolufuye offering Swahili, Yoruba, and Hausa). It shifted the conversation from retrospective to prospective, encouraging active participation and expanding the scope of the initiative beyond the current publication.


Overall assessment

These key comments fundamentally shaped the discussion by transforming it from a simple book launch celebration into a sophisticated dialogue about governance effectiveness, cultural politics of translation, and ongoing responsibility for implementation. Pierre’s opening defense of multi-stakeholder efficiency set a confident tone that influenced all subsequent speakers to emphasize concrete outcomes. Valeria’s insight about translation as cultural-political work elevated the entire conversation about language and inclusivity, leading to richer reflections from other translators. Jorge’s call for active implementation created a crucial pivot from celebration to action, while Jennifer’s detailed account of Chinese translation complexity and Everton’s invitation for ongoing contributions transformed the event from a conclusion to a beginning. Together, these comments created a dynamic flow that moved the discussion through multiple layers – from defending multi-stakeholder governance, to understanding translation as political work, to accepting responsibility for future implementation – ultimately positioning the NetMundial guidelines as both a proven model and a living tool for democratic governance.


Follow-up questions

How can the NetMundial Plus 10 guidelines be effectively implemented and utilized in various multi-stakeholder processes at local, national, regional, and global levels?

Speaker

Jorge Cancio


Explanation

Jorge emphasized that the guidelines need to be put to work in practice rather than remaining as documents, and invited feedback on what works, what doesn’t work, and where improvements are needed when implementing them in real processes


How can the Sao Paulo multi-stakeholder guidelines be applied to improve the Swiss IGF processes?

Speaker

Jorge Cancio


Explanation

Jorge mentioned they are actively trying to implement the guidelines for the Swiss IGF to ensure processes are bottom-up, open, inclusive, and take into account asymmetries of power and influence


How can the NetMundial Plus 10 guidelines be used as input for the WSIS Plus 20 review process?

Speaker

Valeria Betancourt and Jennifer Chung


Explanation

Both speakers highlighted the opportunity to use these practical guidelines to increase representation of underrepresented communities and ensure more inclusive participation in the WSIS Plus 20 review


How can translation into additional local languages (Swahili, Yoruba, Hausa) be coordinated and implemented?

Speaker

James Ndolufuye


Explanation

James offered to translate the guidelines into African languages to scale the approach across Africa, indicating a need for coordination of additional translation efforts


How can the multi-stakeholder community contribute to improving official UN translations of internet governance concepts?

Speaker

Pierre Bonis


Explanation

Pierre suggested that the community could review official UN translations before publication to make them more understandable and closer to field terminology, particularly for the WSIS Plus 20 process


How can the guidelines be utilized in Japanese multi-stakeholder processes to make their value more tangible and substantial?

Speaker

Akinori Maemura


Explanation

Akinori emphasized that the document becomes more valuable when utilized in actual processes and expressed interest in applying it to Japanese community multi-stakeholder processes


What feedback and lessons learned can be gathered from ongoing implementations of the guidelines in various contexts?

Speaker

Jorge Cancio


Explanation

Jorge called for bringing back feedback on implementation experiences to continue evolving and improving the guidelines based on real-world application


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Open Forum #27 Make Your AI Greener a Workshop on Sustainable AI Solutions

Session at a glance

Summary

This panel discussion focused on the intersection of artificial intelligence and environmental sustainability, exploring how AI can both contribute to and help solve climate challenges. The moderator introduced the central paradox that while AI offers opportunities to address environmental risks, it simultaneously contributes to the problem through high energy and water consumption.


Mario Nobile from Italy’s digital agency outlined his country’s four-pillar AI strategy emphasizing education, research, public administration, and enterprise applications. He highlighted efforts to transition from energy-intensive large models to smaller, vertical domain-specific models for sectors like manufacturing, health, and tourism. Marco Zennaro presented TinyML technology, which runs machine learning on extremely small, low-power devices costing under $10, enabling AI applications in remote areas without internet connectivity. He shared examples from a global network of 60 universities developing applications like disease detection in livestock and anemia screening in remote villages.


Adham Abouzied from Boston Consulting Group emphasized the value of open-source solutions, citing a Harvard study showing that recreating existing open-source intellectual property would cost $4 billion, but licensing it separately would cost $8 trillion. He advocated for system-level cooperation and data sharing across value chains to optimize AI’s impact on sectors like energy. Ioanna Ntinou discussed her work on the Raido project, demonstrating how knowledge distillation techniques reduced model parameters by 60% while maintaining accuracy in energy forecasting applications.


Mark Gachara from Mozilla Foundation highlighted the importance of measuring environmental impact, referencing projects like Code Carbon that help developers optimize their code’s energy consumption. The panelists agreed that sustainable AI requires active incentivization rather than emerging by default, emphasizing the need for transparency in energy reporting, open-source collaboration, and policies that promote smaller, task-specific models over large general-purpose ones. The discussion concluded that achieving sustainable AI requires coordinated efforts across education, policy, procurement, and international cooperation to balance AI’s benefits with its environmental costs.


Keypoints

## Major Discussion Points:


– **AI’s dual role in climate challenges**: The discussion explored how AI presents both opportunities to address environmental risks (through optimization and efficiency) and contributes to the problem through high energy and water consumption, creating a complex “foster opportunity while mitigating risk” scenario.


– **Small-scale and efficient AI solutions**: Multiple panelists emphasized moving away from large, energy-intensive models toward smaller, task-specific AI applications, including TinyML devices that cost under $10, consume minimal power, and can operate without internet connectivity in remote areas.


– **Open source collaboration and data sharing**: The conversation highlighted how open source approaches can significantly reduce costs and energy consumption by avoiding duplication of effort, citing a Harvard study that estimated recreating existing open-source intellectual property would cost $4 billion, while licensing it separately would cost $8 trillion.


– **Policy frameworks and governance models**: Panelists discussed the need for comprehensive governance including procurement policies, regulatory frameworks, transparency requirements, and incentive structures (both “carrots and sticks”) to promote sustainable AI development and deployment.


– **Transparency and measurement imperatives**: A key theme was the critical need for transparency in AI energy consumption, with calls for standardized reporting of energy use at the prompt level and better assessment tools to enable informed decision-making by developers, policymakers, and users.


## Overall Purpose:


The discussion aimed to explore practical solutions for balancing AI’s potential benefits in addressing climate change with its environmental costs, focusing on how different stakeholders (governments, researchers, civil society, and industry) can collaborate to develop and implement more sustainable AI technologies and governance frameworks.


## Overall Tone:


The discussion maintained a consistently optimistic and solution-oriented tone throughout. While acknowledging the serious challenges posed by AI’s environmental impact, panelists focused on concrete examples of successful implementations, practical policy recommendations, and collaborative approaches. The moderator set an upbeat tone from the beginning by emphasizing hope and “walking the talk” rather than dwelling on problems, and this constructive atmosphere persisted as panelists shared specific use cases, technical solutions, and policy frameworks that are already showing positive results.


Speakers

– **Mario Nobile** – Director General of the Agency for Digital Italy (AGID)


– **Leona Verdadero** – UNESCO colleague, in charge of the report “Smarter, Smaller, Stronger, Resource Efficient Generative AI in the Future of Digital Transformation”


– **Marco Zennaro** – Edge AI expert at the Abdus Salam International Centre for Theoretical Physics; also works for UNESCO and works extensively on TinyML and energy-efficient AI applications


– **Audience** – Jan Lublinski from DW Academy in Media Development under Deutsche Welle, former science journalist


– **Adham Abouzied** – Managing Director and Partner at Boston Consulting Group, works at the intersection of AI, climate resilience and digital innovation with focus on open source AI solutions


– **Moderator** – Panel moderator (role/title not specified)


– **Ioanna Ntinou** – Postdoctoral researcher in computer vision and machine learning at Queen Mary University of London, works with Raido project focusing on reliable and optimized AI


– **Mark Gachara** – Senior advisor, Global Public Policy Engagements at the Mozilla Foundation


Additional speakers:


None identified beyond the provided speakers’ list.


Full session report

# AI and Environmental Sustainability Panel Discussion Report


## Introduction


This panel discussion examined the intersection between artificial intelligence and environmental sustainability, addressing what the moderator described as a central paradox: while AI offers opportunities to tackle climate challenges, it simultaneously contributes to environmental problems through substantial energy and water consumption. The moderator emphasized the need to move beyond theoretical discussions toward practical solutions, setting a solution-oriented tone for the conversation.


The panel brought together perspectives from government policy, academia, civil society, and industry to explore how different stakeholders can collaborate on more sustainable AI technologies and governance frameworks.


## National Strategy and Policy Frameworks


### Italy’s Four-Pillar Approach


Mario Nobile, Director General of Italy’s Agency for Digital Italy (AGID), outlined Italy’s comprehensive AI governance strategy built on four pillars: education, scientific research, public administration, and enterprise applications. He emphasized that the contemporary debate has evolved, stating: “Now the debate is not humans versus machines. Now the debate is about who understands and uses managed AI versus who don’t.”


Nobile highlighted Italy’s substantial financial commitment, with 69 billion euros allocated for ecological transition and 13 billion euros for business digitalization from the National Recovery Plan. He noted the challenge of implementing guidelines across Italy’s 23,000 public administrations and described plans for tax credit frameworks to incentivize small and medium enterprises to adopt AI technologies.


A key element of Italy’s strategy involves transitioning from large, energy-intensive models to what Nobile called “vertical and agile foundation models” designed for specific sectors including manufacturing, health, transportation, and tourism. Italy’s Open Innovation Framework enables new procurement methods beyond conventional tenders for public administration, addressing regulatory challenges around data governance and cloud usage.


## Technical Solutions and Innovations


### TinyML: Small-Scale AI Applications


Marco Zennaro from the Abdus Salam International Centre for Theoretical Physics presented TinyML (Tiny Machine Learning) as an approach to sustainable AI. This technology enables machine learning on small, low-power devices costing under $10, consuming minimal energy, and operating without internet connectivity.


Zennaro described a global network of over 60 universities across 32 countries developing practical TinyML applications, including:


– Disease detection in cows in Zimbabwe


– Bee population monitoring in Kenya


– Anemia screening in remote villages in Peru


– Turtle behavior analysis for conservation in Argentina


He posed a fundamental question: “Do we always need these super wide models that can answer every question we have? Or is it better to focus on models that solve specific issues which are useful for SDGs or for humanity in general?”


The TinyML approach emphasizes regional collaboration and south-to-south knowledge sharing, with investment in local capacity building using open curricula developed with global partners.
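To make the “few kilobytes of memory” constraint concrete, the sketch below shows 8-bit affine quantization, one of the standard compression steps used to shrink models onto microcontroller-class devices. This is a generic, illustrative sketch under my own assumptions, not code from the TinyML network’s projects, and the weight values are made up.

```python
def quantize_int8(weights):
    """Map float weights onto the signed 8-bit range [-128, 127].

    Storing one byte per weight instead of four is a large part of how
    a model fits into a few kilobytes of microcontroller memory.
    """
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / 255 if hi > lo else 1.0   # float step per int level
    zero_point = round(-128 - lo / scale)         # aligns lo with -128
    quantized = [max(-128, min(127, round(w / scale) + zero_point))
                 for w in weights]
    return quantized, scale, zero_point

def dequantize(quantized, scale, zero_point):
    """Recover approximate float weights for on-device inference."""
    return [(q - zero_point) * scale for q in quantized]

# Illustrative weights; reconstruction error stays below one quantization step.
weights = [-1.0, -0.25, 0.0, 0.5, 1.0]
q, scale, zp = quantize_int8(weights)
restored = dequantize(q, scale, zp)
```

At int8 precision each weight costs one byte, so a model with roughly ten thousand parameters needs on the order of 10 KB, which matches the memory budget of the sub-$10 boards described above.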


### Model Optimization Research


Ioanna Ntinou from Queen Mary University of London presented research from the Raido project demonstrating how knowledge distillation techniques can achieve significant efficiency gains. Her work showed that model parameters could be reduced by 60% while maintaining accuracy in energy forecasting applications.


Ntinou raised concerns about current evaluation frameworks, noting: “If we are measuring everything by accuracy, and sometimes we neglect the cost that comes with accuracy, we might consume way more energy than what is actually needed.” She emphasized that sustainable AI development requires active intervention and new success metrics that balance performance with environmental impact.
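The knowledge-distillation idea summarized above can be sketched in a few lines: a small “student” model is trained to match the temperature-softened output distribution of a larger “teacher”. The sketch below shows only the distillation loss (in the style of Hinton et al.), not the Raido project’s actual code; the logits and the temperature value are illustrative.

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax; higher temperature softens the distribution."""
    scaled = [z / temperature for z in logits]
    peak = max(scaled)
    exps = [math.exp(z - peak) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between softened teacher and student outputs.

    Minimizing this trains the smaller student to mimic the teacher,
    which is how a model can shed parameters while keeping accuracy.
    """
    teacher_probs = softmax(teacher_logits, temperature)
    student_probs = softmax(student_logits, temperature)
    kl = sum(p * math.log(p / q)
             for p, q in zip(teacher_probs, student_probs))
    return kl * temperature ** 2  # conventional T^2 gradient rescaling

teacher = [2.0, 0.5, -1.0]
perfect_student = distillation_loss(teacher, teacher)       # zero loss
poor_student = distillation_loss([0.0, 0.0, 0.0], teacher)  # positive loss
```

In a full training loop this term is usually mixed with the ordinary cross-entropy on hard labels, and pruning or quantization can be applied on top for further parameter reduction.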


## Open Source Collaboration and Economic Perspectives


### The Economics of Shared Development


Adham Abouzied from Boston Consulting Group presented economic evidence for open source collaboration, citing research showing that recreating existing open source intellectual property would cost $4 billion, while licensing equivalent proprietary solutions separately would cost $8 trillion. He argued that “open source solutions optimize energy costs by avoiding repetitive development.”


Abouzied emphasized that meaningful AI applications in sectors like energy require system-level cooperation and data sharing across value chains, noting that “AI breakthroughs came from training on wealth of internet data; for vertical impact, models need to see and train on data across value chains.”


## Civil Society and Environmental Justice


### Community-Centered Approaches


Mark Gachara from Mozilla Foundation brought environmental justice perspectives to the discussion, emphasizing that “the theater of where the most impact of climate is in the global south and it would be a farmer, it would be indigenous and local communities.”


Gachara shared a specific example from Kilifi, Kenya, where communities used AI tools for ecological mapping to advocate against a proposed nuclear reactor project. He highlighted how civil society can leverage AI for evidence generation while ensuring that communities most affected by climate change have agency in developing responses.


He referenced tools like Code Carbon that help developers measure their code’s energy consumption, demonstrating how transparency can drive behavioral change at the individual developer level.
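Tools of this kind boil down to a simple estimate: energy drawn by the hardware multiplied by the carbon intensity of the local grid. The back-of-the-envelope sketch below illustrates that calculation only; the 475 gCO2/kWh default is an assumed global-average figure for illustration, not a number from the session, and real trackers sample actual hardware power rather than taking it as a parameter.

```python
def estimate_co2_grams(avg_power_watts, runtime_seconds,
                       grid_intensity_g_per_kwh=475):
    """Estimate emissions as energy (kWh) times grid intensity (gCO2/kWh).

    475 gCO2/kWh is an illustrative world-average assumption; a coal-heavy
    grid can be several times higher, a hydro-dominated grid far lower.
    """
    energy_kwh = avg_power_watts * runtime_seconds / 3_600_000  # W*s -> kWh
    return energy_kwh * grid_intensity_g_per_kwh

# One hour of work at an average draw of 300 W:
grams = estimate_co2_grams(300, 3600)
```

Surfacing a number like this per training run, or per prompt, is exactly the kind of transparency the panel called for: once the figure is visible, developers can compare implementations and pick the greener one.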


## Transparency and Measurement Challenges


### The Need for Energy Reporting


A recurring theme was the urgent need for transparency in AI energy consumption reporting. Ntinou highlighted that the current lack of visibility into energy usage per prompt or model interaction makes it impossible to develop effective legislation or make informed decisions about AI deployment.


The panelists agreed that assessment of energy usage in widely used public models is essential before focusing on regulatory frameworks. Without standardized evaluation methods and transparent reporting, stakeholders cannot compare the environmental impact of different AI systems.


An audience question from Jan Lublinski raised the possibility of carbon trading mechanisms for AI, asking whether transparency in energy consumption could enable market-based solutions for reducing AI’s environmental impact.


## UNESCO Report Preview


Leona Verdadero participated online to provide a preview of UNESCO’s forthcoming report on resource-efficient generative AI. While details were limited in this discussion, she indicated the report would provide evidence about optimized AI models and their potential for improved efficiency.


## Key Themes and Future Directions


### Emerging Consensus


The discussion revealed broad agreement across speakers on several key principles:


– The value of smaller, task-specific AI models over large general-purpose systems


– The critical importance of transparency in energy consumption measurement


– The benefits of open source and collaborative approaches


– The need for comprehensive policy frameworks that balance innovation with sustainability


### Implementation Challenges


Several challenges remain unresolved, including the fundamental tension between AI model accuracy and energy consumption when accuracy remains the primary success metric. Regulatory challenges around data governance and cross-border data transfer continue to limit AI implementation, particularly in emerging economies.


The lack of transparency in energy consumption reporting for widely used AI models represents a critical gap that must be addressed before effective policy interventions can be developed.


## Conclusion


The discussion demonstrated that achieving sustainable AI requires coordinated efforts across education, policy, procurement, and international cooperation. The panelists agreed that sustainable AI will not emerge by default and requires active incentivization through both regulatory frameworks and market mechanisms.


The conversation revealed convergence toward practical, implementable solutions emphasizing efficiency over scale, transparency in measurement, open collaboration, and equitable access. The path forward involves not just technical innovation but fundamental changes in how success is measured and resources are allocated in AI development.


As the discussion highlighted, the solutions exist across multiple scales—from tiny, efficient models addressing specific local challenges to policy frameworks enabling sustainable AI adoption at national levels. What remains is coordinated implementation at the scale and speed required to address both AI’s environmental impact and its potential to help solve climate challenges.


Session transcript

Moderator: Good morning, everyone. A pleasure to be here with you all, in the audience, online, but also with this fantastic group of panelists. I’m sure if you read any of these reports about the most important issues of our times, you will certainly find the issues that we are discussing here today. Some reports will say governance of artificial intelligence is one of the most important issues we need to deal with. Certainly, climate change, many of these reports will say it’s one of the most important policy issues we have in front of us, as well as it’s one of the most threatening risks for humanity. Energy resources and scarce resources, and water is probably another of those issues that you’ll see in different reports here and there. The challenge for this gentleman and lady here is how to combine all those things together. And of course, with you, I hope we are going to have a very interesting conversation about that. So, I don’t want to use a cliché here, but voilà, artificial intelligence offers, maybe needless to say, interesting opportunities for addressing some of these environmental risks, and we can speak about that, how the models can help us, and us as the climate scientists, as the policy makers, as the civil society, to solve this very complex equation on how we solve the planetary crisis that we are in the middle of, but tragically, or ironically, the same technology that can help us to address the issue is contributing to the problem, because it consumes a lot of energy, and water, and so on. So, again, as I said, it’s a cliché, but our job is how we can foster the opportunity and mitigate the risk. And as usual, in these very complex lives we have right now, it’s easy to say, but not necessarily easy to do. The good news is that these people in this panel and online, they do have some interesting solutions to propose to you, and some of them are already implementing it. 
So, since I’m optimistic by nature, when my team asked me to moderate and I saw what they prepared, I was very happy that it was not only dark and terrible, but it was also about how we can actually walk the talk, right? So, this session is a lot about this, to have a dialogue with this group of different actors here that are dedicated to think about this problem, and how we can… and I think we can suggest and underline in the one hour we have some of these issues. Of course, let’s look into the implications of AI technologies for these problems. We will try to showcase some tools and frameworks. UNESCO with the University College of London, we are going to launch very soon a very exciting issue brief called Smarter, Smaller, Stronger, Resource Efficient Generative AI in the Future of Digital Transformation that my dear colleague Leona, who is online, unfortunately she couldn’t be here today also to reduce UNESCO’s carbon footprint of people travelling all over the world, but she is the person in charge of this report and you can ask further questions and interact with her on that. And also again, as I said, try to highlight the different approaches that we can take to address these issues. So I’m sure we are going to have an exciting panel and since we need to be expeditive, I will stop here and go straight to my panellists and let me start with you, Mario. So Mario Nobile is the Director General of the Agents for Digital Italy, AGIDS, I guess I say like that. Correct. And Mario, let’s first start on what you are already doing at the strategic level, right? So how your agency is trying to cope with these not easy challenges. Buongiorno, over to you.


Mario Nobile: Buongiorno, thank you and good morning to all. Our Italian strategy rests on four pillars. Education, first of all. Scientific Research, Public Administration and Enterprises. And these efforts aim to bridge the divide, ensuring inclusive growth and empowering individuals to thrive in an AI-driven economy. For the first, scientific research, our goal is promoting research and development of advanced AI technologies. You before mentioned technology evolves really fast. So now we are dealing with agentic AI. We started with large language models, then large multimodal models. Now there is a new frontier about small models for vertical domains. So we met also with Confindustria, which is the enterprises organization in Italy. And we are trying applications about manufacturing, health, tourism, transportation. And this is important for energy-consuming models. For public administration, we are trying to improve the efficiency and effectiveness of public services. For companies, Italy is the seventh country in the world for exports. So we must find a way to get to the application layer and to find concrete solutions for our enterprises. And education, first of all. I always say that now the debate is not humans versus machines. Now the debate is about who understands and uses managed AI versus who don’t. Okay. And in Italy, the AI strategy is, we wrote it in 2024, we have also a strategic planning, the three year plan for IT in public administration. And we emphasize the importance of boosting digital transition using AI in an ethical and inclusive way. This is important for us. We have three minutes, so I go to the conclusion. And we are stressing our universities about techniques like incremental and federated learning to reduce model size and computational resource demands. This is the first goal.
And so this approach minimizes energy consumption and we are creating the conditions to transition from brute force models, large and energy consuming, to vertical and agile foundation models with specific purposes, health, transportation, tourism, manufacturing. This is the point. Now, I was saying technology evolves faster than a strategy. So we have a strategy, but we are dealing with agentic AI, which is another frontier.


Moderator: Thank you very much, Mario. And I'm glad to hear your conclusion about what you are doing with your universities. Because I'm just the international bureaucrat here, right? I don't understand anything about these things. But I did read the report that my team prepared, and they were making recommendations along the lines of what you were saying. And that brings me to our next speaker: Marco Zennaro, an Edge AI expert at the Abdus Salam International Centre for Theoretical Physics, who also works with UNESCO. My children always ask me, why don't you do these kinds of interesting things? Because they don't understand what I do, right? People like Marco are the ones walking the talk in UNESCO. So Marco, you have worked extensively on TinyML and energy-efficient AI applications, also in African contexts. Can you tell us, especially for people like me with no technical background, how we should understand the benefits of what you are doing?


Marco Zennaro: Sure, sure. Definitely. Thank you very much. So let me introduce TinyML first. TinyML is about running machine learning models on really tiny devices, on really small devices. And when I say small, I mean really small: devices that have a few kilobytes of memory and really slow processors. But they have two main advantages. The first one is that they are extremely low power. So we are talking about green AI; these devices consume very little power. The second advantage is that they are extremely low cost: one of these chips is less than a dollar, and one full device is about $10. So that is, of course, very positive. And they allow AI or machine learning to run on the devices without the need of an internet connection. We heard during IGF that a third of the world is not connected. If we want data from remote places where there is no internet connection, and we want to use AI or machine learning there, this is a really good solution. You asked about applications. In 2020, together with colleagues from Harvard University and Columbia University, we created a network of people working on TinyML with a special focus on the Global South. Now we have more than 60 universities in 32 different countries, so we have many researchers working on TinyML in different environments, and they have worked on very diverse applications. Just to cite a few: there are colleagues in Zimbabwe who worked on TinyML for foot-and-mouth disease in cows, sticking this device in the mouth of the cow and detecting the disease. There are colleagues in Kenya counting the number of bees in a beehive, colleagues in Peru using TinyML to detect anemia through the eyes in remote villages, and colleagues in Argentina using TinyML on turtles to understand how they behave. So very diverse applications, and many of them have impact on the SDGs. 
And again, using low cost and extremely low power devices.
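Fitting a model into the few kilobytes Marco describes usually relies on compression, most commonly post-training quantization of the weights. The sketch below is a generic illustration with invented layer sizes, not a description of any specific TinyML deployment: it maps float32 weights to int8 plus a single scale factor.

```python
import numpy as np

def quantize_int8(weights):
    """Symmetric post-training quantization: float32 -> int8 plus one scale factor."""
    scale = np.max(np.abs(weights)) / 127.0
    q = np.round(weights / scale).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Approximate reconstruction of the original float weights."""
    return q.astype(np.float32) * scale

# A made-up 64x32 weight matrix standing in for one layer of a small model.
w = np.random.default_rng(1).normal(size=(64, 32)).astype(np.float32)
q, scale = quantize_int8(w)

print(w.nbytes, q.nbytes)  # 8192 vs 2048 bytes: a 4x weight-memory saving
err = np.max(np.abs(w - dequantize(q, scale)))
print(err)  # worst-case reconstruction error, bounded by half a quantization step
```

Real TinyML toolchains go further (quantized activations, integer-only kernels), but the 4x reduction in weight memory shown here is the core effect that lets models fit on sub-dollar chips.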


Moderator: Thank you. This is fascinating. As you said, we are having in this IGF, which I think is a good thing, lots of discussions regarding, in this UN language, how we do these things leaving no one behind. And these are very concrete examples, right? Because it is about the cost, and about being low-intensive on energy. So I am very glad to hear that, and to see that those things are possible. This offers us a bit of hope, because, as I said in the beginning, I am always concerned that we only look at the problems and the terrible risks, without looking at what is already happening to address the issues, right? So let me move now to a different setting, to our online guests. We have one speaker joining us from the internet space, from the digital world. He is Adham Abouzied, Managing Director and Partner at the Boston Consulting Group. Welcome, Adham, to this conversation. I know that BCG, the Boston Consulting Group, is very much concerned about these issues as well; you are putting a lot of effort into them. And you yourself have worked at the intersection of AI, climate resilience, and digital innovation, with a specific focus on open-source AI solutions. So, in your three minutes, can you tell us how these issues connect with the main topic of this panel, which of course is environmental sustainability? Welcome, and over to you.


Adham Abouzied: Thank you very much. Honored and very happy to join this very interesting panel, and I must say I enjoyed so much the interventions of the panelists before me. You asked a very, very interesting question. I will start my answer with the results of a study recently made by Harvard University on the value of the open-source wealth that exists on the ground today. The study estimates that recreating, just one time, the wealth of open-source intellectual property available today on the Internet would cost us $4 billion. But it does not stop there, because if you imagine that this material was not open source, and that every player who wanted to use it would either recreate it or pay a license, then this $4 billion of cost would grow to $8 trillion, just because of the repetition. So in a sense, having something that is open source significantly optimizes the amount of work required to get value out of these AI algorithms, or digital technologies in a larger sense. Instead of doing the thing many times over, you have communities that contribute and build on top of each other's intellectual creation. That is how you get to broader impact at a much lower cost, and a much lower energy cost as well. 
Now, from experience, and building specifically on what Mario was saying: for you to be able to implement vertically focused AI models that generate value across a certain system or value chain, you need some kind of system-level cooperation. All of the AI breakthroughs we have been seeing through very large, generic foundational models exist because those models were trained on the data available out there on the internet: the wealth of text, images, and videos. These models are good at generic generative tasks because of what they have been able to see and train on. But for you to have a meaningful impact with models that have a vertical focus and can create value across a certain system, they have to see and train on data, and build on top of the decisions, made across a value chain. Let me give a concrete example that relates to your opening remarks. Take the energy sector today. If you are seeking AI development that, rather than creating a burden on the energy system, actually makes it more optimised, then you also need to allow AI to create value within the energy sector, helping optimise decisions from generation to transmission to actual usage. And if I take this energy system as an example, currently only in very few countries in the world can you see cooperation, data exchange, and IP exchange across the different steps of the value chain. 
This would be essential. And if it is enabled through policies and through protocols, like TCP/IP when we first started the internet, then the vertical models can create much more significant value. That would allow, to take a big example, the energy sector to optimise its income, and it would allow the AI algorithms not only to consume less energy themselves, but also to help the energy sector optimise its output and become greener.


Moderator: Thank you. Very interesting. So at the end of the day, we need to move from a lose-lose game to a win-win game, right? And you mentioned some keywords that I am sure Ioanna will talk about: optimisation, cooperation. So that is a very interesting segue. Let me just make one remark on open source and openness, on open solutions, which is very interesting. We witnessed a concrete global example during the pandemic, right? When the scientists decided to open up their work, the vaccines were produced in record time in human history. So here, mutatis mutandis, we need to use the same logic to fix this huge problem. So, Ioanna Ntinou, is that correct? It's Ntinou. OK, sorry about that. Ioanna is a postdoctoral researcher in computer vision and machine learning at Queen Mary University of London, and you work on the Raido project, which focuses on reliable and optimised AI, again, that word. So can you walk us through a specific use case from the project where you managed to show this connection with efficiency? Over to you.


Ioanna Ntinou: Yes, thank you. I'm happy to be here. So, as you said, Raido stands for reliable AI in data optimisation. What we are trying to do is to be a bit more conscious, when we develop AI models, of the energy the models are going to consume. By the end of the project, we will have developed a platform that we are going to test against real-life use cases. We call them pilots, and they span different domains: Robotics, Healthcare, Smart Farming, and Critical Infrastructure. Today I will focus on one specific domain, the energy grid, where we are working with two companies from Greece. One is the national energy company, which is called PPC, and the other is a research center called CERF. What we have is an optimized model for day-ahead forecasting of energy demand and supply, particularly in smart homes that have a microgrid. What we were given was a big time-series model that would predict the energy demand of the small electronic devices we have in our houses, including, let's say, a bulb. The problem with this model was that it was a bit big. It was very good in accuracy, but it was quite big, and it would need retraining quite often, because, as you know, the energy consumption of a device depends on seasonality and on the different habits someone can have in a house. And what we did was simply knowledge distillation. We took this model and used it as a teacher to train a smaller model to produce more or less the same results. It was quite close in accuracy, but we reduced the number of parameters to 60%. So we got a much smaller model, which we call the student, that has more or less the same performance, and we got several benefits from this process. First, we have a smaller model that consumes much less energy. 
Secondly, this model is easier to deploy on small devices, as we discussed before in the panel, which is very critical, because then you can democratize access to AI for a lot of houses, right? If you have a small model, you can deploy it much more easily. And also, by maintaining the accuracy, we help these companies have a more accurate forecast of the energy demand there will be, and in that way they can consume or produce energy with this in mind. So we are happy with this, and we hope that we will also help the rest of the pilots get good models and good results.
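The teacher-student step Ioanna describes can be sketched in toy form. Everything below is an invented illustration, not the Raido pipeline: a wide linear model stands in for the large time-series network, least squares stands in for gradient training, and the "student" simply keeps the teacher's ten most influential inputs while fitting itself to the teacher's predictions rather than to ground-truth labels.

```python
import numpy as np

rng = np.random.default_rng(0)
n_features = 100

# "Teacher": a pretrained wide model whose output is dominated by a few
# features, mimicking the redundancy that makes distillation work.
w_teacher = np.concatenate([rng.normal(loc=5.0, size=10),
                            rng.normal(scale=0.05, size=90)])
teacher = lambda X: X @ w_teacher

# The student learns from the teacher's predictions ("soft targets"),
# not from ground-truth labels: the core idea of knowledge distillation.
X = rng.normal(size=(5000, n_features))
soft_targets = teacher(X)

# Student: 10x fewer parameters, using only the teacher's 10 strongest inputs.
keep = np.argsort(np.abs(w_teacher))[-10:]
w_student, *_ = np.linalg.lstsq(X[:, keep], soft_targets, rcond=None)
student = lambda X: X[:, keep] @ w_student

rel_err = np.linalg.norm(student(X) - soft_targets) / np.linalg.norm(soft_targets)
print(w_student.size, rel_err)  # 10 parameters; relative error stays small
```

The design point mirrors the talk: if the big model is redundant, a much smaller one trained against its outputs can track it closely at a fraction of the parameter count and energy.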


Moderator: Thank you. Fascinating. And so it is also a bit about evidence-based next steps, either for companies or for policymaking. I remember, a few years ago, before the AI times but already in the open data environment, the energy ministry of a particular country launched a hackathon with open data so that people could help improve the efficiency of energy consumption in the public sector. So transparency always sheds light, sometimes in very funny ways. After the hackathon, when people had run the models, they found out that the ministry that consumed the most energy in that country was the Ministry of Energy and Environment, which was a big embarrassment for them. But then they actually implemented some very concrete policies to resolve these issues. So that is also interesting: we need to connect these conversations with the conversation on transparency and accountability. So, Mark, last but not least. Mark Gachara is a senior advisor for Global Public Policy Engagements at the Mozilla Foundation. And we are going to get back, I guess, to the conversation about open source, because Mozilla has a long experience with that. And this mission of Mozilla…


Mark Gachara: We had a number of grantees thinking about this particular area. We had, for example, an organization from France called Code Carbon. Within this theme of environmental justice, they were asking: when I write code, how much energy am I using? Because at the end of the day, fossil fuels are used to generate the energy. So a developer can actually optimize their code, and it is an open-source project that is available for people to use before they harm the environment. These are just examples of practical use cases that measure, because in management we are told that if you can't measure it, you can't manage it; probably you also can't manage the risk. So I think being able to run this kind of action research that can eventually talk to policy sheds light, making environmental justice part of the core definition of how we roll out AI solutions.
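The measurement idea Mark raises (how much energy does my code use?) reduces to energy ≈ power × time and emissions ≈ energy × grid carbon intensity. The sketch below shows only that general idea with loudly placeholder constants; the real Code Carbon project reads hardware power data and regional grid intensity instead of assuming them, so treat this as a back-of-envelope illustration, not its API.

```python
import time

# Placeholder assumptions -- real tools read hardware counters and regional
# grid data; these two numbers are invented for the illustration.
CPU_POWER_WATTS = 65.0        # assumed average CPU package power
GRID_G_CO2_PER_KWH = 400.0    # assumed grid carbon intensity

def estimate_emissions(func, *args):
    """Run func and return (result, rough grams of CO2 emitted)."""
    start = time.perf_counter()
    result = func(*args)
    seconds = time.perf_counter() - start
    kwh = CPU_POWER_WATTS * seconds / 3_600_000  # watts * seconds -> kWh
    return result, kwh * GRID_G_CO2_PER_KWH

def naive_sum(n):
    """A deliberately wasteful loop a developer might want to optimize."""
    return sum(i * i for i in range(n))

_, grams = estimate_emissions(naive_sum, 1_000_000)
print(f"~{grams:.6f} g CO2")  # tiny for one call, but it accumulates at scale
```

The point of Mark's example survives the simplification: once the cost per run is visible, a developer can compare two implementations and pick the cheaper one before deploying.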


Moderator: Thank you. Thank you. Super interesting, and if I hear you correctly, there are two important issues. There can be energy-efficient AI by design, right from the moment you are writing the code, which is an interesting story also from the human rights perspective of being by design; but there is also the capacity to have decent risk assessments for these things. So here is what is going to happen: we are going to have a quick second round of ping-pong here, and then we are going to open the floor to you, so please start thinking about your questions for these fantastic people. Mario, you are also the regulator, and regulators can use sticks, but they can also use carrots. So I guess my question for you is about the carrots.


Mario Nobile: Thank you. I'll try to connect some dots. I have three answers to this, but I want to connect them, and I fully agree with the other panelists, with Adham. The first one is education, and here I connect with Adham's answer. We are writing, through public consultation, guidelines for AI adoption, AI procurement, and AI development. Think about public administration: in Italy we have 23,000 public administrations. Everyone must adopt, everyone must do procurement, and some of them must develop; I am thinking of ministries like Environment and Energy, of course, but also the National Institute for Welfare. And what Adham was saying before is also about the interaction between cloud and edge computing. This is important. But we have other challenges as well, and I am thinking about the energy model. What are the challenges now for a good use of artificial intelligence? Data quality and breaking the silos. These two points are really important for us. So the first answer is education, collaboration, and sharing: new questions about what I can do with my data and with my data quality, and what cloud and edge services I can develop. The second one, which the Agency for Digital Italy is working on, is the Open Innovation Framework. It is about new applications: not the classic tender for buying something, but an Open Innovation Framework that enables public administrations to carry out planned procedures and facilitate supply and demand in a new way. The third one is money. So the carrot is money. In Italy we are using a big amount of money from the National Recovery and Resilience Plan: we have 69 billion euros for the ecological transition and 13 billion euros for business digitalization. And now we are working on a new framework for tax credits for enterprises, for a good start in using AI in small and medium enterprises.


Moderator: Thank you very much. This is fascinating, and finishing with the money always gives people hope. But I wanted to underline the aspect of procurement, because in 99% of UN member states, the public sector is still the biggest single buyer. So if we can have decent procurement policies, that is already a lot, right? So congrats on that. Marco, on your side of the story, with your experience, can you tell us what the key enablers are? What are the drivers? If someone needs to start looking into this, policymakers and scientists, what would you…


Marco Zennaro: And I am going to be very concrete and practical. The first one is investing in local capacity building in embedded AI. Capacity building has been mentioned many, many times in the last few days, but I would say the new aspect from my side is to give priority to funding for curricula that are co-developed with global partners. So not reinventing the wheel, but using existing and open curricula such as the one we developed with the TinyML Academic Network, which is completely open and can be reused. The second one is to promote open, low-cost infrastructure for TinyML deployment. People need to have these devices in their hands, and very often that is not easy. So subsidizing access to these open-source TinyML tools and low-power hardware would be extremely useful, because it lowers the entry barriers and stimulates local innovation on these SDG-related challenges. The third one is to integrate TinyML into national digital and innovation strategies. When people design a strategy, please don't forget that there is also this alternative model of really tiny devices running AI; that is a component that should be included. The next one is to fund context-aware pilot projects in key development sectors. We heard from Mozilla about funding pilots and testing new solutions; well, that is possible for TinyML. Again, we have seen many, many interesting applications, so funding even more would be extremely useful. And finally, facilitating regional collaboration and knowledge sharing. We had a few activities tailored for specific regions, with the idea that people from the same region have the same issues, and that has been extremely successful. 
So supporting this south-to-south collaboration, and focusing on specific regions so that they can use TinyML to solve their common issues.


Moderator: Super, very interesting. So co-working at many levels, right, as you said. It is also interesting because I have been participating in several discussions about DPI, digital public infrastructure, or also public-interest infrastructures, and to be honest, I have not been hearing much about what you are describing. So I think it is an interesting way to connect the dots if this can be presented more and more as a potential solution. So Adham, let me get back to you now. I know that from where you sit at the Boston Consulting Group, you are also looking at governance models for this issue. So what are the key features there? Obviously in three minutes, which is always a challenge, but what can you tell us on the governance side of the story?


Adham Abouzied: Yeah, very clear. We have mainly been looking at how to inspire the right governance for a systems-level change, to push forward and accelerate the adoption of open source, data sharing, and intellectual-wealth sharing across different systems. And I think the most important thing is to have the right policies and the right sharing protocols across every industry, very well designed and well enforced, as you said before, with the carrot and the stick at the same time. It is also very important to make sure that the different players are actually incentivized to adopt, and, as Mario was also saying earlier, to set up the right regulatory reference for them to adopt at their own level, and afterwards to share the outcomes, the insights, and the data with others. We have faced so many difficulties in so many sectors while trying to apply AI, whether generative or other techniques, against the current regulation. The cloud is one: having proper data governance and data classification, understanding what is sensitive, what should be on the cloud, what should not, and under which layers of security. It is even worse in several countries, specifically emerging countries, that do not have hyperscalers or an actual physical wealth of locally implemented data centers, and that have regulation against their data traveling across borders. That would not even allow the prompts, the queries, to go up and query foundational models or other models sitting in the cloud somewhere outside the country. So regulating this, deciding what type of prompts should actually cross the border and which ones should not, and where you may need to go for local alternatives with more focused and smaller models, maybe less efficient. 
You start there until the regulation evolves; that is very, very important. But after all, what is really important is to have certain rules across the players in a value chain about what they can share and in which format, and what the rights and responsibilities are that go with it as it moves through the value chain, and for them to believe that sharing is actually a win-win situation. It is not the opposite of a free market, of competitiveness, of keeping your intellectual property and conserving your competitive position. It actually contributes to it: it gives you a wealth of information, a layer on top of which you can develop a competitive edge.


Moderator: Thank you. Very, very interesting. And again, Marco was with me yesterday in another panel about AI, and the issue of standardization and rules appeared a lot, right? So we will need to deal with that sooner rather than later. So, Ioanna, as you noticed, the second round is a lot about lessons learned and insights for the different stakeholders. In your case, what would you tell policymakers from the perspective of what you are doing in Raido? What are the key lessons to share with them?


Ioanna Ntinou: As you said, I have more of a technical background, right? So I think that one of the key lessons I have learned from Raido, and from working as an AI developer, is that sustainable AI is not going to emerge by default. It needs to be incentivized and actively supported. The reason that, even when I train a model, I will opt for a bigger model and for more data, is the way we actually measure success, which is by accuracy. If we measure everything by accuracy, we sometimes neglect the cost that comes with it, and we might consume far more energy than is actually needed. In some cases, a small improvement comes with a huge increase in energy, and we have to be careful with this trade-off. I think one of the first steps is simply to assess how much energy is used by widely used public models. I will give you a concrete example. There is now GPT-4, or there is DeepSeek; these models might have 70 billion parameters, and they consume massive amounts of electricity during training and during inference, but we don't really know how much energy is used when we submit a simple prompt to GPT. We are not aware of this. So I think we should start by having some transparency in the way we report energy. Simply developing evaluation standards at the prompt level would be a great first step, because it would build awareness and transparency. After that, we can see what can be done with it. But without knowing how much energy is actually used, I think we cannot focus on legislation, right?
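Ioanna's call for prompt-level energy reporting can be made concrete with a back-of-envelope estimator. Every constant below is an assumption made for illustration: the common approximation of ~2 FLOPs per parameter per generated token for a dense transformer, plus guessed hardware efficiency and utilization. It is not a measurement of any real model.

```python
def prompt_energy_wh(params, tokens, flops_per_joule=1e12, utilization=0.3):
    """Rough energy for one model response, in watt-hours.

    Assumptions (all placeholders): a dense transformer forward pass costs
    about 2 * params FLOPs per generated token; the hardware delivers
    flops_per_joule at the given average utilization.
    """
    flops = 2 * params * tokens          # total inference FLOPs
    joules = flops / (flops_per_joule * utilization)
    return joules / 3600.0               # joules -> watt-hours

# A hypothetical 70B-parameter model generating a 500-token answer:
print(f"{prompt_energy_wh(70e9, 500):.3f} Wh per prompt (illustrative)")
```

Even a crude formula like this makes the trade-off Ioanna describes visible: energy scales linearly with parameter count, so a distilled model with 60% fewer parameters cuts the per-prompt estimate by the same factor.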


Moderator: Super. Of course, it is very important, because transparency generates the evidence that is needed, right? And it is a good segue for you, Marc, because the question for you was precisely how civil society can be even stronger in demanding more transparency and accountability from these companies.


Mark Gachara: Yeah, so I will answer the question in two parts. I will start with the things that we at Mozilla see are missing. Right now there is a lot of money going into climate-smart agriculture or climate AI solutions, but it goes into making them more efficient and more effective, as opposed to asking how we mitigate the harms they are causing, and Ioanna has just mentioned a little bit about that. So it is important, from our end, to think about how the solutions we build can prevent harm, improve transparency, and make the impact on the environment more visible in the work you are doing. This could come through policy work, through research, and also through strengthening community work, strengthening civil society organizations that are working with communities on the ground. I will give an example of somebody we worked with last year, and this is ongoing. In Kilifi, Kenya, we have the Center for Justice, Governance and Environmental Action. They are working on climate justice issues. There are plans in Kenya to build a nuclear reactor somewhere in Kilifi by the sea, and one of the things this CSO did was an ecological mapping of an area in the Indian Ocean, using AI to do it. They have come up with a report saying: we think this is ill-advised, and here are the reasons why; we should actually be focusing on renewable energy sources, because Kenya is a net producer of renewable energy. This is what civil society can do, for example. They are pushing back. It is still a work in progress right now, but you can generate evidence to be able to do advocacy. Unfortunately, once the ship has left the dock, civil society is left trying to push back, which is unfortunate. But again, I come back to the three things. How do we use this kind of evidence to advise policy? 
How can we do action research? And how do we put money into prevention, into making these issues more transparent, so that taxpayers and government can actually quantify the issue in front of them? Thank you.


Moderator: Thank you. Very interesting. And also the need to do all that in an unfortunately shrinking space for civil society, with fewer funds for this accountability and transparency work, including for journalism. So now it is time for you. Questions? Leona, also in the online space, do we have questions for the panel? You have two mics here. I know this setting makes us look very distant, but we welcome your thoughts and questions. And Leona, from online, I don't know if you are hearing us.


Leona Verdadero: Yes, hi. No questions yet.


Moderator: So while maybe you are all getting less shy, can I ask you, Leona, to come on screen and give us a one-minute teaser of the report on these issues that we are going to launch soon?


Leona Verdadero: Yes. Hi. Good morning, everyone. Can you hear me? Yeah. Yes. Okay, great. Great to see all of the panelists, and thank you so much for joining. Just to echo UNESCO's work, really our role as a laboratory of ideas, we are also trying to push the envelope on actually defining what we mean by all these sustainable, low-resource, energy-efficient solutions. So yes, we have been doing this research in partnership with University College London, where we are running experiments looking at how we can optimize the inference phase of AI. Inference is, as other colleagues have mentioned, what happens when we are interacting with these systems. And I really love this conversation so far, because most of you have echoed the need to be more efficient and climate-conscious, and to look at AI in a smaller, stronger, smarter way. Here we are looking at energy-efficient techniques such as optimizing the models and making them smaller with quantization, distillation, all of these technical things. And what we are really seeing is that these different experiments actually make the models smaller, more efficient, and better performing. What that means, especially for stakeholders working in low-resource settings, where you have limited access to compute and infrastructure, is that these types of models become more accessible to you. So it is really also part of answering the question: what type of model, what type of AI do we need for the right type of job, right? We are also trying to demystify the thought that bigger is better. It is not necessarily better, right? That is what you are seeing now with all these large language models that are power hungry and water thirsty. So here is where we are trying to merge two things together. 
One is very technical research; the other is how we translate that into a policy setting: what it means for policymakers thinking about developing, deploying, or procuring AI systems for their use cases. If, by presenting evidence, we are able to move the needle by promoting more eco-conscious AI choices and model usage, then I think that is one very concrete start for this bigger push to look very concretely at what it means to use smaller and smarter AI. And in the chat we are going to put a sign-up link, so all of you will be notified when we launch this report. Thank you.


Moderator: Thank you, Leona, for the teaser, and I strongly recommend that you see the report. By the way, it is also beautifully designed by one of our colleagues who is an expert in design, so it tries to show, through cartoons and so on, in a user-friendly way, some of these issues that are sometimes complex to understand. Questions from here? Otherwise, I have Brent, Jan. Please take a mic.


Audience: Hello, my name is Jan Lublinski. I work for DW Akademie, in media development under the roof of Deutsche Welle, but in an earlier life I was a science journalist, that is 10 years ago, and I am used to hearing new things. I must say I am thinking while I talk, because most of what you told us is new to me. So this is really exciting, and congratulations on the report and on putting together the panel the way you have done. I am trying to remember other technological revolutions we have seen in the past and how they were made more transparent, like heating systems in houses. In Germany we have passes for every house where you can see how energy-efficient it is, and I think that is obligatory in the EU. We can think of many other technologies where we, step by step, made things more transparent, so that people could actually see what they mean and take conscious decisions, and that is why I am really glad, again, that you stress the transparency issue. But of course, the next thought I have is that we also trade carbon as a negative good, right? We force industry to become more efficient by giving them certificates for their carbon emissions. And I wonder, with everything you know so far about this topic, do you think we can take transparency on energy consumption so far that one day we can actually trade the right to consume energy with AI technologies, and in that way force industry to develop smaller models and be really conscious of whether, as you say, we really need to take that last, very expensive step for the extreme accuracy we might not need? Would that be a way to advance in the direction you all point to? Maybe our colleague from the Boston Consulting Group has ideas on this, as he is looking at the overarching strategies. Thank you.


Moderator: Thank you. Yeah, so we have five minutes and 39 seconds, which means one minute per person to interact with Jan’s question and maybe already offer us one takeaway. And if you don’t necessarily want to interact with Jan’s question, I would ask you something related to what he said: what are the questions you think science journalists should be asking about this issue? So, Mario, over to you.


Mario Nobile: Well, in one minute it is very tricky, but I’ll try. I think the key words are related: sustainability calls for transparency, transparency calls for awareness, awareness calls for policies, also government policies. And I think the right question is not when but now: we must think about sustainability, energy consumption, and the potential for AI to displace jobs. Everything is related; we cannot think about energy consumption without job losses. So I think journalists can ask for solutions to job losses, energy consumption, and awareness among people. It’s very complicated; we would need two hours, not one minute.


Moderator: Thank you, very interesting. And well, that’s the job of journalists: to find the time to do it.


Marco Zennaro: Well, my question would be: what kind of models do we need? In TinyML, on these small devices, you can only have models that are very specific to an application, again, coffee leaves or turtles. So my point is: do we always need these super-wide models that can answer every question we have? Or is it better to focus on models that solve specific issues which are useful for the SDGs, or for humanity in general?


Moderator: Small is beautiful, right? There was a fantastic article in The New Yorker a few years ago, which I recommend, not about this, but about “small is beautiful” overall. Adham, your one minute.


Adham Abouzied: Yeah, I think it’s very interesting. I would reiterate what I said: yes, smaller, focused models are something that would create significant value and would be much more optimized. To get there, they need access to data that is currently not necessarily out there, and for this to happen we need to have the policy and the governance in place that would make it happen, because today there isn’t a wealth of data out there that would allow this. And honestly, consuming the energy that is required for AI to deliver real value within the systems and value chains that help advance the SDGs and help in daily lives is, I would say, a much more important use than consuming this energy for large models to create videos and images on the internet.


Moderator: Thank you. Ioanna?


Ioanna Ntinou: I think my question, as a researcher, would be: if we focus so much on having smaller models, do we neglect all the progress that has been made so far with large language models, the whole revolution they have brought? But then I guess the answer is that task-specific small models still have great value, and I am talking about actual value. Look at our mobile phones: the technology there is small models, because they are bounded by the battery life and the processor the phone has. Our phones already run small, task-specific models, meaning there are still many things to learn, beyond the energy part, in terms of science and knowledge and what we can achieve as humans by pushing the boundaries of research. So I also think we should place another type of value on this, which is the knowledge and the learning that come out of the process.


Moderator: Thank you. Mark?


Mark Gachara: Yeah, on my end, to respond a bit to the question that was asked: the theater where climate has its greatest impact is the Global South, and those affected would be a farmer, would be indigenous and local communities. So I come back and ask: how do we have funds that support grassroots organizers and indigenous communities to actually build some of these solutions together? It has already been said that the problems are really localized, so we can focus on local solutions, and we need to foster funding strategies that look into research and into creating science that actually solves these climate problems together with these local communities. And public procurement has already been mentioned, which was a good thing. Thank you. Thank you so much.


Moderator: We come to the end of this fascinating discussion. I want to personally thank each one of you, because I have learned a lot, and I hope it was the same for the audience and for those online. Thank you, Adham; thank you, Leona, for your online participation; and here in the room, Mario, Marco, Ioanna, and Mark; and all of you for listening to us so attentively. Let’s try to make this a greener world while still enjoying the benefits of this fantastic digital and AI revolution. Thank you so much. Enjoy the rest of the IGF. Thank you.



Mario Nobile

Speech speed

99 words per minute

Speech length

803 words

Speech time

482 seconds

Italy’s AI strategy rests on four pillars: education, scientific research, public administration, and enterprises to ensure inclusive growth

Explanation

Mario Nobile outlined Italy’s comprehensive approach to AI development that focuses on four key areas to bridge divides and empower individuals in an AI-driven economy. The strategy emphasizes inclusive growth and ensuring that AI benefits reach all sectors of society.


Evidence

The strategy includes promoting research and development of advanced AI technologies, improving efficiency of public services, supporting Italy as the seventh country in the world for exports, and emphasizing education where the debate is not humans versus machines but those who understand AI versus those who don’t


Major discussion point

AI Governance and Strategic Implementation


Topics

Development | Economic | Legal and regulatory


Need for guidelines on AI adoption, procurement, and development across 23,000 public administrations in Italy

Explanation

Mario Nobile emphasized the importance of creating comprehensive guidelines for AI implementation across Italy’s vast public administration network. This involves standardizing approaches to adopting, procuring, and developing AI solutions across all government levels.


Evidence

Italy has 23,000 public administrations that must adopt, procure, and some must develop AI solutions, including ministries like Environment and Energy, and the National Institute for Welfare


Major discussion point

AI Governance and Strategic Implementation


Topics

Legal and regulatory | Development


Agreed with

– Adham Abouzied
– Marco Zennaro

Agreed on

Need for comprehensive policy frameworks and governance structures


Transition from large energy-consuming models to vertical and agile foundation models for specific purposes like health, transportation, and manufacturing

Explanation

Mario Nobile advocated for moving away from brute force, large AI models toward smaller, more efficient models designed for specific industry applications. This approach aims to reduce energy consumption while maintaining effectiveness for targeted use cases.


Evidence

Working with Confindustria (enterprises organization in Italy) on applications for manufacturing, health, tourism, transportation; emphasizing universities work on techniques like incremental and federated learning to reduce model size and computational resource demands


Major discussion point

Energy-Efficient AI Technologies and Solutions


Topics

Development | Infrastructure | Economic


Agreed with

– Marco Zennaro
– Adham Abouzied
– Ioanna Ntinou
– Leona Verdadero

Agreed on

Need for smaller, more efficient AI models over large general-purpose models


Italy allocates 69 billion euros for ecological transition and 13 billion for business digitalization from National Recovery Plan

Explanation

Mario Nobile highlighted Italy’s significant financial commitment to both environmental sustainability and digital transformation through the National Recovery and Resilience Plan. This substantial funding demonstrates the government’s prioritization of green and digital transitions.


Evidence

69 billion euros for ecological transition and 13 billion euros for business digitalization, plus work on new framework for tax credit for enterprises for AI adoption in small and medium enterprises


Major discussion point

Funding and Policy Mechanisms


Topics

Economic | Development | Legal and regulatory


Open Innovation Framework enables new procurement approaches beyond traditional tenders for public administration

Explanation

Mario Nobile described Italy’s innovative approach to public procurement that moves beyond conventional tendering processes. This framework facilitates better matching of supply and demand for AI solutions in the public sector.


Evidence

The Agency for Digital Italy is working on Open Innovation Framework for new applications, not classic tender about buying something, but enabling public administration to carry planned procedures and facilitate supply and demand in a new way


Major discussion point

Funding and Policy Mechanisms


Topics

Legal and regulatory | Economic | Development


Need to address job displacement alongside energy consumption as interconnected challenges requiring comprehensive solutions

Explanation

Mario Nobile emphasized that sustainability, energy consumption, and job displacement from AI cannot be addressed in isolation. He argued for a holistic approach that considers the social and economic impacts alongside environmental concerns.


Evidence

Key words are related: sustainability, transparency, awareness, policies, and government policies; cannot think about energy consumption without job losses; need solutions for job losses, energy consumption, and awareness from people


Major discussion point

Future Research and Development Directions


Topics

Economic | Development | Sociocultural



Marco Zennaro

Speech speed

169 words per minute

Speech length

780 words

Speech time

276 seconds

TinyML enables machine learning on tiny devices with extremely low power consumption and cost under $10 per device

Explanation

Marco Zennaro explained that TinyML allows AI to run on very small devices with minimal memory and slow processors, but with significant advantages in power efficiency and affordability. These devices can operate without internet connection, making AI accessible in remote areas.


Evidence

Devices with few kilobytes of memory and slow processors, chips cost less than a dollar, full device about $10, extremely low power consumption, can run without internet connection
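
One reason models fit in a few kilobytes is weight quantization: storing 8-bit integers instead of 32-bit floats cuts memory by a factor of four. The sketch below is a minimal illustration of symmetric int8 quantization; the weight values are invented for the example, not taken from any real TinyML deployment.

```python
# Why TinyML models fit in a few kilobytes: weights are quantised from
# 32-bit floats to 8-bit integers, a 4x size reduction, before being
# flashed to the device. The weights below are made-up illustrative values.
def quantize_int8(weights):
    """Map float weights onto the int8 range [-127, 127] with a shared scale."""
    scale = max(abs(w) for w in weights) / 127
    return [round(w / scale) for w in weights], scale

def dequantize(quantized, scale):
    """Recover approximate float weights from the int8 values."""
    return [q * scale for q in quantized]

weights = [0.42, -1.27, 0.05, 0.9]
quantized, scale = quantize_int8(weights)
restored = dequantize(quantized, scale)  # close to the originals, at 1/4 the size
```

The same idea, combined with pruning and specialized runtimes, is what lets a sub-$10 device run inference on a battery.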


Major discussion point

Energy-Efficient AI Technologies and Solutions


Topics

Development | Infrastructure | Economic


Integration of TinyML into national digital and innovation strategies is essential for comprehensive AI planning

Explanation

Marco Zennaro argued that policymakers should include TinyML as a component in their national strategies rather than overlooking this alternative approach to AI. This integration ensures that low-power, accessible AI solutions are considered in national planning.


Evidence

When people design a strategy, they should not forget that they have this kind of alternative model of really tiny devices running AI as a component that should be included


Major discussion point

AI Governance and Strategic Implementation


Topics

Legal and regulatory | Development | Infrastructure


Agreed with

– Mario Nobile
– Adham Abouzied

Agreed on

Need for comprehensive policy frameworks and governance structures


Investment in local capacity building using open curricula co-developed with global partners prevents reinventing the wheel

Explanation

Marco Zennaro emphasized the importance of building local expertise in embedded AI while leveraging existing open educational resources. This approach avoids duplication of effort and accelerates learning by using proven curricula developed collaboratively.


Evidence

Give priority to funding for curricula co-developed with global partners, not reinventing the wheel but using existing and open curricula such as the TinyML Academic Network which is completely open and can be reused


Major discussion point

Open Source and Collaborative Approaches


Topics

Development | Sociocultural | Infrastructure


Agreed with

– Adham Abouzied
– Mark Gachara

Agreed on

Necessity of open source and collaborative approaches for sustainable AI development


Regional collaboration and south-to-south knowledge sharing has proven extremely successful for TinyML applications

Explanation

Marco Zennaro highlighted the effectiveness of regional cooperation where countries with similar challenges work together on TinyML solutions. This approach recognizes that neighboring regions often face common issues that can be addressed through shared expertise.


Evidence

Activities tailored for specific regions with the idea that people from the same region have the same issues has been extremely successful; supporting south-to-south collaboration focusing on specific regions to use TinyML to solve common issues


Major discussion point

Open Source and Collaborative Approaches


Topics

Development | Sociocultural | Infrastructure


TinyML applications include disease detection in livestock, bee counting, anemia detection, and turtle behavior monitoring across 60+ universities in 32 countries

Explanation

Marco Zennaro provided concrete examples of TinyML applications that address real-world challenges across diverse sectors and geographic regions. These applications demonstrate the practical impact of low-power AI on sustainable development goals.


Evidence

Network created in 2020 with Harvard and Columbia Universities, now 60+ universities in 32 countries; applications include foot-and-mouth disease detection in cows in Zimbabwe, bee counting in Kenyan beehives, anemia detection through the eyes in Peruvian villages, and turtle behavior monitoring in Argentina


Major discussion point

Practical Applications and Use Cases


Topics

Development | Sustainable development | Sociocultural


Question whether super-wide models answering every question are needed versus specific models solving targeted issues

Explanation

Marco Zennaro challenged the assumption that large, general-purpose AI models are always necessary, suggesting that focused models designed for specific applications might be more appropriate. This approach aligns with sustainability goals and practical problem-solving needs.


Evidence

In TinyML, small devices can only have models which are very specific to an application like coffee leaves or turtles; questioning if we always need super wide models that can answer every question or focus on models that solve specific issues useful for SDGs or humanity


Major discussion point

Future Research and Development Directions


Topics

Development | Infrastructure | Sustainable development


Agreed with

– Mario Nobile
– Adham Abouzied
– Ioanna Ntinou
– Leona Verdadero

Agreed on

Need for smaller, more efficient AI models over large general-purpose models


Disagreed with

– Ioanna Ntinou
– Adham Abouzied

Disagreed on

Balance between large foundational models and small specialized models



Adham Abouzied

Speech speed

118 words per minute

Speech length

1219 words

Speech time

614 seconds

Open source solutions optimize energy costs by avoiding repetitive development, potentially saving trillions in licensing costs

Explanation

Adham Abouzied presented research showing that open source approaches significantly reduce both development costs and energy consumption by eliminating redundant work. The collaborative nature of open source creates exponential value compared to proprietary development.


Evidence

Harvard University study estimates recreating open source intellectual property would cost $4 billion, but if not open source and every player recreated or paid licenses, cost would increase to $8 trillion due to repetition


Major discussion point

Open Source and Collaborative Approaches


Topics

Economic | Development | Infrastructure


Agreed with

– Marco Zennaro
– Mark Gachara

Agreed on

Necessity of open source and collaborative approaches for sustainable AI development


System-level cooperation and data sharing across value chains is essential for meaningful vertical AI models

Explanation

Adham Abouzied argued that effective vertical AI models require collaboration and data sharing across entire industry value chains, not just individual companies. This systemic approach enables AI to create value across interconnected processes and decisions.


Evidence

AI breakthroughs came from training on wealth of internet data; for vertical impact, models need to see and train on data across value chains; energy sector example where cooperation and data exchange from generation to transmission to usage is needed but currently exists in few countries


Major discussion point

Open Source and Collaborative Approaches


Topics

Economic | Infrastructure | Legal and regulatory


Right policies and sharing protocols across industries are needed with proper regulatory frameworks and incentives

Explanation

Adham Abouzied emphasized the need for comprehensive governance structures that encourage data and intellectual property sharing while maintaining competitive advantages. This includes addressing regulatory barriers that prevent cross-border data flows and AI implementation.


Evidence

Need well-designed, well-enforced policies with carrot and stick; difficulties with current regulation including cloud governance, data classification, and restrictions on cross-border data travel in emerging countries without local hyperscalers


Major discussion point

AI Governance and Strategic Implementation


Topics

Legal and regulatory | Economic | Development


Agreed with

– Mario Nobile
– Marco Zennaro

Agreed on

Need for comprehensive policy frameworks and governance structures


AI optimization in energy sector from generation to transmission to usage requires cooperation and data exchange across value chains

Explanation

Adham Abouzied provided a specific example of how AI can help optimize energy systems while reducing its own consumption, but this requires unprecedented cooperation across the entire energy value chain. This approach transforms AI from an energy burden to an energy optimization tool.


Evidence

Energy sector example where AI can help optimize decisions from generation to transmission to usage, but currently very few countries have cooperation and data exchange across different steps of the value chain; this would allow AI to consume less energy while helping the energy sector optimize and become greener


Major discussion point

Practical Applications and Use Cases


Topics

Infrastructure | Development | Sustainable development


Agreed with

– Mario Nobile
– Marco Zennaro
– Ioanna Ntinou
– Leona Verdadero

Agreed on

Need for smaller, more efficient AI models over large general-purpose models



Ioanna Ntinou

Speech speed

152 words per minute

Speech length

951 words

Speech time

373 seconds

Knowledge distillation can reduce model parameters by 60% while maintaining accuracy, as demonstrated in energy grid forecasting

Explanation

Ioanna Ntinou described a practical technique where a large, accurate model teaches a smaller model to achieve similar performance with significantly fewer parameters. This approach maintains effectiveness while dramatically reducing computational requirements and energy consumption.


Evidence

Raido project example with Greek energy companies PPC and CERF: used knowledge distillation to create a student model from a teacher model for energy demand forecasting in smart homes, reduced parameters by 60% while maintaining close accuracy


Major discussion point

Energy-Efficient AI Technologies and Solutions


Topics

Infrastructure | Development | Economic


Agreed with

– Mario Nobile
– Marco Zennaro
– Adham Abouzied
– Leona Verdadero

Agreed on

Need for smaller, more efficient AI models over large general-purpose models


Need for transparency in energy reporting and evaluation standards at prompt level to develop awareness of AI energy usage

Explanation

Ioanna Ntinou argued that without knowing the energy cost of individual AI interactions, it’s impossible to make informed decisions about AI usage or develop appropriate legislation. She emphasized the need for standardized measurement and reporting of energy consumption.


Evidence

Models like GPT-4 or DeepSeek with 70 billion parameters consume massive amounts of electricity during training and inference, but we don’t know how much energy is used when putting a simple prompt to GPT; we need evaluation standards at the prompt level for awareness and transparency
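
Prompt-level reporting would enable exactly the kind of back-of-envelope arithmetic sketched below. Every parameter (GPU power draw, reply length, generation speed) is an illustrative assumption for the sketch, not a measured value for GPT-4 or any other real system.

```python
# Energy per prompt = power draw (W) x inference time (s), converted to
# watt-hours. All defaults are illustrative assumptions only.
def energy_per_prompt_wh(gpu_power_w=400.0, tokens=500, tokens_per_second=50):
    seconds = tokens / tokens_per_second   # time to generate the reply
    return gpu_power_w * seconds / 3600    # watt-seconds -> watt-hours

# Roughly 1.1 Wh per 500-token reply under these assumptions; multiplied
# by billions of prompts per day, the aggregate becomes significant.
print(f"{energy_per_prompt_wh():.2f} Wh")
```

Standardized per-prompt reporting would replace these guessed inputs with measured ones, which is the transparency gap the argument identifies.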


Major discussion point

Transparency and Measurement in AI Energy Consumption


Topics

Legal and regulatory | Development | Infrastructure


Agreed with

– Mark Gachara
– Moderator

Agreed on

Critical importance of transparency and measurement in AI energy consumption


Assessment of energy usage in widely used public models is essential before focusing on legislation

Explanation

Ioanna Ntinou emphasized that effective policy-making requires concrete data about energy consumption patterns in commonly used AI systems. Without this foundational knowledge, regulatory efforts lack the evidence base needed for effective governance.


Evidence

Simply assess how much energy is used in widely used public models; without having knowledge of how much energy is actually used, cannot focus on legislation


Major discussion point

Transparency and Measurement in AI Energy Consumption


Topics

Legal and regulatory | Development | Infrastructure


Sustainable AI will not emerge by default and needs active support and incentivization beyond just accuracy metrics

Explanation

Ioanna Ntinou pointed out that current AI development practices prioritize accuracy over energy efficiency, leading to unnecessarily resource-intensive models. She argued for deliberate intervention to change these incentive structures and promote more sustainable development practices.


Evidence

When training a model, developers opt for bigger models and more data because success is measured by accuracy; sometimes small improvement comes with huge increase in energy, need to be careful with this trade-off


Major discussion point

Transparency and Measurement in AI Energy Consumption


Topics

Development | Legal and regulatory | Economic


Smaller models deployed in smart homes for energy demand forecasting help companies make more accurate production decisions

Explanation

Ioanna Ntinou described how efficient AI models can provide practical benefits beyond just energy savings, including improved business decision-making and easier deployment to end-users. This demonstrates the multiple advantages of the smaller model approach.


Evidence

Optimized model for day-ahead forecasting of energy demand and supply in smart homes with microgrids; smaller model easier to deploy in small devices, democratizes access to AI, helps companies have more accurate forecast for energy production planning
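
As a toy illustration of day-ahead forecasting, each hour of tomorrow can be predicted as a blend of the same hour today and the recent daily average. The method and numbers below are invented for the sketch; the actual Raido model is not public.

```python
# Toy day-ahead forecast in the spirit of the smart-home use case:
# blend each observed hour with the daily average to get a smoothed
# prediction for the same hour tomorrow. Illustrative only.
def day_ahead_forecast(hourly_demand_kw, blend=0.7):
    """Forecast the next 24 hours from the last 24 observed hourly readings."""
    last_day = hourly_demand_kw[-24:]
    daily_avg = sum(last_day) / len(last_day)
    return [blend * h + (1 - blend) * daily_avg for h in last_day]

observed = [0.4] * 8 + [1.2] * 10 + [0.8] * 6   # 24 hourly readings in kW
forecast = day_ahead_forecast(observed)          # smoothed copy of today
```

A model small enough to express in a few lines like this can run on the microgrid controller itself, which is the deployment advantage the argument describes.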


Major discussion point

Practical Applications and Use Cases


Topics

Infrastructure | Economic | Development


Focus on smaller, task-specific models while not neglecting progress made with large language models

Explanation

Ioanna Ntinou acknowledged the tension between developing efficient small models and continuing to advance the field through large model research. She argued that both approaches have value and that small models offer scientific learning opportunities beyond just energy savings.


Evidence

Mobile phones use small task-specific models due to battery and processor constraints; there is value in terms of science and knowledge from pushing boundaries of research with small models, not just energy benefits


Major discussion point

Future Research and Development Directions


Topics

Development | Infrastructure | Economic


Disagreed with

– Marco Zennaro
– Adham Abouzied

Disagreed on

Balance between large foundational models and small specialized models



Mark Gachara

Speech speed

145 words per minute

Speech length

686 words

Speech time

282 seconds

Developers can optimize code energy consumption through tools like Code Carbon to measure environmental impact

Explanation

Mark Gachara highlighted Mozilla’s support for practical tools that help developers understand and reduce the environmental impact of their coding practices. This approach addresses sustainability at the fundamental level of software development.


Evidence

Mozilla grantee Code Carbon, from France, created an open source project asking ‘when I write code, how much energy am I using?’; since fossil fuels generate much of that energy, the tool lets developers optimize code before it harms the environment
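
Conceptually, a tracker of this kind times a block of code and converts the elapsed interval into energy and emissions. The real Code Carbon library reads hardware power counters and regional grid data; the stdlib sketch below substitutes an assumed constant power draw and grid intensity to show the principle.

```python
# Conceptual sketch of what an energy tracker like Code Carbon does:
# time a block of code, then convert the interval into kWh and kg CO2.
# The power draw and grid intensity are assumed constants here; the
# real library measures hardware and looks up regional grid data.
import time

class SimpleEnergyTracker:
    def __init__(self, power_watts=65.0, grid_kg_co2_per_kwh=0.4):
        self.power_watts = power_watts                  # assumed device draw
        self.grid_kg_co2_per_kwh = grid_kg_co2_per_kwh  # assumed grid mix

    def __enter__(self):
        self._t0 = time.perf_counter()
        return self

    def __exit__(self, *exc):
        elapsed = time.perf_counter() - self._t0        # seconds
        self.kwh = self.power_watts * elapsed / 3_600_000
        self.kg_co2 = self.kwh * self.grid_kg_co2_per_kwh

with SimpleEnergyTracker() as tracker:
    sum(i * i for i in range(1_000_000))  # the code being measured

print(f"{tracker.kwh:.9f} kWh, {tracker.kg_co2:.9f} kg CO2")
```

Seeing a concrete number per run is what lets a developer compare two implementations and pick the cheaper one before shipping.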


Major discussion point

Transparency and Measurement in AI Energy Consumption


Topics

Development | Infrastructure | Sustainable development


Agreed with

– Marco Zennaro
– Adham Abouzied

Agreed on

Necessity of open source and collaborative approaches for sustainable AI development


Civil society can use AI for ecological mapping and evidence generation to advocate against environmentally harmful projects

Explanation

Mark Gachara provided an example of how civil society organizations can leverage AI tools to generate scientific evidence for environmental advocacy. This demonstrates AI’s potential as a tool for environmental protection rather than just consumption.


Evidence

Center for Justice Governance and Environmental Action in Kilifi, Kenya used AI for ecological mapping of Indian Ocean area to oppose proposed nuclear reactor, generating report arguing for renewable energy focus since Kenya is net producer of renewable energy


Major discussion point

Practical Applications and Use Cases


Topics

Human rights | Development | Sustainable development


Support for grassroots organizers and indigenous communities to build localized climate solutions through targeted funding

Explanation

Mark Gachara emphasized that climate impacts are most severe in the Global South and among indigenous communities, so funding should support these groups in developing locally appropriate AI solutions. This approach recognizes that effective climate solutions must be community-driven and context-specific.


Evidence

Theater of most climate impact is in global south with farmers and indigenous communities; need funds supporting grassroots organizers and indigenous communities to build solutions together; problems are localized so need local solutions with these communities


Major discussion point

Funding and Policy Mechanisms


Topics

Development | Human rights | Sustainable development



Leona Verdadero

Speech speed

161 words per minute

Speech length

450 words

Speech time

167 seconds

UNESCO’s upcoming report “Smarter, Smaller, Stronger” demonstrates that optimized models can be smaller, more efficient, and better performing

Explanation

Leona Verdadero described UNESCO’s research partnership with University College London that challenges the assumption that bigger AI models are better. Their experiments show that optimization techniques can create models that are simultaneously smaller, more efficient, and better performing.


Evidence

Partnership with University College London doing experiments on optimizing AI inference space; energy efficient techniques like quantization and distillation make models smaller, more efficient, and better performing; makes AI more accessible in low resource settings with limited compute and infrastructure


Major discussion point

Future Research and Development Directions


Topics

Development | Infrastructure | Economic


Agreed with

– Mario Nobile
– Marco Zennaro
– Adham Abouzied
– Ioanna Ntinou

Agreed on

Need for smaller, more efficient AI models over large general-purpose models



Moderator

Speech speed

135 words per minute

Speech length

2419 words

Speech time

1069 seconds

AI governance, climate change, energy resources, and water scarcity are among the most important policy issues requiring integrated solutions

Explanation

The moderator highlighted that multiple global reports identify these interconnected challenges as critical issues of our time. The challenge lies in combining all these elements together to address the planetary crisis we are currently experiencing.


Evidence

Reports consistently identify governance of artificial intelligence, climate change, energy resources and water as the most important and threatening issues for humanity that appear across different policy reports


Major discussion point

AI Governance and Strategic Implementation


Topics

Legal and regulatory | Sustainable development | Development


AI presents both opportunities to address environmental risks and contributes to the problem through high energy and water consumption

Explanation

The moderator emphasized the paradoxical nature of AI technology – it can help solve climate and environmental challenges through modeling and analysis, but simultaneously contributes to these problems through its resource consumption. This creates a complex challenge of fostering opportunities while mitigating risks.


Evidence

AI models can help climate scientists, policymakers, and civil society solve complex environmental equations, but the same technology consumes significant energy and water resources


Major discussion point

Energy-Efficient AI Technologies and Solutions


Topics

Sustainable development | Infrastructure | Development


Public sector procurement policies can drive sustainable AI adoption since governments are the biggest single buyers in most countries

Explanation

The moderator noted that in 99% of UN member states, the public sector remains the largest single purchaser, making government procurement policies a powerful tool for promoting sustainable AI practices. Decent procurement policies alone can create significant impact in driving market demand for energy-efficient AI solutions.


Evidence

In 99% of UN member states, the public sector is still the biggest single buyer, making procurement policies a key lever for change


Major discussion point

Funding and Policy Mechanisms


Topics

Legal and regulatory | Economic | Development


Transparency in technology adoption generates evidence needed for effective policymaking, similar to energy efficiency standards in other sectors

Explanation

The moderator drew parallels between AI transparency and other technological revolutions, noting how transparency requirements in areas like building energy efficiency have enabled conscious decision-making. This transparency creates the evidence base necessary for informed policy development and public awareness.


Evidence

Example of energy efficiency passes for houses in Germany and EU that are obligatory, making energy consumption transparent so people can make conscious decisions; similar to carbon trading systems that force industry efficiency


Major discussion point

Transparency and Measurement in AI Energy Consumption


Topics

Legal and regulatory | Development | Infrastructure


Agreed with

– Ioanna Ntinou
– Mark Gachara

Agreed on

Critical importance of transparency and measurement in AI energy consumption



Audience

Speech speed

177 words per minute

Speech length

349 words

Speech time

118 seconds

Carbon trading mechanisms could potentially be applied to AI energy consumption to force industry toward more efficient models

Explanation

An audience member suggested that transparency in AI energy consumption could eventually lead to trading systems similar to carbon markets. This would create economic incentives for developing smaller, more efficient models by making energy consumption a tradeable commodity with associated costs.


Evidence

Reference to existing carbon trading systems that force industry to become more efficient through certificates on carbon emissions; questioning whether transparency on AI energy consumption could lead to trading rights to consume energy with AI technologies


Major discussion point

Funding and Policy Mechanisms


Topics

Economic | Legal and regulatory | Sustainable development


Science journalists should ask comprehensive questions about AI’s interconnected impacts on sustainability, job displacement, and societal awareness

Explanation

The audience member prompted discussion about what questions journalists should be asking about AI and sustainability issues. This highlights the need for media coverage that addresses the complex, interconnected nature of AI’s impacts rather than treating these issues in isolation.


Evidence

Question about what science journalists should be asking about AI sustainability issues, leading to responses about interconnected challenges of energy consumption, job losses, and public awareness


Major discussion point

Future Research and Development Directions


Topics

Sociocultural | Development | Economic


Agreements

Agreement points

Need for smaller, more efficient AI models over large general-purpose models

Speakers

– Mario Nobile
– Marco Zennaro
– Adham Abouzied
– Ioanna Ntinou
– Leona Verdadero

Arguments

Transition from large energy-consuming models to vertical and agile foundation models for specific purposes like health, transportation, and manufacturing


Question whether super-wide models answering every question are needed versus specific models solving targeted issues


AI optimization in energy sector from generation to transmission to usage requires cooperation and data exchange across value chains


Knowledge distillation can reduce model parameters by 60% while maintaining accuracy, as demonstrated in energy grid forecasting


UNESCO’s upcoming report “Smarter, Smaller, Stronger” demonstrates that optimized models can be smaller, more efficient, and better performing


Summary

All speakers agreed that the future of sustainable AI lies in developing smaller, task-specific models rather than continuing to scale up large general-purpose models. They emphasized that these smaller models can maintain effectiveness while dramatically reducing energy consumption.


Topics

Development | Infrastructure | Economic
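The knowledge-distillation technique cited above (a 60% parameter reduction with maintained accuracy) can be illustrated with a minimal sketch. The logits, temperature, and class count below are illustrative assumptions, not values from the session; real distillation operates on full training sets and combines this term with a standard task loss.

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw logits into a probability distribution, softened by temperature."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between softened teacher and student outputs.

    Minimizing this trains the small 'student' model to mimic the large
    'teacher' model, which is how a distilled model keeps accuracy with
    far fewer parameters.
    """
    p = softmax(teacher_logits, temperature)  # teacher's soft targets
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# Hypothetical logits for a 3-class task: the student is close to, but not
# identical with, the teacher, so the loss is small and positive.
teacher = [4.0, 1.0, 0.5]
student = [3.5, 1.2, 0.8]
loss = distillation_loss(teacher, student)
```

A student whose outputs match the teacher exactly would drive this loss to zero, which is the sense in which the large model acts as a "teacher".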


Critical importance of transparency and measurement in AI energy consumption

Speakers

– Ioanna Ntinou
– Mark Gachara
– Moderator

Arguments

Need for transparency in energy reporting and evaluation standards at prompt level to develop awareness of AI energy usage


Developers can optimize code energy consumption through tools like Code Carbon to measure environmental impact


Transparency in technology adoption generates evidence needed for effective policymaking, similar to energy efficiency standards in other sectors


Summary

Speakers unanimously agreed that without proper measurement and transparency of AI energy consumption, it’s impossible to make informed decisions about AI usage or develop appropriate legislation. They emphasized the need for standardized measurement tools and reporting mechanisms.


Topics

Legal and regulatory | Development | Infrastructure
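The prompt-level energy reporting the speakers called for can be sketched as a back-of-the-envelope calculation. All figures below (accelerator power draw, latency per prompt, grid carbon intensity) are hypothetical placeholders, not measurements from the session; tools such as Code Carbon automate this kind of accounting with real hardware readings.

```python
def prompt_energy_wh(power_draw_watts: float, latency_seconds: float) -> float:
    """Energy used to serve one prompt, in watt-hours: power x time."""
    return power_draw_watts * latency_seconds / 3600.0

def prompt_co2_grams(energy_wh: float, grid_gco2_per_kwh: float) -> float:
    """Convert watt-hours into grams of CO2 using a grid carbon-intensity factor."""
    return energy_wh / 1000.0 * grid_gco2_per_kwh

# Hypothetical example: a 400 W accelerator busy for 2 seconds per prompt,
# on a grid emitting 300 gCO2 per kWh.
energy = prompt_energy_wh(400, 2)        # about 0.222 Wh per prompt
co2 = prompt_co2_grams(energy, 300)      # about 0.067 g CO2 per prompt
```

Even a crude estimate like this makes the point raised in the discussion: without per-prompt transparency from model operators, users and legislators have no basis for such arithmetic at all.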


Necessity of open source and collaborative approaches for sustainable AI development

Speakers

– Marco Zennaro
– Adham Abouzied
– Mark Gachara

Arguments

Investment in local capacity building using open curricula co-developed with global partners prevents reinventing the wheel


Open source solutions optimize energy costs by avoiding repetitive development, potentially saving trillions in licensing costs


Developers can optimize code energy consumption through tools like Code Carbon to measure environmental impact


Summary

Speakers agreed that open source approaches are essential for sustainable AI development, as they prevent duplication of effort, reduce costs, and enable collaborative problem-solving while minimizing energy consumption through shared resources and knowledge.


Topics

Economic | Development | Infrastructure


Need for comprehensive policy frameworks and governance structures

Speakers

– Mario Nobile
– Adham Abouzied
– Marco Zennaro

Arguments

Need for guidelines on AI adoption, procurement, and development across 23,000 public administrations in Italy


Right policies and sharing protocols across industries are needed with proper regulatory frameworks and incentives


Integration of TinyML into national digital and innovation strategies is essential for comprehensive AI planning


Summary

All speakers emphasized the critical need for comprehensive governance frameworks that include proper policies, guidelines, and regulatory structures to support sustainable AI development and deployment across different sectors and scales.


Topics

Legal and regulatory | Economic | Development


Similar viewpoints

Both speakers emphasized the importance of supporting local and regional communities, particularly in the Global South, to develop context-appropriate AI solutions that address their specific challenges and needs.

Speakers

– Marco Zennaro
– Mark Gachara

Arguments

Regional collaboration and south-to-south knowledge sharing have proven extremely successful for TinyML applications


Support for grassroots organizers and indigenous communities to build localized climate solutions through targeted funding


Topics

Development | Human rights | Sustainable development


Both speakers advocated for innovative approaches to cooperation and procurement that move beyond traditional methods to enable better collaboration and data sharing across organizations and sectors.

Speakers

– Mario Nobile
– Adham Abouzied

Arguments

Open Innovation Framework enables new procurement approaches beyond traditional tenders for public administration


System-level cooperation and data sharing across value chains is essential for meaningful vertical AI models


Topics

Legal and regulatory | Economic | Development


Both speakers emphasized that sustainable AI development requires deliberate intervention and active promotion, challenging the current paradigm that prioritizes model size and accuracy over efficiency and sustainability.

Speakers

– Ioanna Ntinou
– Leona Verdadero

Arguments

Sustainable AI will not emerge by default and needs active support and incentivization beyond just accuracy metrics


UNESCO’s upcoming report “Smarter, Smaller, Stronger” demonstrates that optimized models can be smaller, more efficient, and better performing


Topics

Development | Legal and regulatory | Economic


Unexpected consensus

Integration of social and environmental concerns in AI development

Speakers

– Mario Nobile
– Mark Gachara
– Ioanna Ntinou

Arguments

Need to address job displacement alongside energy consumption as interconnected challenges requiring comprehensive solutions


Civil society can use AI for ecological mapping and evidence generation to advocate against environmentally harmful projects


Focus on smaller, task-specific models while not neglecting progress made with large language models


Explanation

Unexpectedly, speakers from different backgrounds (government regulator, civil society advocate, and technical researcher) all emphasized the need to consider AI’s social, environmental, and technical impacts as interconnected rather than separate issues. This holistic view was surprising given their different professional perspectives.


Topics

Economic | Development | Sociocultural


Practical implementation focus over theoretical discussions

Speakers

– Mario Nobile
– Marco Zennaro
– Ioanna Ntinou
– Mark Gachara

Arguments

Italy allocates 69 billion euros for ecological transition and 13 billion for business digitalization from National Recovery Plan


TinyML applications include disease detection in livestock, bee counting, anemia detection, and turtle behavior monitoring across 60+ universities in 32 countries


Smaller models deployed in smart homes for energy demand forecasting help companies make more accurate production decisions


Civil society can use AI for ecological mapping and evidence generation to advocate against environmentally harmful projects


Explanation

All speakers, regardless of their institutional background, focused heavily on concrete, practical applications and real-world implementations rather than theoretical discussions. This consensus on pragmatic approaches was unexpected in an academic/policy setting where abstract discussions often dominate.


Topics

Development | Infrastructure | Economic


Overall assessment

Summary

The speakers demonstrated remarkable consensus across multiple key areas: the need for smaller, more efficient AI models; the critical importance of transparency and measurement; the value of open source and collaborative approaches; and the necessity of comprehensive policy frameworks. They also shared unexpected agreement on integrating social and environmental concerns and focusing on practical implementation.


Consensus level

High level of consensus with strong implications for sustainable AI development. The agreement across speakers from different sectors (government, academia, civil society, private sector) suggests these principles have broad support and could form the foundation for coordinated action on sustainable AI. The consensus indicates a mature understanding of the challenges and a convergence toward practical, implementable solutions rather than competing approaches.


Differences

Different viewpoints

Balance between large foundational models and small specialized models

Speakers

– Ioanna Ntinou
– Marco Zennaro
– Adham Abouzied

Arguments

Focus on smaller, task-specific models while not neglecting progress made with large language models


Question whether super-wide models answering every question are needed versus specific models solving targeted issues


AI breakthroughs came from training on wealth of internet data; for vertical impact, models need to see and train on data across value chains


Summary

Ioanna expressed concern about neglecting progress from large language models while focusing on smaller ones, whereas Marco questioned the necessity of super-wide models, and Adham emphasized the value of large foundational models for breakthrough innovations while supporting vertical specialization


Topics

Development | Infrastructure | Economic


Unexpected differences

Scope of open source collaboration versus national strategic control

Speakers

– Mario Nobile
– Adham Abouzied
– Marco Zennaro

Arguments

Need for guidelines on AI adoption, procurement, and development across 23,000 public administrations in Italy


Open source solutions optimize energy costs by avoiding repetitive development, potentially saving trillions in licensing costs


Investment in local capacity building using open curricula co-developed with global partners prevents reinventing the wheel


Explanation

While all speakers supported collaboration, there was an unexpected tension between Mario’s emphasis on national strategic control and standardized guidelines versus Adham and Marco’s push for more open, collaborative approaches that transcend national boundaries


Topics

Legal and regulatory | Economic | Development


Overall assessment

Summary

The discussion showed remarkable consensus on core goals (energy efficiency, sustainability, accessibility) with disagreements primarily focused on implementation approaches and the balance between different technical strategies


Disagreement level

Low to moderate disagreement level. Most conflicts were constructive and focused on technical approaches rather than fundamental goals. The main tension was between preserving innovation from large models while promoting efficiency through smaller ones. This suggests a healthy debate that could lead to complementary rather than competing solutions, with implications for developing comprehensive AI sustainability policies that accommodate multiple technical approaches.




Takeaways

Key takeaways

AI governance requires a multi-pillar approach combining education, research, public administration, and enterprise engagement to ensure inclusive growth


Energy-efficient AI solutions like TinyML and model optimization can significantly reduce power consumption while maintaining performance – demonstrated by 60% parameter reduction with maintained accuracy


Open source approaches can save trillions in development costs by avoiding repetitive creation of AI solutions across organizations


Transparency in AI energy consumption reporting is essential before effective legislation can be developed – current lack of visibility into energy usage per prompt or model interaction


Smaller, task-specific AI models often provide better value than large general-purpose models for solving specific problems like healthcare, agriculture, and environmental monitoring


System-level cooperation and data sharing across value chains is necessary for vertical AI models to create meaningful impact in sectors like energy optimization


Civil society can leverage AI tools for evidence generation and advocacy, particularly for environmental justice issues in developing regions


Public procurement policies represent a powerful lever for promoting sustainable AI adoption given governments’ role as largest buyers


Resolutions and action items

UNESCO and University College London to launch ‘Smarter, Smaller, Stronger’ report on resource-efficient generative AI


Italy to implement guidelines for AI adoption, procurement, and development across 23,000 public administrations


Italy developing tax credit framework for small and medium enterprises to incentivize AI adoption


Promotion of open Innovation Framework to facilitate new procurement approaches beyond traditional tenders


Integration of TinyML into national digital and innovation strategies


Investment in local capacity building using open curricula co-developed with global partners


Funding for context-aware pilot projects in key development sectors


Development of evaluation standards for energy consumption at prompt level


Unresolved issues

How to balance the trade-off between AI model accuracy and energy consumption when accuracy remains the primary success metric


Regulatory challenges around data governance, cloud usage, and cross-border data transfer that limit AI implementation


Lack of transparency in energy consumption reporting for widely used public AI models like GPT-4


How to address job displacement concerns alongside energy consumption as interconnected challenges


Whether focusing on smaller models might neglect important progress made with large language models


How to ensure adequate funding reaches grassroots organizations and indigenous communities for localized climate solutions


Implementation of carbon trading mechanisms for AI energy consumption similar to other industries


Suggested compromises

Transition gradually from large energy-consuming models to smaller vertical models while maintaining research progress on both fronts


Use large models as ‘teachers’ through knowledge distillation to create smaller ‘student’ models that maintain performance


Implement hybrid approaches combining cloud and edge computing to optimize energy usage


Focus AI energy consumption on delivering real value within systems that support SDGs rather than general content generation


Develop task-specific models for mobile and resource-constrained environments while continuing research on larger models


Balance accuracy requirements with energy costs by questioning whether maximum accuracy is always necessary for specific use cases


Thought provoking comments

Now the debate is not humans versus machines. Now the debate is about those who understand and use AI versus those who don’t.

Speaker

Mario Nobile


Reason

This reframes the entire AI discourse from a fear-based narrative about AI replacing humans to a more nuanced understanding about digital literacy and AI competency. It shifts focus from existential concerns to practical skill development and education.


Impact

This comment established education as a central theme throughout the discussion. It influenced subsequent speakers to emphasize capacity building, local training, and the democratization of AI knowledge, moving the conversation from technical solutions to human empowerment.


The study basically estimates that if we would recreate only one time the wealth of open source intellectual property that is available today on the Internet, it would cost us $4 billion. But it does not stop there… you would increase this $4 billion of cost to $8 trillion just because of the repetition.

Speaker

Adham Abouzied


Reason

This provides concrete economic evidence for the value of open-source collaboration, transforming an abstract concept into tangible financial terms. It demonstrates how sharing and collaboration can create exponential value while reducing resource consumption.


Impact

This comment shifted the discussion from individual technical solutions to systemic collaboration. It influenced other panelists to emphasize knowledge sharing, regional cooperation, and the importance of breaking down silos between organizations and sectors.


Sustainable AI is not going to emerge by default. We need to incentivize and it needs to be actively supported… if we are measuring everything by accuracy, and sometimes we neglect the cost that comes with accuracy, we might consume way more energy than what is actually needed.

Speaker

Ioanna Ntinou


Reason

This challenges the fundamental assumption in AI development that bigger and more accurate is always better. It exposes the hidden costs of the current success metrics and calls for a paradigm shift in how we evaluate AI systems.


Impact

This comment introduced critical thinking about measurement frameworks and success criteria. It led to discussions about transparency in energy reporting and the need for new evaluation standards that balance performance with sustainability.


Do we always need these super wide models that can answer every question we have? Or is it better to focus on models that solve specific issues which are useful for, you know, SDGs or for humanity in general?

Speaker

Marco Zennaro


Reason

This question fundamentally challenges the current trajectory of AI development toward ever-larger general-purpose models. It connects technical decisions to broader humanitarian goals and sustainable development.


Impact

This question became a recurring theme that influenced multiple speakers to advocate for task-specific, smaller models. It helped establish the ‘small is beautiful’ philosophy that ran through the latter part of the discussion and connected technical choices to social impact.


The theatre where the most impact of climate is felt is in the Global South, and it would be a farmer, it would be indigenous and local communities… we need to foster funding strategies that could actually look into research and creating science that actually delivers these climate solutions with these local communities.

Speaker

Mark Gachara


Reason

This comment brings crucial equity and justice perspectives to the technical discussion, highlighting that those most affected by climate change are often least represented in AI development. It challenges the panel to consider who benefits from and who bears the costs of AI solutions.


Impact

This intervention grounded the entire discussion in real-world impact and social justice. It influenced the conversation to consider not just technical efficiency but also accessibility, local relevance, and community empowerment in AI solutions.


Overall assessment

These key comments fundamentally shaped the discussion by challenging conventional assumptions about AI development and success metrics. They moved the conversation from a purely technical focus to a more holistic view that encompasses education, collaboration, sustainability, and social justice. The comments created a progression from individual technical solutions to systemic thinking about governance, measurement, and community impact. Most significantly, they established a counter-narrative to the ‘bigger is better’ approach in AI, advocating instead for targeted, efficient, and socially conscious AI development that prioritizes real-world problem-solving over technical prowess.


Follow-up questions

How can we transition from brute force models to vertical and agile foundation models with specific purposes across different sectors like health, transportation, tourism, and manufacturing?

Speaker

Mario Nobile


Explanation

This is crucial for reducing energy consumption while maintaining AI effectiveness in specific domains, representing a key strategic shift in AI development approach.


How can we effectively break data silos and improve data quality to enable better AI applications in public administration?

Speaker

Mario Nobile


Explanation

Data quality and interoperability are fundamental challenges that need to be addressed for successful AI implementation in government services.


What are the specific energy consumption metrics at the prompt level for widely used public models like GPT-4?

Speaker

Ioanna Ntinou


Explanation

Without transparency in energy reporting at the individual query level, it’s impossible to make informed decisions about sustainable AI usage and develop appropriate legislation.


Can we develop a carbon trading system specifically for AI energy consumption to incentivize more efficient models?

Speaker

Jan Lublinski (Audience)


Explanation

This would create market-based incentives for developing energy-efficient AI systems, similar to existing carbon trading mechanisms in other industries.


Do we always need super wide models that can answer every question, or is it better to focus on models that solve specific issues useful for SDGs?

Speaker

Marco Zennaro


Explanation

This fundamental question challenges the current trend toward large general-purpose models and could reshape AI development priorities toward more sustainable, task-specific solutions.


How can we establish proper data governance and classification standards for cross-border AI applications, especially in emerging countries?

Speaker

Adham Abouzied


Explanation

Current regulations often prevent effective AI implementation due to unclear data governance rules, particularly affecting countries without local hyperscaler infrastructure.


What evaluation standards should be developed to assess energy consumption at the prompt level for AI models?

Speaker

Ioanna Ntinou


Explanation

Standardized evaluation methods are needed to create transparency and enable comparison of energy efficiency across different AI systems.


How can funding strategies be developed to support grassroots organizers and indigenous communities in building localized AI climate solutions?

Speaker

Mark Gachara


Explanation

Since climate impacts are most severe in the Global South and among indigenous communities, research funding should prioritize supporting these communities in developing their own solutions.


What are the optimal policies and sharing protocols needed across industries to accelerate adoption of open source AI solutions?

Speaker

Adham Abouzied


Explanation

System-level cooperation requires well-designed governance frameworks to incentivize data and intellectual property sharing while maintaining competitive advantages.


How can we balance the trade-off between model accuracy and energy consumption in AI development?

Speaker

Ioanna Ntinou


Explanation

Current success metrics focus primarily on accuracy, often neglecting the disproportionate energy costs of marginal improvements, requiring new evaluation frameworks.


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Lightning Talk #111 Universal Acceptance and IDN World Report 2025


Session at a glance

Summary

This lightning talk focused on Universal Acceptance and Internationalized Domain Names (IDNs), presented by Regina Filipova Fuchsova from the .eu registry and Esteve Sanz from the European Commission. The discussion centered on the importance of enabling internet users worldwide to navigate online entirely in their local languages, moving beyond ASCII-only domain names to support various scripts including Cyrillic, Greek, Chinese, and Arabic characters. Universal Acceptance refers to the principle that all domain names and email addresses should be treated equally by internet-enabled applications, devices, and systems, regardless of the script or characters used.


Sanz emphasized that multilingualism is fundamental to EU identity, with the bloc’s motto being “united in diversity” across 24 official languages. He highlighted that 88% of European country-code top-level domains (ccTLDs) technically support IDNs, yet adoption remains slow due to limited user awareness. The speakers presented findings from the freshly published IDN World Report 2025, which revealed concerning trends including flat or negative growth in IDN registrations, high market concentration among a few top-level domains, and persistent technical barriers.


The report found an estimated 4.4 million IDNs worldwide, with 70% under ccTLDs, but showed negative growth rates of nearly 1% for ccTLDs and 5.5% for generic TLDs. Technical challenges persist, with over half of registries not supporting Unicode email addresses and three-fourths not permitting Unicode symbols in contact emails. The main barriers to adoption were identified as technical compatibility issues, low public awareness, and entrenched habits of using ASCII characters. The presenters concluded with a call for action to share IDN stories and expand data collection to better understand and promote multilingual internet adoption.


Keypoints

**Major Discussion Points:**


– **Universal Acceptance and Internationalized Domain Names (IDNs)**: The core focus on enabling domain names and email addresses in non-ASCII scripts (like Cyrillic, Greek, Arabic, Chinese) to work seamlessly across all internet applications and systems, allowing users to navigate the internet entirely in their local languages.


– **European Union’s Multilingual Digital Policy**: Discussion of how the EU’s “united in diversity” motto translates into digital infrastructure, with 24 official languages requiring support, and how regulations like the Digital Services Act incorporate provisions for linguistic diversity online.


– **IDN World Report 2025 Findings**: Presentation of key research results showing concerning trends including negative growth rates for IDNs (-1% for ccTLDs, -5.5% for gTLDs), high market concentration (top 10 TLDs account for 75% of all IDNs), and significant technical barriers to adoption.


– **Technical and Adoption Barriers**: Identification of major obstacles including compatibility issues with browsers and email services, low public awareness, entrenched habits of using ASCII/English domains, and insufficient promotion by registries and registrars due to limited business cases.


– **Cultural Preservation and Digital Inclusion**: Emphasis on preserving linguistic heritage online, with examples like Sámi language and the concept that local language internet access could potentially reduce internet shutdowns by increasing user connection to digital spaces.


**Overall Purpose:**


The discussion aimed to raise awareness about Universal Acceptance and IDN implementation in Europe, present research findings on the current state of multilingual internet infrastructure, and advocate for better support of linguistic diversity online to ensure digital inclusion for all language communities.


**Overall Tone:**


The tone was professional and educational throughout, with speakers maintaining an informative and collaborative approach. The presenters demonstrated expertise while acknowledging challenges and limitations in their research. The tone remained constructive even when discussing concerning trends like negative growth rates, and concluded with an encouraging call-to-action for community involvement and cooperation.


Speakers

– **Regina Filipova Fuchsova**: Works for the .eu registry, involved in Universal Acceptance and IDN (Internationalized Domain Names) research and advocacy


– **Esteve Sanz**: Works for the European Commission, focuses on EU digital priorities and multilingual internet policies, from Catalonia region of Spain


– **Tapani Tarvainen**: From Electronic Frontier Finland, also a non-commercial stakeholder group member in ICANN, expertise in internet governance and minority language digital rights


Additional speakers:


None identified beyond the speakers listed above.


Full session report

# Universal Acceptance and Internationalized Domain Names Discussion


## Introduction and Context


This lightning talk session focused on Universal Acceptance and Internationalized Domain Names (IDNs), featuring presentations by Regina Filipova Fuchsova from the .eu registry and Esteve Sanz from the European Commission. The session centered on enabling internet users worldwide to navigate online in their local languages, moving beyond ASCII-only domain names to support diverse scripts including Cyrillic, Greek, Chinese, and Arabic characters. The discussion included questions from audience members, notably Tapani Tarvainen of Electronic Frontier Finland and ICANN’s non-commercial stakeholder group.


## Core Technical Concepts


### Universal Acceptance Framework


Regina Filipova Fuchsova explained that Universal Acceptance refers to the principle whereby all domain names and email addresses should be treated equally by internet-enabled applications, devices, and systems, regardless of the script or characters used. This framework enables domain names to utilize special characters and different scripts beyond ASCII. Unicode versions have to be converted into ASCII characters via so-called Punycode for DNS processing.
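The Unicode-to-ASCII conversion described above can be demonstrated with Python's built-in IDNA codec. Note this codec implements the older IDNA 2003 rules; production registries typically follow IDNA 2008 (e.g. via the third-party `idna` package), so treat this as an illustrative sketch.

```python
# Convert an internationalized domain name to its ASCII (Punycode) form,
# which is what the DNS actually resolves.
unicode_name = "bücher.eu"
ascii_name = unicode_name.encode("idna").decode("ascii")
print(ascii_name)   # xn--bcher-kva.eu

# And back again: decode the Punycode form to recover the Unicode original.
round_trip = ascii_name.encode("ascii").decode("idna")
print(round_trip)   # bücher.eu
```

The `xn--` prefix marks a Punycode-encoded label; applications that fail to perform this round-trip consistently are a common source of the Universal Acceptance gaps the report documents.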


However, significant technical barriers remain. Over half of registries do not support Unicode addresses in email servers, while three-fourths of registries do not permit Unicode symbols as contact emails in their registry database. These limitations represent fundamental obstacles to achieving Universal Acceptance.
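One reason the email gap is so fundamental: only the domain part of an address can be downgraded to Punycode; a non-ASCII local part (before the `@`) requires end-to-end EAI/SMTPUTF8 support. A minimal sketch of that distinction (`requires_eai` is an illustrative helper name, not a library API, and the addresses are hypothetical):

```python
def requires_eai(address: str) -> bool:
    """True if the address cannot be downgraded to classic ASCII SMTP.

    The domain part can always be converted to Punycode, but a non-ASCII
    local part (before the '@') needs EAI/SMTPUTF8 support end to end.
    Illustrative helper only, not a library API.
    """
    local, _, _domain = address.rpartition("@")
    try:
        local.encode("ascii")
        return False  # local part is ASCII; domain can be Punycode-encoded
    except UnicodeEncodeError:
        return True   # Unicode local part: EAI/SMTPUTF8 required

print(requires_eai("info@münchen.de"))  # False: only the domain is non-ASCII
print(requires_eai("büro@münchen.de"))  # True: Unicode before the '@'
```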


## European Union’s Multilingual Digital Policy


### Policy Foundation


Esteve Sanz emphasized that multilingualism forms the cornerstone of EU identity, with the bloc’s motto “united in diversity” encompassing 24 official languages. The European Commission requires all official EU content to be available in every official language, demonstrating institutional commitment to digital linguistic equality.


Sanz noted that the Digital Services Act includes provisions that incentivize platforms to promote linguistic diversity. He also highlighted that most European domains technically support IDNs, with 88% of European ccTLDs able to register names with local characters according to the latest statistics.


## IDN World Report 2025: Key Findings


### Market Statistics and Trends


Regina Filipova Fuchsova presented findings from the IDN World Report 2025, a recurring research project produced by EURid (the .eu registry) with partners including UNESCO since 2011. The report, which has evolved from PDF format to a website, revealed concerning trends in the global IDN landscape.


The report estimated 4.4 million IDNs worldwide, with 70% operating under country-code top-level domains (ccTLDs). However, the market shows negative growth rates, with almost minus 1% yearly growth for ccTLDs and minus 5.5% for generic TLDs.


The IDN market exhibits high concentration, with the top 10 TLDs accounting for 75% of all IDNs globally. For the .eu registry specifically, IDN adoption remains at approximately 1% of total registrations, despite technical support being available.


### Technical Implementation Status


The report identified that 67% of Whois services display both Unicode and Punycode versions of domain names. However, significant gaps remain in email infrastructure support, with many registries unable to handle Unicode addresses or contact emails.
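A service that accepts a domain in either form and displays both can normalise it with one encode/decode round trip: Python's stdlib `idna` codec passes ASCII A-labels through unchanged and encodes U-labels. A sketch under the same IDNA 2003 caveat as above (`both_forms` is an illustrative name; `münchen.de` is an example domain, not one from the report):

```python
def both_forms(name: str) -> tuple[str, str]:
    """Return (unicode_form, punycode_form) for a domain given in either form.

    Encoding with the stdlib "idna" codec leaves ASCII A-labels unchanged
    and encodes U-labels, so one round trip normalises both kinds of input.
    Illustrative sketch only; the stdlib codec implements IDNA 2003.
    """
    ascii_form = name.encode("idna").decode("ascii")
    unicode_form = ascii_form.encode("ascii").decode("idna")
    return unicode_form, ascii_form

# Either input yields the same (Unicode, Punycode) pair:
print(both_forms("münchen.de"))
print(both_forms("xn--mnchen-3ya.de"))
```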


## Barriers to IDN Adoption


Three primary barriers hinder IDN adoption:


– Technical compatibility issues with browsers and email services


– Low public awareness of IDN capabilities


– Entrenched habits favoring ASCII/English domains


Additionally, insufficient promotion by registries and registrars, often due to unclear business cases, contributes to low adoption rates.


## Cultural Preservation and Minority Languages


### Indigenous Language Concerns


The discussion emphasized multilingual internet access as crucial for digital inclusion and cultural heritage preservation. Tapani Tarvainen raised specific concerns about minority language support, referencing the Sámi song performed at Monday’s opening ceremony at the town hall and questioning ICANN’s policies regarding IDN top-level domains.


Tarvainen highlighted that small minority languages face particular challenges, noting that characters like the Sámi letter Eng may become less usable due to lack of advocacy, observing that “nobody out there cares” about very small minority languages.


Regina Filipova Fuchsova acknowledged that IDN variant tables, last updated in 2008, require updating to better support special characters used in minority languages.


### Digital Rights Perspective


Esteve Sanz shared an interesting perspective he had heard from others about the connection between local language internet access and internet shutdowns. He explained: “We’re very worried about Internet shutdowns… And very intelligent people that know the local characteristics of these shutdowns, they tell us that if the Internet was in our local language, we would not see a shutdown. Because people would really feel the Internet as themselves, and politicians could not perform that shutdown.”


## Call to Action


Regina Filipova Fuchsova made specific requests for expanding IDN research and awareness:


– More TLDs to participate in IDN World Report data collection


– Communities to share their IDN stories and case studies for publication on the IDN World Report website


– Help reaching out to more ccTLDs to participate in annual research surveys


She also committed to follow-up discussion on Sámi language character support in IDN policies, addressing the minority language concerns raised during the session.


## Conclusion


The discussion highlighted both the technical capabilities and practical challenges of implementing Universal Acceptance and IDN adoption. While strong policy support exists, particularly within the EU framework, significant barriers remain in technical implementation, commercial incentives, and user awareness. The specific concerns raised about minority language support demonstrate how technical governance decisions can impact linguistic diversity and cultural preservation online.


Moving forward, bridging the gap between policy intentions and practical implementation will require coordinated efforts across technical, commercial, and advocacy domains to ensure that Universal Acceptance principles benefit all language communities.


Session transcript

Regina Filipova Fuchsova: Good morning, ladies and gentlemen. Welcome to this lightning talk on Universal Acceptance and the IDN World Report. My name is Regina Fuchsova, I work for the .eu registry and together with Esteve Sanz from the European Commission, we will try our best to put this topic in the European context and also provide you with some exclusive insights in the freshly published IDN World Report 2025. Before we dive into the world of IDNs and Universal Acceptance, let me just shortly put us in the place where we speak about in the context of Internet infrastructure. You can see an example of a web address leading to also quite freshly published report on the digital decade by the European Commission. And if we start from the right hand side, you can see .eu, which we refer to as top level domain, and to the left hand side, Europa. This Europa is referred to as second level domain. Together, Europa.eu is what we commonly refer to as a domain name. And this is also something what you read as the ccTLD registries. The top level domain doesn't need to have only two letters. We have .org, .net, or maybe you saw around a booth with, for example, .post, one of the new gTLDs. To have it a bit more complicated even, this domain name doesn't need to be in ASCII codes only. It can have special characters, such as this ø in Lillestrøm, or it can be in a completely different script, Chinese, Arabic, or if we speak about the official European Union languages, Cyrillic or Greek. Another example you can see on the slide is a favorite salty snack, soleti.eu in a Bulgarian language. This soleti.eu in Cyrillic is referred to as the Unicode version of the domain name. But for the DNS system to process it, it has to be transferred via so-called Punycode to ASCII characters, and you can see that it starts with xn and two dashes. This will be important for some of the findings of the report later on. In any case, IDNs are necessary for multilingual Internet.
The concept of universal acceptance goes a bit further. It says that all domain names and also email addresses should be treated equally and used by all Internet-enabled applications, devices, and systems, regardless of the script or characters. The point is that users around the world should be able to navigate the Internet entirely in their local language. It has a big relevance to IGF and the WSIS. For example, on Tuesday in one of the sessions, it was among the two most voted WSIS-related initiatives. Universal acceptance was included, under Internet infrastructure, among the initiatives which need more support. It was considered as the way how to attract the next billion of Internet users, and throughout this week, you might have repeatedly heard in the sessions the words like linguistic accessibility, digital inclusion, part of a concept that nobody is left behind, both in Internet governance and usage of the Internet, and it has also a strong link to sustainable development goals. Do you remember this lady from our Monday opening ceremony at the town hall? Just to illustrate that multilingualism has a face, we enjoyed a very beautiful song sung in a Sámi language. It's very important for Norway because most of the Sámi population lives in Norway. It's actually the only indigenous peoples in the EU, and I bet that you would agree with me that even for enjoying songs in English, it's a very nice experience. It would be a big pity not to be able to enjoy the one which we heard on Monday. And since the offline and online world are more and more interconnected, then it's just a logical thing to try our best as much as the technical equipment allows us to preserve the cultural heritage also in the online world. Now I would like to give the floor to Esteve to tell us how this concept is in line with the European digital priorities.


Esteve Sanz: Thank you so much, Regina. Thank you so much for hosting this lightning talk. I'm really looking forward to hearing more about the report, which I don't know about yet, so that will be a premiere for me as well. The EU's DNA, really, our identity, is based on multilingualism. Our motto is united in diversity. We have 24 official languages in the EU. There are many more languages, and this multilingual priority of the EU carries, of course, into the digital realm. EU policy and funding strongly support online multilingualism in many different aspects, in universal acceptance, but also in other regulations that I will talk about in a minute. And we require, of course, that all official EU content be in every language, official language of the EU, and the capacity for everyone intervening in the policy process to exercise that right in that language. This is the DNA, really, of EU multilingualism, and we are very proud to also support universal acceptance as a core value. As Regina was explaining, universal acceptance means all domain names and email addresses work everywhere online, whether in Latin, Greek, Cyrillic, or any script. This is crucial for Europe, where scripts and accented characters vary, so that French accents, Greek or Cyrillic languages work as smoothly as example.com. Today, 68% of domain registries say universal acceptance is the top factor for boosting multilingual domains. We need every app, browser, and form to accept all EU languages. Most European domains technically support IDNs, 88% of European ccTLDs, according to the latest statistics, can register names with local characters, yet uptake is slow. User awareness of universal acceptance is normally, what we are told, the limiting factor that is blocking that, and that's why we're happy to have this lightning talk to address that awareness. There are interesting, successful stories in the multilingual internet space in the EU.
I'm from Catalonia, a region of Spain, and Catalan has built an extremely strong online presence. There is a dedicated .cat domain with 100,000 sites, and today 66% of websites in Catalonia have a Catalan version, up from only 39% in 2002. We have successful examples in Irish Gaelic, which went from limited use in the EU to a full EU language, and then this also boosts online presence. There is a lot of coordination with member states. Many countries are championing their language online, whether Spain is supporting co-official languages like Catalan and Basque, or Baltic countries developing AI for their languages. The EU complements this funding. For example, we have the European language data space, but also injects in the new rules that we have this multilingual element. It's not well known, but the Digital Services Act, which is our flagship when it comes to digital regulation, has several provisions that incentivize platforms to actually promote linguistic diversity. So, you know, the situation is that the policies are there, the political intentions are there. We need to bridge that gap. This is extremely important for the EU. It's also extremely important, if I have to say, for things that we normally don't think about. So we've engaged a lot with Global South players these days in the IGF, as we constantly do. And we heard one very interesting idea that I just wanted to put to you. We're very worried about Internet shutdowns, that, you know, despite all these commitments, they continue to be on the rise. And very intelligent people that know the local characteristics of these shutdowns, they tell us that if the Internet was in our local language, we would not see a shutdown. Because people would really feel the Internet as themselves, and politicians could not perform that shutdown. I thought that was a very powerful idea.


Regina Filipova Fuchsova: It's a very interesting aspect. Thank you. Maybe we can consider it for the future to have a look in the study in this aspect. It actually hasn't come to our mind. Thank you very much. Thank you very much, Esteve, for your explanation. So, in the remaining time, I would like to show you the findings of the report. Just briefly, why EURid cares at all, and who we are. I mentioned that we are running .eu, a top-level domain. We are already since 2005 by the appointment of the European Commission, and from the very beginning, we are committed to provide our support in all official EU languages. Throughout the years, we have introduced first IDNs under .eu, then also the Cyrillic version of the top-level domain, and then also the Greek version. In total, we account for 3.7 million of registrations, and about 1% out of them are IDNs. This might seem really low, 1%, but it's actually in line with what other ccTLDs are experiencing. Also taking into account the technical and other issues, the uptake is not as it could be. And it's a part of our strategy to bring European values to open, safe, and inclusive Internet. So this corresponds together. The IDN World Report already has a history since 2011, when we started it at that time in a PDF printed form together with UNESCO and other partners. We have got more partners over the years. The aim is to enhance the linguistic diversity through advocacy of internationalized domain names and their usage. This report, which is now in the form of a website, you can see on the right-hand side idnworldreport.eu brings news, registration trends, also information on technical readiness, and also we have added quite recently a kind of sentiment analysis. The outcomes of the 2025 report can be on a very high level grouped in three main topics. One is that many TLDs are experiencing flat or even negative growth trends. Second, the IDN market is highly concentrated, so to say in the hands of a few TLDs.
And third, there are still challenges in user awareness and adoption. Some illustration of this, and I have to say at the beginning that the study has its limitation by the number of TLDs which provide us with the data. So this is also a call for action. Whoever from you can address in your communities people and organizations dealing with IDNs, please direct us to our study. So based on the data set, we had over 400 TLDs. There is an estimated 4.4 million of IDNs worldwide, and roughly 70% are under country code TLDs. The growth is indeed negative, almost minus 1% for yearly growth for ccTLDs, and even minus 5.5% for gTLDs. The top 10 TLDs, which is in line with the concentration I mentioned, account for over 3.3 million of domain names, so 75% more or less of all the IDNs. And the top 5 IDNs are in the Cyrillic version of the Russian top-level domain, .com, .de, and the 4th and 5th place belongs to Chinese registrations. We are also following technical aspects of IDN implementation and found out that over half of the registries do not support Unicode addresses in email servers at all. Unicode was with soleti.eu in the Cyrillic script, so the form which is readable for humans. Three-fourths of the registries do not permit Unicode symbols as contact emails in their registry database, and actually none of the ccTLDs stated that they would offer support to EAI, which stands for email addresses in internationalized form, so it means that they have this Unicode notation also in the email addresses, also before the at. And the last piece of information in these slides refers to the display of IDNs in this form of Unicode and also the Punycode xn-- notation I was mentioning. When it comes to Whois services, the registries display both versions in 67% of cases and slightly lower rates are for registry database interfaces and user interface displays.
So only on this information you can already see why the uptake is not as it could be, because if you cannot use the domain name and the connected email address in all the services you are used to, not only connected to registration, but also to the services by big platforms, for example, then it makes less sense. And then also the users who otherwise cannot access internet because of the language barriers are disadvantaged. We also asked how the ccTLDs or TLDs in general, which answered the survey, see the main barriers to adoption. And we identified three main fields. One is technical issues, the compatibility with browser and email services. Second, low public awareness. And also the habit to use ASCII, like English, is so rooted with the users that it's very difficult to overcome this. One of the case studies we have on our IDN World Report website refers to the introduction of a top-level domain in Hebrew. And even though it was also extremely politically important for the country to introduce this, they reported back that it was extremely difficult to bring to the users that there is this possibility and that it can work in the domains in Hebrew. And the third barrier identified is also a modest level of promotion, both from registries and registrars, which is actually quite understandable. With the registries, it's a bit better because they have, especially ccTLDs, in their DNA embedded support to local communities. But for registrars who are commercial companies, if there is not a business case, it's difficult for them to find the resources for the support. This is to illustrate what you can find in the report. We have listed all the results, so you can see that indeed the majority voted for low awareness. Then also one of the sentiment questions was how organizations promote IDN. The most voted for marketing and also presence in events. These complete answers are there for all the questions we have asked.
Shortly to the sources and methodology for ccTLD data, we are cooperating, apart from the forum, with CENTR and other regional organizations in Asia, Latin America, also Africa. And we also take the gTLD data from Mark Datysgeld and also from Zooknic. The mechanism or the season is that in January, the research team circulates two forms. One is about the IDN implementation in technical terms, and one is the sentiment survey. And then it's complemented with the data, registration data, available via the regional registries organizations. And then also, as much as it is possible, the research team follows up with the ccTLDs to get a better understanding for the data. As our time is almost expired, and I still would like to give a space if there is some question, I will just refer you to also the recent articles and case studies in the website, including our colleagues from .cat, which have a very interesting way to promote and they put a lot of effort there. And also, I think the presentation will be available in the recording of the session, so some useful links. There is a lot of information on the ICANN website. There is also a recently formed CODI group. There were some changes in the universal acceptance steering group at ICANN, so you can have a look for further information. So I would like to finish with a call for action. If you have your IDN story from your country, please share it with us. We will be happy to publish it. Also, if you will be able to reach out so that we can address more ccTLDs, that would be very helpful. So thank you very much for your attention, and if you have a question to Esteve or to me, we still have space to answer it. Yes? I think, can I ask you? Thank you.


Tapani Tarvainen: Okay. Tapani Tarvainen from Electronic Frontier Finland, and also a non-commercial stakeholder group in ICANN. Have you been following the use of IDN top-level domains and ICANN's policies in that regard? You mentioned Sámi, and in particular, there is a policy going on that looks like the Sámi letter Eng will be much less usable than some other special characters because nobody out there cares, basically. So that these very small minority languages and their characters would need someone to speak for them.


Regina Filipova Fuchsova: It's a very good point, because there are also languages which cannot be put in written form, of course, and still deserve preservation. But since you mentioned Sámi, actually, I was in a session. It was in Tampere, so what, two years ago? Sandra can help me with EuroDIG in Tampere, two years ago. And there was a session, and there was a young lady researcher who did research for the Sámi language. It was herself, I think not a Mavratan, but her family's Mavratan. And there was obviously some movement of young people to preserve the language and raise awareness. So even though it will not become widely spread, I think especially via the young people, if they are interested in preserving this heritage online, we, as well, I can speak for the technical community, I think are obliged just to do our part of the work to enable this. But otherwise, you are right, some languages are even difficult to bring over to written form. So the content, actually, it starts and finishes with the content, of course, available. Do we have the mic already switched off?


Tapani Tarvainen: Okay, just to follow up. Sámi does not have a problem of not being able to write it. It just uses a bunch of special characters, and these are not well supported. That was the point.


Regina Filipova Fuchsova: But it's a technical thing, right, to include them in the next version of the IDN variant tables? Okay, we can talk about it. We had the last one in 2008, so high time to update. But thank you very much. Thank you very much for your attention. Thank you, Esteve, for your contribution. And please reach out to us. There is an email address. We will be happy to share and cooperate. Thank you.


R

Regina Filipova Fuchsova

Speech speed

121 words per minute

Speech length

2263 words

Speech time

1117 seconds

Domain names can use special characters and different scripts beyond ASCII, requiring conversion through Punycode for DNS processing

Explanation

Domain names don't need to be limited to ASCII characters and can include special characters like accented letters or be written in completely different scripts such as Chinese, Arabic, Cyrillic, or Greek. For the DNS system to process these internationalized domain names, they must be converted through Punycode to ASCII characters, which creates a technical version starting with 'xn--'.


Evidence

Examples provided include Lillestrøm with the special ø character, soleti.eu in Bulgarian Cyrillic script, and the technical Punycode conversion showing the xn-- prefix


Major discussion point

Technical infrastructure requirements for multilingual internet


Topics

Infrastructure | Multilingualism


Universal acceptance means all domain names and email addresses should work equally across all internet applications regardless of script or characters

Explanation

The concept of universal acceptance requires that all domain names and email addresses be treated equally and function properly in all internet-enabled applications, devices, and systems, no matter what script or characters they use. This enables users worldwide to navigate the internet entirely in their local language, supporting digital inclusion and ensuring nobody is left behind.


Evidence

Connection to WSIS initiatives where universal acceptance was among the two most voted initiatives, described as a way to attract the next billion internet users, and linked to sustainable development goals


Major discussion point

Digital inclusion and multilingual internet access


Topics

Infrastructure | Multilingualism | Development


Over half of registries don’t support Unicode addresses in email servers, and three-fourths don’t permit Unicode symbols as contact emails

Explanation

Technical implementation of IDN support is incomplete across registries, with significant gaps in email functionality. More than half of registries cannot handle Unicode addresses in email servers, and three-quarters don’t allow Unicode symbols in contact emails in their registry databases.


Evidence

Specific statistics: over 50% don’t support Unicode in email servers, 75% don’t permit Unicode in contact emails, and none of the ccTLDs offer support for internationalized email addresses (EAI)


Major discussion point

Technical barriers to IDN adoption


Topics

Infrastructure | Multilingualism


Agreed with

– Esteve Sanz

Agreed on

Technical barriers significantly hinder IDN adoption and universal acceptance


Many top-level domains are experiencing flat or negative growth trends, with ccTLDs showing -1% yearly growth and gTLDs showing -5.5%

Explanation

The IDN market is experiencing declining growth rates rather than expansion. Country code top-level domains show a negative 1% yearly growth while generic top-level domains show an even steeper decline at negative 5.5% yearly growth.


Evidence

Data from 2025 IDN World Report based on over 400 TLDs, showing estimated 4.4 million IDNs worldwide with 70% under ccTLDs


Major discussion point

Declining adoption rates of internationalized domain names


Topics

Infrastructure | Multilingualism


The IDN market is highly concentrated, with top 10 TLDs accounting for 75% of all 4.4 million IDNs worldwide

Explanation

IDN registrations are dominated by a small number of top-level domains rather than being distributed broadly. The top 10 TLDs control over 3.3 million domain names, representing approximately 75% of all internationalized domain names globally.


Evidence

Top 5 IDNs include Cyrillic version of Russian TLD, .com, .de, and Chinese registrations in 4th and 5th places


Major discussion point

Market concentration in IDN space


Topics

Infrastructure | Multilingualism


Main barriers to IDN adoption include technical compatibility issues, low public awareness, and entrenched habits of using ASCII domains

Explanation

Three primary obstacles prevent wider IDN adoption: technical problems with browser and email service compatibility, insufficient public knowledge about IDN availability and functionality, and deeply rooted user habits of using English/ASCII domain names. These barriers create a cycle where even when IDN support exists, users don’t adopt it.


Evidence

Survey results from ccTLDs and case study of Hebrew TLD introduction showing difficulty in user adoption despite political importance; registrar reluctance due to lack of business case


Major discussion point

Barriers preventing IDN adoption


Topics

Infrastructure | Multilingualism | Development


Agreed with

– Esteve Sanz

Agreed on

Technical barriers significantly hinder IDN adoption and universal acceptance


Multilingual internet access is crucial for digital inclusion and preserving cultural heritage online, as illustrated by indigenous languages like Sámi

Explanation

Supporting multilingual internet access is essential for ensuring digital inclusion and maintaining cultural heritage in the online world. Indigenous and minority languages like Sámi represent important cultural assets that deserve preservation and accessibility online, just as they enrich offline cultural experiences.


Evidence

Example of Sámi song performed at IGF opening ceremony, noting Sámi as the only indigenous people in the EU with most population in Norway; connection between offline and online world preservation


Major discussion point

Cultural preservation and digital inclusion


Topics

Multilingualism | Cultural diversity | Development


Agreed with

– Tapani Tarvainen

Agreed on

Need for advocacy and support for minority languages in technical policies


E

Esteve Sanz

Speech speed

132 words per minute

Speech length

630 words

Speech time

284 seconds

EU’s identity is fundamentally based on multilingualism with 24 official languages, and this priority extends into the digital realm

Explanation

The European Union’s core identity and motto ‘united in diversity’ is built on multilingualism, with 24 official languages representing the foundation of EU values. This multilingual priority naturally carries over into digital policy and internet governance, making universal acceptance a core EU value.


Evidence

EU motto ‘united in diversity’, 24 official languages, and requirement that all official EU content be available in every official language


Major discussion point

EU multilingual identity and digital policy


Topics

Multilingualism | Cultural diversity


Agreed with

– Regina Filipova Fuchsova

Agreed on

Multilingualism as fundamental to digital inclusion and cultural preservation


EU policy and funding strongly support online multilingualism, requiring all official EU content to be available in every official language

Explanation

The EU has established comprehensive policies and funding mechanisms to promote multilingualism online through various regulations and initiatives. All official EU content must be provided in every official language, and individuals have the right to participate in policy processes using their preferred official language.


Evidence

European language data space funding, 88% of European ccTLDs can register names with local characters, and 68% of domain registries identify universal acceptance as top factor for boosting multilingual domains


Major discussion point

EU policy framework for multilingual internet


Topics

Multilingualism | Legal and regulatory


The Digital Services Act includes provisions that incentivize platforms to promote linguistic diversity

Explanation

The EU’s flagship digital regulation, the Digital Services Act, contains specific provisions designed to encourage platforms to actively promote and support linguistic diversity. This represents a regulatory approach to ensuring multilingual internet access beyond just technical capabilities.


Evidence

Reference to Digital Services Act as flagship digital regulation with linguistic diversity provisions


Major discussion point

Regulatory support for multilingual platforms


Topics

Legal and regulatory | Multilingualism


Local language internet could potentially reduce internet shutdowns because people would feel more ownership of the internet in their native language

Explanation

An innovative perspective suggests that if internet content and infrastructure were more available in local languages, people would develop stronger personal connections to the internet, making it politically difficult for governments to implement shutdowns. When people feel the internet truly belongs to them through their native language, politicians would face greater resistance to restricting access.


Evidence

Insights from Global South players at IGF discussing the relationship between local language internet and resistance to shutdowns


Major discussion point

Connection between linguistic ownership and internet freedom


Topics

Multilingualism | Human rights principles | Freedom of expression


T

Tapani Tarvainen

Speech speed

155 words per minute

Speech length

119 words

Speech time

46 seconds

Small minority languages face particular challenges with special character support in IDN policies and need advocacy

Explanation

Very small minority languages and their unique characters require dedicated advocacy because they often lack sufficient representation in technical policy discussions. The example of Sámi language shows how specific characters like the Sámi letter Eng may become less usable than other special characters simply due to lack of attention and support.


Evidence

Specific example of Sámi letter Eng being less supported than other special characters in ICANN policies, and the general principle that small minority languages need someone to speak for them


Major discussion point

Advocacy needs for minority language technical support


Topics

Multilingualism | Cultural diversity | Infrastructure


Agreed with

– Regina Filipova Fuchsova

Agreed on

Need for advocacy and support for minority languages in technical policies


Agreements

Agreement points

Multilingualism as fundamental to digital inclusion and cultural preservation

Speakers

– Regina Filipova Fuchsova
– Esteve Sanz

Arguments

Multilingual internet access is crucial for digital inclusion and preserving cultural heritage online, as illustrated by indigenous languages like Sámi


EU’s identity is fundamentally based on multilingualism with 24 official languages, and this priority extends into the digital realm


Summary

Both speakers strongly emphasize that multilingualism is essential for digital inclusion and maintaining cultural diversity online, with specific focus on preserving indigenous and minority languages in the digital space


Topics

Multilingualism | Cultural diversity | Development


Technical barriers significantly hinder IDN adoption and universal acceptance

Speakers

– Regina Filipova Fuchsova
– Esteve Sanz

Arguments

Over half of registries don’t support Unicode addresses in email servers, and three-fourths don’t permit Unicode symbols as contact emails


Main barriers to IDN adoption include technical compatibility issues, low public awareness, and entrenched habits of using ASCII domains


Summary

Both speakers acknowledge that technical implementation gaps and compatibility issues are major obstacles preventing widespread adoption of internationalized domain names


Topics

Infrastructure | Multilingualism


Need for advocacy and support for minority languages in technical policies

Speakers

– Regina Filipova Fuchsova
– Tapani Tarvainen

Arguments

Multilingual internet access is crucial for digital inclusion and preserving cultural heritage online, as illustrated by indigenous languages like Sámi


Small minority languages face particular challenges with special character support in IDN policies and need advocacy


Summary

Both speakers recognize that minority and indigenous languages require dedicated advocacy and technical support to ensure their preservation and accessibility online


Topics

Multilingualism | Cultural diversity | Infrastructure


Similar viewpoints

Both speakers advocate for comprehensive universal acceptance where all languages and scripts should be equally supported across internet infrastructure and applications

Speakers

– Regina Filipova Fuchsova
– Esteve Sanz

Arguments

Universal acceptance means all domain names and email addresses should work equally across all internet applications regardless of script or characters


EU policy and funding strongly support online multilingualism, requiring all official EU content to be available in every official language


Topics

Infrastructure | Multilingualism | Legal and regulatory


Both speakers recognize the need for broader distribution and support of multilingual internet infrastructure beyond current concentrated markets

Speakers

– Regina Filipova Fuchsova
– Esteve Sanz

Arguments

The IDN market is highly concentrated, with top 10 TLDs accounting for 75% of all 4.4 million IDNs worldwide


EU policy and funding strongly support online multilingualism, requiring all official EU content to be available in every official language


Topics

Infrastructure | Multilingualism


Unexpected consensus

Connection between local language internet and resistance to internet shutdowns

Speakers

– Esteve Sanz
– Regina Filipova Fuchsova

Arguments

Local language internet could potentially reduce internet shutdowns because people would feel more ownership of the internet in their native language


Explanation

This represents an unexpected and innovative connection between multilingual internet access and internet freedom/human rights, suggesting that linguistic ownership could serve as a protective factor against government censorship


Topics

Multilingualism | Human rights principles | Freedom of expression


Overall assessment

Summary

Strong consensus exists among all speakers on the fundamental importance of multilingual internet access for digital inclusion, cultural preservation, and universal acceptance. All speakers agree on the technical barriers hindering IDN adoption and the need for advocacy for minority languages.


Consensus level

High level of consensus with complementary perspectives rather than disagreement. The speakers represent different stakeholder groups (registry operator, policy maker, civil society) but share aligned views on core issues. This strong consensus suggests potential for coordinated action on universal acceptance and IDN promotion, though implementation challenges remain significant due to technical and awareness barriers identified by all speakers.


Differences

Different viewpoints

Unexpected differences

Overall assessment

Summary

This discussion showed remarkably high consensus among speakers, with no direct disagreements identified. All speakers shared a common vision of multilingual internet access and universal acceptance.


Disagreement level

Very low disagreement level. The discussion was characterized by complementary perspectives rather than conflicting viewpoints. Regina provided technical and registry perspectives, Esteve contributed EU policy context, and Tapani raised specific advocacy concerns for minority languages. This high level of agreement suggests strong consensus within the technical and policy communities about the importance of multilingual internet infrastructure, though it may also indicate that more challenging implementation debates occur in other forums.


Partial agreements



Takeaways

Key takeaways

Universal Acceptance is critical for digital inclusion and multilingual internet access, requiring all domain names and email addresses to work equally across applications regardless of script or characters


The IDN market faces significant challenges with negative growth trends (-1% for ccTLDs, -5.5% for gTLDs) and high concentration (top 10 TLDs control 75% of market)


Technical barriers remain substantial – over half of registries don’t support Unicode addresses in email servers, and three-fourths don’t permit Unicode contact emails


Main adoption barriers are technical compatibility issues, low public awareness, and entrenched habits of using ASCII domains


EU’s multilingual digital policy strongly supports Universal Acceptance as aligned with its ‘united in diversity’ identity and 24 official languages


Multilingual internet access has broader social implications, potentially reducing internet shutdowns by increasing local ownership and preserving cultural heritage online


Resolutions and action items

Call for more TLDs to participate in the IDN World Report data collection to improve research comprehensiveness


Request for communities to share their IDN stories and case studies for publication on the IDN World Report website


Need to reach out to more ccTLDs to expand participation in the annual research surveys


Follow-up discussion needed on Sámi language character support in IDN policies between presenters and Tapani Tarvainen


Unresolved issues

How to overcome the low uptake of IDNs despite technical availability (88% of European ccTLDs support IDNs but adoption remains around 1%)


How to incentivize commercial registrars to promote IDNs when business case is unclear


Specific technical solutions for supporting minority language characters like Sámi letter Eng in IDN policies


How to bridge the gap between policy intentions and actual implementation of multilingual internet infrastructure


Strategies for increasing user awareness and changing entrenched ASCII domain usage habits


Suggested compromises

None identified


Thought provoking comments

We’re very worried about Internet shutdowns, that, you know, despite all these commitments, they continue to be on the rise. And very intelligent people that know the local characteristics of these shutdowns, they tell us that if the Internet was in our local language, we would not see a shutdown. Because people would really feel the Internet as themselves, and politicians could not perform that shutdown.

Speaker

Esteve Sanz


Reason

This comment is profoundly insightful because it connects linguistic accessibility to political resistance and digital rights in an unexpected way. It suggests that when people have genuine ownership of the Internet through their native language, it becomes much harder for authoritarian governments to justify shutting it down. This reframes Universal Acceptance from a technical convenience issue to a fundamental tool for digital freedom and resistance to censorship.


Impact

This comment significantly elevated the discussion from technical implementation challenges to geopolitical implications. It prompted Regina to acknowledge this was a completely new perspective they hadn’t considered for future studies, showing how it opened new research directions. The comment transformed the conversation from focusing on European multilingualism to considering global digital rights and political freedom.


Have you been following the use of IDN top-level domains and ICANN’s policies in that regard? You mentioned Sámi, and in particular, there is a policy going on that looks like the Sámi letter Eng will be much less usable than some other special characters because nobody out there cares, basically. So that these very small minority languages and their characters would need someone to speak for them.

Speaker

Tapani Tarvainen


Reason

This comment is thought-provoking because it exposes a critical gap between the idealistic goals of Universal Acceptance and the harsh reality of implementation. It highlights how minority languages face systemic disadvantage not just from market forces, but from the very governance structures meant to support linguistic diversity. The phrase ‘nobody out there cares’ starkly illustrates how technical decisions can perpetuate linguistic marginalization.


Impact

This intervention shifted the discussion from celebrating progress in multilingual internet to confronting uncomfortable truths about whose languages actually get supported. It forced the speakers to acknowledge the limitations of current approaches and the need for active advocacy for minority languages. The comment introduced a note of urgency and activism that wasn’t present in the earlier technical presentation.


This might seem really low, 1%, but it’s actually in line with what our ccTLDs are experiencing. Also taking into account the technical and other issues, the uptake is not as it could be.

Speaker

Regina Filipova Fuchsova


Reason

While seemingly a simple statistic, this admission is insightful because it reveals the stark disconnect between the technical capability (88% of European ccTLDs support IDNs) and actual usage (only 1%). This honest acknowledgment of failure challenges the narrative of progress and forces a reckoning with why, despite years of development and policy support, multilingual domains remain largely unused.


Impact

This statistic served as a reality check that grounded the entire discussion. It shifted the focus from what’s technically possible to why adoption remains so low, leading to the detailed analysis of barriers (technical issues, low awareness, ASCII habits). It provided the empirical foundation that made the subsequent discussion of challenges more credible and urgent.


Overall assessment

These key comments fundamentally transformed what could have been a routine technical presentation into a nuanced exploration of the complex barriers to digital linguistic equality. Esteve’s insight about Internet shutdowns elevated the stakes from convenience to freedom, while Tapani’s intervention about Sámi characters exposed the gap between policy intentions and implementation reality. Regina’s honest acknowledgment of low adoption rates provided the empirical grounding that made these critiques meaningful. Together, these comments shifted the discussion from celebrating technical achievements to confronting systemic challenges, from European focus to global implications, and from abstract policy goals to concrete advocacy needs. The result was a much richer conversation that acknowledged both the importance of Universal Acceptance and the significant obstacles that remain in achieving true digital linguistic equality.


Follow-up questions

Could the relationship between Internet shutdowns and local language Internet usage be studied further?

Speaker

Esteve Sanz


Explanation

Esteve mentioned an interesting idea from Global South players that if the Internet was in local languages, politicians might be less likely to perform shutdowns because people would feel the Internet belongs to them. This concept wasn’t previously considered and could be valuable for future research.


How can more ccTLDs be encouraged to participate in the IDN World Report data collection?

Speaker

Regina Filipova Fuchsova


Explanation

Regina noted that the study has limitations due to the number of TLDs providing data and made a call for action to address more ccTLDs in their communities to participate in the study.


What are ICANN’s policies regarding IDN top-level domains, particularly for minority languages like Sámi?

Speaker

Tapani Tarvainen


Explanation

Tapani raised concerns about ICANN policies that may make certain characters (like the Sámi letter Eng) less usable than others, suggesting that small minority languages need advocates to speak for them in policy discussions.


When will the IDN variant tables be updated to better support special characters used in minority languages?

Speaker

Regina Filipova Fuchsova (in response to Tapani Tarvainen)


Explanation

Regina acknowledged that the last update was in 2008 and it’s ‘high time to update’ the IDN variant tables to include better support for special characters used in languages like Sámi.


How can IDN stories from different countries be collected and shared?

Speaker

Regina Filipova Fuchsova


Explanation

Regina made a call for action asking participants to share IDN stories from their countries, indicating a need for more case studies and examples to be documented and published.


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

WS #438 Digital Dilemma: AI Ethical Foresight vs Regulatory Roulette

WS #438 Digital Dilemma: AI Ethical Foresight vs Regulatory Roulette

Session at a glance

Summary

This discussion at IGF 2025 focused on the challenge of regulating artificial intelligence in ways that are both ethical and innovation-enabling, examining whether current AI governance frameworks adequately balance these competing demands. The session brought together speakers from multiple regions to explore the tension between AI ethics as surface-level principles versus regulation grounded in ethical foresight that creates real accountability.


Alexandra Krastins Lopes emphasized that ethical foresight must be operationalized rather than theoretical, requiring proactive mechanisms built into organizational governance structures throughout AI systems’ lifecycles. She highlighted the need for global AI governance platforms that can harmonize minimum principles while respecting local specificities, noting that regulatory asymmetry creates concerning scenarios of unaddressed risks and international fragmentation. Moritz von Knebel identified key regulatory gaps including lack of technical expertise among regulators, reactive rather than anticipatory frameworks, and jurisdictional conflicts that create races to the bottom. He advocated for adaptive regulatory architectures and the establishment of shared language around fundamental concepts like AI risk and systemic risk.


Vance Lockton stressed the importance of regulatory cooperation focused on influencing AI design and development rather than just enforcement, noting the vast differences in regulatory capacity between countries. Phumzile van Damme called for binding international law on AI governance and highlighted the exclusion of Global South voices, advocating for democratization across AI’s entire lifecycle including development, profits, and governance. Yasmeen Alduri proposed moving from risk-based to use-case frameworks and emphasized the critical need to include youth voices in multi-stakeholder approaches.


The speakers collectively agreed that overcoming simple dichotomies between innovation and regulation is essential, as safe and trustworthy frameworks actually enable rather than hinder technological advancement.


Keypoints

## Major Discussion Points:


– **Ethical Foresight vs. Surface-Level Ethics**: The distinction between treating AI ethics as a superficial add-on versus embedding ethical foresight as a foundational, operational process that anticipates risks before harm occurs and involves all stakeholders from design to deployment.


– **Regulatory Fragmentation and the Need for Shared Language**: The challenge of different countries and regions developing incompatible AI governance frameworks, creating regulatory gaps, jurisdictional conflicts, and the absence of consensus on basic definitions like “AI systems,” “systemic risk,” and regulatory approaches.


– **Inclusion and Global South Participation**: The exclusion of marginalized communities, particularly from the Global South and Africa, from AI governance discussions due to policy absence, resource constraints, and the dominance of Western perspectives in shaping AI regulatory frameworks.


– **Cross-Border Regulatory Cooperation**: The mismatch between globally operating AI platforms and locally enforced laws, requiring new approaches to international regulatory cooperation that focus on influencing design and development rather than just enforcement actions.


– **Innovation vs. Regulation False Dichotomy**: Challenging the narrative that regulation stifles innovation, with speakers arguing that safe, trustworthy frameworks actually enable innovation by providing predictability and building consumer trust, similar to other regulated industries like aviation and nuclear power.


## Overall Purpose:


The discussion aimed to explore how to develop AI governance frameworks that are both ethically grounded and innovation-friendly, moving beyond reactive approaches to create proactive, inclusive regulatory systems that can address the global, borderless nature of AI technology while respecting local contexts and values.


## Overall Tone:


The discussion maintained a professional, collaborative tone throughout, characterized by constructive problem-solving rather than adversarial debate. Speakers acknowledged significant challenges while remaining optimistic about finding solutions through multi-stakeholder cooperation. The tone was academic yet practical, with participants building on each other’s points and seeking common ground. There was a sense of urgency about addressing AI governance gaps, but this was balanced with realistic assessments of the complexity involved in international coordination and inclusive policymaking.


Speakers

**Speakers from the provided list:**


– **Vance Lockton** – Senior technology policy advisor for the office of the Privacy Commissioner in Canada


– **Alexandra Krastins Lopes** – Lawyer at VLK Advogados (Brazilian law firm), provides legal counsel on data protection, AI, cybersecurity, and advising multinational companies including big techs on juridical matters and government affairs. Previously served at the Brazilian Data Protection Authority and has a civil society organization called Laboratory of Public Policy and Internet


– **Deloitte consultant** – AI Governance Consultant at Deloitte, founder of Europe’s first youth-led non-profit focusing on responsible technology


– **Phumzile van Damme** – Integrity Tech Ethics and Digital Democracy Consultant, former lawmaker


– **Moritz von Knebel** – Chief of staff at the Institute of AI and Law, United Kingdom


– **Moderator** – Digital Policy and Governance Specialist


– **Online moderator 1** – Mohammed Umair Ali, graduate researcher in the field of AI and public policy, co-founder and coordinator for the Youth IGF Pakistan


– **Online moderator 2** – Haissa Shahid, information security engineer in a private firm in Pakistan, co-organizer for the Youth IGF, ISOC RIT career fellow


**Additional speakers:**


– **Yasmeen Alduri** – AI Governance Consultant, Deloitte (mentioned in the moderator’s introduction but appears to be the same person as “Deloitte consultant” in the speakers list)


Full session report

# AI Governance Discussion at IGF 2025


## Balancing Ethics and Innovation in Artificial Intelligence Regulation


### Executive Summary


This session at the Internet Governance Forum brought together international experts to examine how to develop AI governance frameworks that balance ethical responsibility with technological advancement. The hybrid session featured speakers from Canada, Brazil, the United Kingdom, South Africa, and Pakistan, representing perspectives from government agencies, law firms, consultancies, and civil society organizations.


The discussion challenged the assumption that regulation necessarily stifles innovation, with speakers arguing that well-designed governance frameworks can actually enable technological progress. While participants agreed on the need for international cooperation and multi-stakeholder engagement, they offered different approaches to implementation, from flexible principle-based frameworks to binding international agreements.


### Session Structure and Participants


The session experienced initial technical difficulties with audio setup for online participants. The discussion was moderated by a Digital Policy and Governance Specialist, with online moderation support from Mohammed Umair Ali and Haissa Shahid from Youth IGF Pakistan.


**Key Speakers:**


– **Yasmeen Alduri**: AI Governance Consultant at Deloitte and founder of Europe’s first youth-led non-profit focusing on responsible technology


– **Alexandra Krastins Lopes**: Lawyer at VLK Advogados in Brazil, previously with the Brazilian Data Protection Authority


– **Phumzile van Damme**: Integrity Tech Ethics and Digital Democracy Consultant and former lawmaker from South Africa


– **Vance Lockton**: Senior Technology Policy Advisor for the Office of the Privacy Commissioner in Canada


– **Moritz von Knebel**: Chief of Staff at the Institute of AI and Law in the United Kingdom


### Key Themes and Speaker Perspectives


#### Moving Beyond Theoretical Ethics to Operational Implementation


Alexandra Krastins Lopes emphasized that ethical foresight “cannot be just a theoretical exercise” but must involve “proactive mechanisms” embedded in organizational structures. She argued for the need to operationalize ethics through concrete governance processes rather than treating it as an add-on to existing AI development.


Yasmeen Alduri reinforced this point by highlighting implementation challenges, noting that “if the developers cannot implement the regulations because they simply do not understand them, we need translators.” She advocated for moving from risk-based to use case-based regulatory frameworks that better account for how AI systems are actually deployed.


#### Regulatory Fragmentation and International Cooperation


Moritz von Knebel provided a stark assessment of current AI regulation, describing it as having “more gaps than substance” with “knowledge islands surrounded by oceans” of uncertainty. He identified critical shortcomings including lack of technical expertise among regulators and reactive rather than anticipatory frameworks.


Vance Lockton emphasized the importance of regulatory cooperation that focuses on “influencing design and development rather than just enforcement actions.” He highlighted the vast differences in regulatory capacity between countries as a critical factor in designing effective international cooperation mechanisms.


Alexandra Krastins Lopes proposed addressing fragmentation through “global AI governance platforms capable of harmonizing minimum principles on safety, ethics, and accountability while respecting local specificities.”


#### Global South and Youth Inclusion


Phumzile van Damme delivered a powerful critique of current AI governance processes, arguing that “there hasn’t been a proper democratization” of AI. She noted that while there may have been “somewhat of a democratization in terms of access,” true democratization requires addressing “the lifecycle” including “democratization of AI development, democratization of AI profits, and democratization of AI governance itself.”


Van Damme called for “binding international law on AI platform governance” and highlighted the systematic exclusion of African and Global South voices from AI governance discussions.


Yasmeen Alduri emphasized the exclusion of youth voices, noting that “multi-stakeholder approaches” often “forget about youth” despite young people having solutions and being disproportionately affected by AI systems.


#### Challenging the Innovation-Regulation Dichotomy


Multiple speakers challenged the narrative that regulation inherently stifles innovation. Moritz von Knebel argued that “you can have innovation through regulation,” citing examples from the nuclear and aviation industries where “they took off after we had safety standards in place.”


This perspective reframes regulation as potentially enabling innovation by providing predictability, building trust, and creating level playing fields that reward responsible development practices.


### Areas of Agreement and Disagreement


**Consensus Areas:**


– Need for multi-stakeholder engagement across civil society, private sector, and public sector


– Importance of international cooperation while respecting local contexts


– Inadequacy of current reactive regulatory approaches


– Need for adaptive rather than static frameworks


**Different Approaches:**


– **Regulatory mechanisms**: Flexible principle-based approaches versus binding international law


– **Framework design**: Risk-based versus use case-based categorization


– **Implementation focus**: Early-stage development influence versus enforcement cooperation


### Questions and Audience Interaction


The session included questions from online participants about practical implementation challenges and the role of friction between tech companies and regulators. Yasmeen Alduri noted that such friction is expected and that “democratic states specifically live from this friction,” while emphasizing the need for constructive dialogue.


Participants also discussed the challenge of balancing innovation incentives with necessary safety regulations without creating regulatory arbitrage where companies seek the most permissive jurisdictions.


### Practical Recommendations


Speakers proposed several concrete steps:


– Establishing global AI governance platforms for harmonizing minimum principles


– Creating independent technical advisory groups and capacity building programs for regulators


– Developing international dialogues to establish consensus on key AI definitions and terminology


– Creating spaces for meaningful co-creation between different stakeholders


– Shifting narratives around AI from inevitability to focus on desired societal outcomes


### Unresolved Challenges


The discussion highlighted several ongoing challenges:


– How to achieve definitional consensus across different regulatory traditions and cultures


– Addressing vast disparities in regulatory capacity and resources between countries


– Developing practical mechanisms for meaningful Global South and youth inclusion


– Implementing effective cross-border enforcement while respecting national sovereignty


### Conclusion


The discussion demonstrated both the potential for collaborative AI governance and the complexity of implementation. While speakers found common ground on fundamental principles, they revealed significant challenges in translating shared values into coordinated action. The session’s emphasis on inclusion and its challenge to the innovation-regulation dichotomy provide important foundations for future AI governance efforts, though substantial work remains to develop practical, implementable frameworks that serve global needs.


Session transcript

Moderator: I hope everyone can hear me. Hello, everyone. Good morning and welcome to this session, Digital Dilemma: AI Ethical Foresight versus Regulatory Roulette. I’m a Digital Policy and Governance Specialist, and this is a conversation that I’ve been looking forward to for many weeks. Today is the final day of IGF 2025, and it’s fitting that we’re closing with one of the most important global governance challenges. How to regulate artificial intelligence in a way that’s both ethical and enabling? Is that even possible? As the awareness of AI’s power has risen, the dominant response has been to turn to AI ethics. Ethics, simply put, is a set of moral principles. We’ve seen governments, companies, and institutions roll out principles, guidelines, and frameworks meant to guide responsible AI development. But here is the problem. When ethics is treated as a surface-level add-on, it often lacks the teeth to create real accountability or change, and that’s where regulation grounded in ethical foresight becomes essential. We’ll be unpacking the challenge today through insights from multiple regions and perspectives. I’m joined in person by three incredible speakers: Yasmeen Alduri, AI Governance Consultant at Deloitte; Alexandra Krastins Lopes, lawyer at VLK Advogados; and Phumzile van Damme, Integrity Tech Ethics and Digital Democracy Consultant, who will be joining us in just a short while. We also have a vibrant virtual component; my colleagues Mohammed Umair Ali and Haissa Shahid are the online moderators. Umair, over to you to introduce yourselves and our online speakers.


Online moderator 1: Hello, everyone. Hello, everyone. Just mic testing. Am I audible? Yes. Great. So welcome, everyone. Welcome to our session. I’m Mohammed Umair Ali. I’ll be serving as the co-host and online moderator for this very session. Briefly about my introduction. I’m a graduate researcher in the field of AI and public policy. I’m also the co-founder and the coordinator for the Youth IGF Pakistan. And by my side is Haissa Shahid. Haissa, can you turn on your camera?


Online moderator 2: Hello, everyone. Sorry, I can't turn on the camera right now. But hello from my side. I'm Haissa Shahid from Pakistan. And I'm also the co-organizer for the Youth IGF. Along with that, I'm serving as an information security engineer in a private firm in Pakistan. And I'm also an ISOC RIT career fellow this year. So, I'll bring forward the session ahead. Thank you.


Online moderator 1: And moving towards the introduction of our speakers, we are joined by speakers from the United Kingdom and Canada. To start with, Mr. Vance Lockton, who is the senior technology policy advisor for the Office of the Privacy Commissioner of Canada. And Mr. von Knebel Moritz from the United Kingdom, who is the chief of staff at the Institute of AI and Law. Welcome aboard, everyone. Over to you, Taiba. Taiba, am I audible?


Moderator: Thank you so much for letting me know. I'm going to repeat myself. We'll begin with a round of questions for each of our five speakers, one question each, about five minutes per speaker. Towards the end, we will be taking pictures of the speakers with the audience, so please don't run off too quickly at the end. Alexandra, I'll start with you. So many of the AI frameworks we see today cite ethics as a very important aspect. But when you look at it a bit closely, ethics often feels like an add-on rather than a foundational principle. From your perspective, is that enough? And more importantly, what does it look like to truly embed ethical foresight into governance structures so that it shapes the direction of AI development from the very start?


Alexandra Krastins Lopes: Great, thanks. It's an honor to contribute to this important discussion. And while I proudly founded a civil society organization called Laboratory of Public Policy and Internet, and served for a few years in the Brazilian Data Protection Authority, today I speak from the private sector perspective. I represent Felicade Avogados, a Brazilian law firm where I provide legal counsel on data protection, AI, and cybersecurity, advising multinational companies, including big techs, on legal matters and government affairs. And when we talk about embedding ethical foresight into AI governance, we're not talking about simply including a list of ethical principles in a regulation or policy document. We're talking about building a proactive and ongoing process, one that is operational, not theoretical. Ethical foresight means anticipating risks before harm occurs. So it requires mechanisms that allow organizations to ask the right questions before AI systems are deployed. Are these systems fair? Are they explainable? Do they reinforce historical discrimination? Are they safe across different contexts? And this ethical foresight must be built into organizational governance structures, including AI ethics committees, impact assessments, and internal accountability protocols that operate across the entire lifecycle of AI systems. It also requires clarity about roles and responsibilities within organizations. Therefore, ethical foresight is not just the job of the legal or the compliance team; it needs engagement from product designers, data scientists, business leaders, and most importantly, it needs board-level commitment. At the same time, foresight must be context-aware. Many global frameworks are still shaped primarily by Global North perspectives, with assumptions about infrastructure, economic and regulatory capacity, and also risk tolerance that do not necessarily reflect the realities of the global majority.
So the regulatory debate should be connected to social context: in Brazil, for example, historical inequalities, specific challenges for entrepreneurship, and limited institutional enforcement power. Besides that, embedding ethical foresight in governance requires more than abstract principles; that is, it means setting minimum requirements rather than imposing a one-size-fits-all model, with tools that can be adapted to specific conditions, such as voluntary codes of conduct, soft law mechanisms, AI ethics boards, and flexible risk-based approaches. Additionally, national policies may have different strategic goals, such as technology development. This regulatory asymmetry creates a highly concerning scenario of unaddressed ethical and social risks, international regulatory fragmentation, and direct normative conflicts. Embedding AI foresight requires international coordination with minimum safeguards, promotion of technological progress, investment, and global competitiveness. Considering this scenario, the proposal I bring for reflection today is the need to establish global AI governance platforms capable of harmonizing minimum principles on safety, ethics, and accountability while respecting local specificities. Besides that, we should incentivize tools such as voluntary codes of practice, which allow businesses to anticipate and mitigate risks without restricting innovation. Additionally, there should be parameters for cross-border regulatory cooperation for enforcement and accountability, since jurisdictional fragmentation makes enforcement difficult, especially when platforms operate globally but accountability is local. In my experience advising companies, I've seen that while they are willing to act responsibly and ethically, international cooperation and legal certainty are essential for that to happen. Thank you.


Moderator: Thank you so much, Alexandra. As she mentioned, ethical foresight is not just the job of a legal team, but truly a multi-stakeholder process. Now we'll turn to our online speaker. Moritz, if you're there, you've been keeping a very close eye on how AI regulatory frameworks are evolving and also where they're falling short. From where you sit, what are some of the key loopholes or blind spots in current AI regulation, and what would it take, in practical terms, to close these gaps and build a governance model that builds digital trust and resilience? Over to you.


von Knebel Moritz: Yeah, thank you, and thanks for having me. People have often asked this question: what are the regulatory gaps that we see? And I think that assumption already is a bit flawed and mistaken, because it would assume that around these gaps we have well thought out systems and processes and frameworks. And I don't think we have. And so I will employ a metaphor used by a former professor of mine, who said that he does not have knowledge gaps, because the gap would be just too large. Rather, he has knowledge islands, and those are surrounded by a lot of ocean that we don't know about. And if those islands are well connected enough, then you can travel back and forth and it still kind of works. But there are more gaps than there is actual substance. So with that caveat, let's look at where those gaps are clearest, or where the ocean is deepest, so to speak. On the domestic level, I would identify two big items. One is a lack of technical expertise. Regulators often lack the deep technical understanding that is needed to effectively oversee those rapidly evolving AI systems. That's especially true in countries that have historically not had the kind of capacity and resources to build up that infrastructure and those institutions. That in turn creates a reliance on industry for setting standards, for self-reporting, for self-assessing the safety of their models. And that makes it very difficult to craft meaningful technical standards on the political level. That's one of these very, very deep sea areas. Another one is that we often see reactive frameworks. Current approaches largely respond to known harms and try to track when a harm occurs, but they do little in terms of anticipating emerging risks. And if you think that that is where a huge amount of the risk in the future is going to come from, then frameworks need to be adaptive to that.
And the pace of AI development, which proceeds at a breakneck speed, consistently outstrips the pace at which regulatory systems can adapt, which is a huge challenge. On the international level, I think there's one big item that then feeds into other problems, which is jurisdictional conflicts, or races to the bottom. And that turns into sometimes unjustified, sometimes justified fears of overregulation. Different national approaches, but also international approaches like the one that the EU has taken, create a lot of complexity for compliance and regulation. China's governance system, which focuses on algorithms, differs fundamentally from the EU's, which is more risk-based. The US and the UK have emphasized innovation-friendly approaches, and that creates a regulatory patchwork that is difficult to navigate, but also creates room for regulatory arbitrage. So similar to how you would move your company to a country that has very low tax rates, you might do the same when it comes to regulation. And so that creates incentives for countries to water down and weaken their regulation. All of this is amplified by the fact that we just do not know enough. A lot of the terms and concepts that are used, like risk and systemic risk, are insufficiently defined, and there's no consensus on what they actually mean. So on that maybe a bit gloomy note, I'll move on to what steps we can take to close these gaps, or rather to build bridges between the islands that we already have, with the EU AI Act and some other frameworks that exist around the globe. One thing is, I think we need adaptive regulatory architectures, which means that rather than having static rules, we need frameworks that can evolve with technology, and rather quickly. That could mean that you have principle-based regulations, and then technical implementation standards can be updated through streamlined processes in secondary regulation.
You could also have regulatory sandboxes, where you try out new approaches and see what works, and have very quick feedback loops, which maybe governments aren't historically good at, but are definitely getting better at as we speak. There's a general need for capacity building. I've talked about how expertise is often lacking in governments, but it's also lacking elsewhere. So creating independent technical advisory groups could be useful, ones that include the perspectives of different people. And that also means that we need inclusive and novel approaches to engage stakeholders. We need dialogues and diplomatic efforts. I've said that some terms are just not sufficiently defined, and without a shared language, the landscape will remain fragmented and incentives to cut corners will remain. We see some work on this, with international reports coming out and the international dialogues on AI safety, for instance, but much more of this is needed to establish consensus on the key questions, because only then can we start to think about regulatory frameworks. And one last thing, because I do work on AI, so I have to flag this: there might also be ways to leverage AI for these processes. I think relatively little work has gone into this and much more is needed. There are many more gaps and many more solutions that we could talk about, but I'll leave it at that for now. And I'm happy to receive questions about any or all of these in the Q&A.


Moderator: Thank you, Moritz. I have a question. You mentioned a shared language. Can you elaborate more on that? Because it sounded really interesting.


von Knebel Moritz: Yeah, so this runs the gamut. It basically spans from the very fundamental: what is a risk? What is an AI system? Luckily, the OECD provides a definition that most people are happy with. But when it comes to, I'll give you a concrete example, the definition of systemic risks in the European Union under the AI Act, this will influence a lot of how future AI models are governed and deployed and developed. And since that is the case, it really matters what people think of as systemic risk. And it's not a field that has centuries of research behind it, but rather decades. So some people understand very different things by systemic risk than others do. And unless you have some consensus on this, it'll be very, very difficult. That's on the fine-grained level of regulation and a specific legislative framework. But there are also high-level conversations about what are even the most important risks, and attempts to create consensus on this. Again, different cultures, different countries see AI in a very different way. And if you consider that, it makes it difficult to cooperate and collaborate internationally. So establishing a shared language around what AI risk means will be needed. And I'm specifically focused on risk here, but the same goes for benefits: what are the benefits that we care most about? And that, again, also touches on the technical expertise, because a lot of this requires a kind of technical knowledge; you need to know what accountability is and what adversarial robustness of an AI model is. So that further complicates things.


Moderator: Thank you so much, Moritz. Very enlightening. And the main key takeaway I could see was that different countries see AI in very different ways, and I think we really need to explore this further. Vance, I'm going to go to you now. Given how borderless technology is, there's a real challenge around jurisdiction and enforcement, and I know Moritz also touched upon it: platforms operate globally, but the laws don't. What opportunities do you see for cross-border regulatory cooperation, and how can regulatory cooperation help tackle jurisdictional conflicts and enforcement barriers, especially for digital platforms that may be lightly regulated in one country but highly impactful globally? Over to you, Vance.


Vance Lockton: Sure, thanks. So, it's a very interesting question when we get into this idea of regulatory cooperation, because, as Moritz has been flagging, not only are the understandings of artificial intelligence different across countries, there's such a difference in what regulatory frameworks actually exist and what can be applied that, once we're into the realm of enforcement, certainly there are some countries that have attempted to make shared enforcement efforts, but I don't think those are necessarily where we need to focus our efforts. They're not necessarily going to be the most effective way to address some of these issues. I think what really needs to be happening, when we're talking about regulatory enforcement, rather than shared enforcement actions, is really: how can regulators have more influence over the design, development, and deployment decisions that are being made when these systems are being created or adopted in the first place? And that's something that a lot of organizations are starting to try to wrap their heads around. I look at a document like the OECD's framework for anticipatory governance of emerging technologies. I don't have any particular connection to it, and the OPC, my organization, hasn't particularly endorsed this document, but it creates a useful framework for me. The elements that they set out of what this regulatory governance needs to look like say, you know, it needs to establish guiding values, have strategic intelligence, bring in stakeholder engagement, agile regulation, international cooperation. Again, I don't want to walk through this framework in particular, but I do think there are useful pieces that come out of that. Because from a regulatory perspective, one of the challenges that we always face is that we don't really get to set the environment in which we work.
Generally speaking, regulators are not going to be the ones who are drafting the laws that they're overseeing. We may be able to provide input into their drafting, but by and large, we're going to be handed a piece of legislation over which we have to have oversight. But even within that, there are elements of discretion as far as what kinds of resources can be applied, or what strategies can be applied to particular problems. And often, in a lot of statutes, there will be considerations with respect to appropriateness or reasonableness. I think that's going to be a critical piece for AI governance, particularly for the challenge that you've set out, this idea of international impacts of AI systems, where, frankly, as a regulator in a Western nation, we aren't necessarily going to be able to have that direct visibility, that direct insight, into the impacts of that system on the global majority. So we aren't necessarily going to have that full visibility into what safeguards you need to be pushing for, or what appropriate purposes or what reasonableness actually means in the context of these AI systems, when we aren't seeing what the true impacts of these systems are. So having those kinds of dialogues amongst regulators to share that cultural knowledge is going to be a critical piece of this going forward. And, you know, just the sheer regulatory cooperation of understanding amongst regulators who has what capacities in AI, as Moritz flags, for a lot of regulators, there isn't necessarily going to be that technical expertise. And that can be quite understandable. For a lot of newer data protection authorities, or some of the less resourced authorities, you might have a dozen staff for the entire regulatory agency. And that's not a dozen staff on AI; that's a dozen staff covering privacy or data protection as a whole. And obviously, data protection isn't the only piece of AI regulation.
It's just the piece I'm most familiar with, which is why I'm focused on it. So, you know, you might have a dozen staff. Canada has about 200 staff, but we have a dedicated tech lab that's designed to be able to get into these systems, take them apart, and really understand what's happening under the hood of a lot of these AI systems. Then you look at something like the United Kingdom's Information Commissioner's Office, which has well over a thousand staff, and a lot of programs set up for ways that you can have more innovative engagements with regulators, around things like regulatory sandboxes and things along those lines. So, again, being able to understand amongst regulators who has what capacity within AI is going to be a critical piece. And I think having that shared understanding of who can do what, and what the true risks and benefits are going to be, is such a critical piece because one of the biggest challenges that I'm seeing from a regulatory perspective, again, from a Western regulatory perspective, I'll say, is that there's this narrative coming out that AI is such an essential piece of the future economic prosperity of a nation, or even the future security of a nation, that any regulation that creates restrictions on the development of AI is seen as a threat, as opposed to an opportunity, or as opposed to a necessary thing to get to responsible innovation. And so we need to be able to have that real understanding of what the potential risks are, what the potential harms are, and realistically what benefits we're trying to achieve, benefits that are not simply making stock prices for a handful of companies go up or making the overall GDP of a country rise.


Moderator: Vance, I’m so sorry. You have 30 seconds left.


Vance Lockton: Yeah, no problem. Not a problem. So, again, we need to have that ability to counter that narrative. And, you know, within countries, regulators are getting better at having that cooperation amongst various sectors. So we are working towards having better cooperation internationally and finding those soft mechanisms, finding those ways to have influence over design and development. Again, it’s a work in progress, but my overall message is going to be we just need to reframe regulatory cooperation away from enforcement cooperation and to finding opportunities to influence that design and development. And I’ll stop there.


Moderator: Thank you so much, Vance. Phumzile, thank you so much for joining us. I'd love to talk to you about power and participation, shifting the lens to inclusion, which is, you know, a very important part of artificial intelligence. We see that AI governance conversations are often dominated by a handful of powerful perspectives. In your view, whose values are shaping the way AI is being regulated now, and what would it take to ensure that the voices of marginalized or underrepresented communities are not just included but also influence decision-making?


Phumzile van Damme: Thank you. I would just like to start with what I view as the ideal on AI governance. I think previous speakers have highlighted a major challenge in that there is, one, a lack of shared language. There is the idea that AI, like all technology, knows no jurisdictional borders. It's global. So what I think the ideal would be is some form of international law around AI governance. And through that process, perhaps run through the UN, there is the ability to get a more inclusive process that includes the language, the ideals, the ethical systems of various countries. So I think that is the direction we need to be going. The experience over the last years with social media and the governance around that, with the difficulty of some countries not having any laws, regulations, or policies in place and others having them, is that there is a lack of proper oversight. So I think it's an opportunity now for international law to be drafted around that. But as it stands right now, outside of what I think is the utopia, my utopia, what I think would be the great thing to have around AI governance, I think there are two challenges of inclusion right now. One is that in the global South, particularly in Africa, there is policy absence. I know in many African countries, AI is just not a consideration, and that's not because it's viewed as unimportant; rather, there is a conflict with bread and butter issues. Most of these are countries that are more focused on those types of issues. So because of that, there is an exclusion from those discussions, because that's just a process that hasn't taken place in those countries.
So I'd like to see more African countries, and countries in the global South, including themselves in those conversations by beginning the process of drafting those policies and crafting those laws. And I always say that it may appear incredibly complex, but there is a way of looking at what other jurisdictions have done and amending those laws to reflect local situations. I don't think it's a situation where there needs to be a reinvention of the wheel. I used to be a lawmaker, and I don't know how to say this in a polite way, but there's a bit of, I don't want to say that tech is, I don't know, tech literacy is not the right phrase, but it's just not something that's…


Moderator: You can be candid. It’s okay.


Phumzile van Damme: Yeah. It just seems like a very difficult topic to handle, and it's just avoided. So just to encourage that it's not that difficult: there are frameworks in existence that can be amended to fit the requirements in each country. So there's that exclusion. And I think there's a theoretical misnomer around what has been seen as a democratization of AI, and I think that is the wrong phrase to use. There hasn't been a democratization. While there may have been somewhat of a democratization in terms of access, in terms of access to tools, gen AI tools, particularly like ChatGPT and all of those, there hasn't been a proper democratization. So if we're going to talk about democratization and AI governance, we need to talk about it through the life cycle. So there needs to be, I wrote this down, not only democratization of AI use, but democratization of AI development, so that in the creation of AI tools a really diverse set of voices is included in those design choices, and in large language models there is someone at the table that says, you need to look at African languages, you need to look at languages from different countries. There needs to be a democratization of AI profits. It can't be a situation where countries are merely seen as sources of income, with no way to share profits. So there needs to be a democratization of AI profits, and a democratization of AI governance itself. So the way I summarize it: I think there needs to be a deliberate effort by the global South, by African countries, by other countries, to insert themselves forcefully into the discussion, and that requires them beginning those processes where they haven't begun, and indeed for there to be a deliberate inviting of those voices to say, come take part in the discussions. And the discussion is not only at the stage where the technology is being deployed, but through the entire life cycle.


Moderator: Thank you. Very insightful. I actually have a question, but we're running a bit short on time, so I'm probably going to leave it towards the end. But I really like the way you said that including diverse perspectives is not actually very difficult; it just seems very difficult in the real world to make sure that regulations or frameworks actually incorporate them. Thank you so much. Yasmin, we're going to move on to you to wrap up this round. We've talked a lot about the challenges, what's not working, but let's imagine what could work. If you had a blank slate to design an ideal AI regulatory framework, one that's forward-looking, ethically grounded, and innovation-friendly, and I know it's a huge ask, what are the three most essential features you would include?


Deloitte consultant: Good morning, everyone. My name is Yasmin Alduri. I'm an AI governance consultant at Deloitte, and I'm the founder of Europe's first youth-led non-profit focusing on responsible technology. So we're basically trying to bring young voices to the responsible tech field and have them share their ideals, their fears, and their ideas on how to make this world a better place, let's just say it like that. I'm really happy van Damme actually brought up the point of international law, because yes, in an ideal world we would all get along, and in an ideal world we would have some kind of definition of AI governance, specifically in international law. But there are two aspects that are really, really, I wouldn't say problematic, but that make this utopia hard to achieve. It's not impossible to achieve, but it's hard. The first part is definitions. We need to get to a point where everyone defines the same aspects the same way. So just the definition of AI systems is a huge, huge issue. We saw this with the European laws already, with the EU AI Act. Then the second part, which is also an issue, is: are countries actually upholding international law already? And if we look at the past years, we will see a huge increase in countries actually not upholding international law. So the first step would be to question that and to push for more accountability from countries. So this is one aspect. I also wanted to bring up one point, because I love the fact that also Saba Tiku Beyene brought up the aspect of inclusion. We need to set a base, which is infrastructure. If we don't have infrastructure in the different countries that lack access to AI, we're not able to democratize the access to AI. So this would also be a base for me. Okay, let's come to the actual question: ideal frameworks.
As I said, in an ideal world, we wouldn't have an approach that is so reactive, as Moritz called it earlier, but one that is adaptive. What we have right now is a risk-based approach: we basically look at AI systems and categorize them into different risk levels, or straight up just prohibit them. And while I do see why we're doing that, the ideal form would actually be a use case framework. So we look at AI systems and at how they're used and in what sectors they're used. The idea behind it is that the same AI system can be used in different use cases and in different ways, which means it can cause different harms to different stakeholders. And the idea here is to make sure that we can actually use these AI systems, but that we really do account for all the different scenarios that could happen, and a use case approach would make this easier for us. It would also make it easier to keep in check the impact assessments that you have to do as someone who's actually implementing those laws. That brings me to the second part, which is: we need frameworks that are actually implementable and understandable. So what exactly do I mean? At the end of the day, those regulations and laws are being implemented by the people who are developing AI systems, because those are the ones who are actually building them, and those are the ones who have to uphold them. So if the developers cannot implement the regulations because they simply do not understand them, we need translators, people like me or Alex who try to explain, or be the bridge, between tech and law. But in an ideal world, we would have developers who already understand this. We would have developers who would be able to implement each aspect, each principle, in a way that is not only clearly defined but understandable. So this is the second part.
The third part, and I know that some of you will laugh because you have heard this so many times over the past days: we need to include all stakeholders, multi-stakeholder approaches. So civil society, the private sector, and the public sector need to come together. Now, here's where I disagree with most multi-stakeholder approaches. We talk a lot about including all stakeholders, yet we forget to include the youth. And this is one of the biggest issues I see, because in my work within the Responsible Technology Hub I see not only the potential of youth, but that they often understand the technology better and know how to implement it better. So we need to include them, not only to future-proof our legislation, but to make sure that they're included in the sense of: these are their fears, and these are not only their ideas but possible solutions. And from my work I can tell you that youth, young people and young professionals, have a lot of ideas for solutions to the issues we're facing; they're just not heard. So we need to give them platforms. We need an older generation that leads by example but also gives a platform. We need a young generation that refreshes this discourse every time. And we need spaces of real co-creation. So these kinds of discussions that we're having right now are a great base, but we need spaces where we actually create, where we have enough space to talk and to discuss, but at the same time to bring different disciplines and different sectors together. In my work specifically, I see this working very well: having a space where academia comes, where the public sector comes, where the private sector comes, and where we build intergenerational solutions for issues together. So with that in mind, those would be the three aspects of, I would say, the ideal AI governance framework for me specifically.


Moderator: Right on time. Thank you so much to all our speakers. So many valuable takeaways. Now we open it up to the floor. The mics are on either side of the room. Please introduce yourself. And if you’re joining virtually, feel free to post in the chat or raise your hand. Omer, do we have any questions?


Online moderator 1: Yes, absolutely. We do have questions. I have received questions in the direct message and the chat box, but do we have any questions from the on-site participants before we move on to the online participants?


Moderator: We’re warming up here. So, you know, start with the question.


Online moderator 1: Right, so one of the questions that I received is — and I think it might be for Moritz to, you know, maybe answer this better — just like the concept of internet fragmentation, do you see that we are heading towards an era of AI fragmentation, due to the absence of a consensus-based definition and of a shared language? And if that is the case, how do you suggest we move forward with it?


von Knebel Moritz: Yes, I do think that we see fragmentation, but I also think that framing is maybe a little bit reductive. This often gets portrayed as fragmentation along country lines or jurisdictions — EU, US, China — whereas there are often splits between different factions, their interests, and what they see as the ideal governance system, and all these factions have input. So I see fragmentation happening on multiple levels; it goes beyond different countries or jurisdictions. I am fairly concerned about fragmentation because, as I said, it creates the wrong incentives — a race to the bottom — and it doesn’t build a great base for trust between different partners: different countries, but also different sectors, the non-profit sector, industry, academia, government. So yes, I see this as a concern. In terms of what can be done, I think it goes back to identifying the areas where people already agree and then building up from there. There’s never going to be perfect agreement and total unison on these things, but I think sometimes there’s more room and more overlap than people are willing to give credit for. In the foreign policy or diplomatic domain, Track 1.5 and Track 2 dialogues have been pretty successful, I think, in identifying these areas of consensus. And, at the end of the day, it’s also events and fora like the IGF, where people get together and hopefully, at some point, establish a shared language, or at least hear what other people have to say. I think sometimes people are too ambitious here, right? They want everybody to speak the same language, and cultural anthropologists and others will tell us that that is not actually what we should be aiming for. It is okay for people to speak different languages. But if you dial back your ambition and say, no, we don’t have to have complete agreement, then the first step is hearing how the other side sees things.
Then I think having more of these conferences and as other people have said, diverse representation of voices at these conferences and events can be very useful in charting a path forward towards a shared language. It’s an iterative, it’s a painful process. It’s not going to come overnight, but it doesn’t mean that we shouldn’t invest energy in it.


Online moderator 1: All right. So, there is another question here: can any of the speakers point towards an exemplary AI policy that balances innovation and ethics? I understand this is quite an open-ended question, but if anyone would like to take that up.


Moderator: Anyone from the onsite speakers would like to take this question?


Deloitte consultant: Perfect, let me take this one first. Can I point to any regulation out there that achieves good governance? Even with the news lately, there are still discussions on how to implement specific laws, and there is still a lot of room. For example, in the Middle East, we barely have any official AI governance. We see a trend of discussions — we saw this at the last IGF, where a lot of Middle Eastern and North African states came together and started a discussion — but that discussion relied more on principles than on actual frameworks that could be implemented. So, for whoever asked this question, there is still a lot of room to co-create and to bring regulation into the market.


Moderator: Great, Harsha, I believe you got a question from the chat as well, do you want to go?


Online moderator 2: Yeah, we have another question. Ahmed Khan asked: there has been a recent trend of pushback from tech companies against what they call excessive regulation, which they say restricts innovation and goes against development. So is there a way forward with harmony, or are we going to see the same friction in the AI space as we have seen between social media and tech companies versus government regulation?


Moderator: Anyone from the on-site speakers want to take this, or have a comment to make? We can go online as well. Moritz or Vance, do you want to answer this question? Harsha, can you please repeat the question, though? We didn’t quite catch it, as the connection got a bit fuzzy.


Online moderator 2: So the question is: is there a way forward with harmony, or are we going to see the same friction in the AI space as we have seen between social media and tech companies versus government regulation?


Moderator: So what I understand is that, you know, in the past we’ve seen friction around platform regulation and platform governance. Are we going to see a similar sort of friction around AI frameworks and AI regulation as we move forward? Do you think so, or not? You know, just a one-word answer would be great as well.


Alexandra Krastins Lopes: Let me answer that one. I hope not. That’s why we’re here, having this multi-stakeholder discussion, so we can achieve some kind of consensus. And also, complementing the last question about different kinds of legislation and which would provide safeguards without hindering innovation: I believe the UK has been taking a good approach by not regulating through a specific bill of law — there is no bill going through right now. They are letting the sectoral authorities handle the issues, and they did not define the concepts, so there is no closed concept. This lets the technology evolve, and we can evolve the concepts as well. So I believe that a principle-based approach, rather than strict regulation, would be the best approach, and that would prevent the friction we’re talking about.


Moderator: Okay, great, a perfectly concise answer — and a good thing too, because we’re running a bit short on time. So we’re going to move towards the final stretch. I’d like to hear one last thought from each speaker; please restrict your answer to one or two sentences at most. Yasmin, we’re going to start with you. What is the most urgent action we must take today to align AI governance with ethical foresight?


Deloitte consultant: To give a quick answer to the last question about the friction between tech companies and regulators: yes, we see the friction, yes, there will be more of it, and I would be surprised if we didn’t have it — democratic states in particular live from this friction. So we will see it, and hopefully we will find common ground to discuss these regulations. But what I do think is critical, and what we need as a society regardless of government or tech regulation, is to make sure that we’re critically assessing what we consume, what we say, what we perceive and what we create. What we’re seeing right now is very worrisome in terms of how we’re using specific AI systems without questioning their outputs, and publicizing them without questioning whether the context is right or not. So I believe one of the most important things for us right now, as citizens, is to bring critical thinking back again.


Moderator: Phumzile, would you want to go next?


Phumzile van Damme: Yeah, I’m gonna restrict it to one sentence. I’ve already gone into it. Binding international law on platform governance, AI platform governance.


Moderator: Perfect. Alexandra, would you want to go next?


Alexandra Krastins Lopes: I would like to leave you with a key action point: we must move from ethical intention to institutional implementation. Ethical foresight cannot remain a theoretical aspiration, a paragraph in a regulation, or a corporate statement. We need to build core governance structures within companies and organizations in general.


Moderator: Rightly so. Ethical foresight should not be just a corporate statement. Thank you so much, Alexandra. And we’re going to go online to our online speakers. Vance, if you want to go ahead with two or three sentences to this question, what is the most urgent action we must take today to align AI governance with ethical foresight?


Vance Lockton: I’d say it’s to shift the narrative around AI away from AI being either an inevitability or a wholly necessary piece of future economies, and to think instead about what outcomes we want from future societies and how AI can factor into those.


Moderator: Perfect. Moritz, if you want to go next.


von Knebel Moritz: Yeah, I’m just going to add to that: more generally speaking, adding nuance to the dialogue. Overcoming the simple dichotomy of innovation versus regulation — you can have innovation through regulation; we’ve had that for decades. Overcoming the US-versus-China framing, or the human rights-based versus risk-based approach. Breaking away from these dichotomies that are not helpful and that ignore a lot of the nuance embedded within these debates.


Moderator: Moritz, I’m going to ask you to elaborate more because this is such an interesting point to make and adds so much value to the discussion. If you can elaborate more, I think we might give you one more minute.


von Knebel Moritz: Yeah, there’s much in here, but maybe the one thing that also touches on a question raised in the chat is balancing ethics and innovation, or regulation and innovation. Again, this is often pitted as a choice: you can either regulate or you can innovate, the EU is at a crossroads, we can’t fall behind, we’ve got to innovate, innovate, innovate. Whereas the reality on the ground is that safe and trustworthy frameworks — frameworks that give companies predictability about the future and let customers know and feel that products are safe — are integral to innovation. And we’ve seen this in the past: the nuclear industry and the aviation industry took off after safety standards were in place, because that made it possible to scale operations. I find this very fascinating — for those of you interested, I’ve previously done work on this, writing up case studies on safety standards in other industries and how they contributed to the development of a technology. So, going against the “we have to choose” framing: the UK is an example of a very pro-innovation regulatory approach, and they didn’t step back and say, we’re not going to regulate. They were thinking about how regulation can serve innovation. And I think there are many ways we can do this. Unfortunately, this often gets ignored, because it is easier to craft a narrative that pits two things — and often two people, two camps — against each other.


Moderator: Thank you so much. What I take away from this is that building safe and trustworthy frameworks is integral to innovation — thanks to these brilliant speakers. I also want to thank our audience, both here and online, for their presence and for the conversation. Your presence on the final day of IGF 2025 is very important to us. And thank you to the wonderful IGF media team for coordinating the technical aspects. Before we close, I’d love to invite everyone up for a picture. First, we’re going to have a picture with the speakers, and then I’d like to invite the audience to come on stage and have a picture with the speakers and myself. Thank you very much.



Alexandra Krastins Lopes

Speech speed: 100 words per minute
Speech length: 709 words
Speech time: 424 seconds

Ethical foresight requires proactive mechanisms and organizational structures, not just theoretical principles

Explanation

Alexandra argues that embedding ethical foresight into AI governance requires building proactive and ongoing operational processes rather than simply including ethical principles in policy documents. This means creating mechanisms that allow organizations to ask critical questions about fairness, explainability, and safety before AI systems are deployed.


Evidence

She mentions specific mechanisms like AI ethics committees, impact assessments, and internal accountability protocols that operate across the entire lifecycle of AI systems, requiring engagement from product designers, data scientists, business leaders, and board-level commitment.


Major discussion point

Embedding Ethical Foresight in AI Governance


Topics

Legal and regulatory | Human rights


Agreed with

– Deloitte consultant

Agreed on

Need for multi-stakeholder approaches in AI governance


Need for context-aware approaches that consider local realities rather than one-size-fits-all models

Explanation

Alexandra emphasizes that ethical foresight must be context-aware, noting that many global frameworks are shaped primarily by Global North perspectives with assumptions that don’t reflect the realities of the global majority. She argues for regulatory approaches that set minimum requirements while allowing adaptation to specific local conditions.


Evidence

She provides Brazil as an example, citing historical inequalities, specific entrepreneurship challenges, and limited institutional enforcement power as factors that require context-specific approaches. She mentions tools like voluntary codes of conduct, soft law mechanisms, and flexible risk-based approaches.


Major discussion point

Embedding Ethical Foresight in AI Governance


Topics

Legal and regulatory | Development


Agreed with

– Phumzile van Damme
– von Knebel Moritz

Agreed on

International cooperation and coordination is essential


Moving from ethical intention to institutional implementation within companies and organizations

Explanation

Alexandra argues that ethical foresight cannot remain a theoretical aspiration or merely a paragraph in regulation or corporate statement. Instead, there must be concrete institutional structures built within companies and organizations to operationalize ethical principles.


Major discussion point

Embedding Ethical Foresight in AI Governance


Topics

Legal and regulatory | Economic


Agreed with

– von Knebel Moritz

Agreed on

Current regulatory approaches are insufficient and reactive


Principle-based approaches without strict definitions allow technology and concepts to evolve

Explanation

Alexandra supports the UK’s approach of not regulating with specific legislation but letting sectoral authorities handle issues without defining closed concepts. This allows both technology and regulatory concepts to evolve together rather than being constrained by rigid definitions.


Evidence

She specifically mentions the UK’s approach of not having a specific AI bill and allowing sectoral authorities to handle issues, which prevents concepts from being closed and allows for technological evolution.


Major discussion point

Practical Framework Design


Topics

Legal and regulatory


Disagreed with

– Phumzile van Damme

Disagreed on

Regulatory specificity – Principle-based vs Detailed frameworks



von Knebel Moritz

Speech speed: 165 words per minute
Speech length: 1931 words
Speech time: 702 seconds

Current AI regulation has more gaps than substance, with regulators lacking technical expertise

Explanation

Moritz argues that the assumption of regulatory gaps is flawed because it presumes well-thought-out systems exist around these gaps. Instead, he suggests there are more gaps than actual substance, using a metaphor of knowledge islands surrounded by vast oceans of unknown territory. He identifies lack of technical expertise as a major domestic-level challenge.


Evidence

He uses his former professor’s metaphor about having knowledge islands rather than knowledge gaps because the gaps would be too large. He notes that regulators often lack deep technical understanding needed to oversee rapidly evolving AI systems, especially in countries without historical capacity to build such infrastructure.


Major discussion point

Regulatory Gaps and Technical Challenges


Topics

Legal and regulatory | Development


Disagreed with

– Deloitte consultant

Disagreed on

Regulatory approach – Risk-based vs Use case-based frameworks


Reactive frameworks that respond to known harms rather than anticipating emerging risks

Explanation

Moritz criticizes current approaches as largely reactive, responding to known harms rather than anticipating emerging risks. He argues that since much future risk will come from emerging threats, frameworks need to be adaptive to address this challenge, especially given AI development’s breakneck pace.


Evidence

He notes that current approaches track when harm occurs but do little to anticipate emerging risks, and that the pace of AI development consistently outstrips the pace at which regulatory systems can adapt.


Major discussion point

Regulatory Gaps and Technical Challenges


Topics

Legal and regulatory


Agreed with

– Alexandra Krastins Lopes

Agreed on

Current regulatory approaches are insufficient and reactive


Need for adaptive regulatory architectures with principle-based regulations and streamlined processes

Explanation

Moritz advocates for adaptive regulatory architectures with principle-based regulations where technical implementation standards can be updated through streamlined secondary regulation processes. He suggests regulatory sandboxes and quick feedback loops as mechanisms to achieve this adaptability.


Evidence

He mentions specific tools like regulatory sandboxes for trying out new approaches, quick feedback loops, and the ability to update technical implementation standards through streamlined processes in secondary regulation.


Major discussion point

Regulatory Gaps and Technical Challenges


Topics

Legal and regulatory


Agreed with

– Deloitte consultant

Agreed on

Frameworks must be adaptive rather than static


Need for shared language and consensus on key terms like systemic risk and AI definitions

Explanation

Moritz emphasizes that without shared language, the regulatory landscape will remain fragmented with incentives to cut corners. He notes that key terms like systemic risk are insufficiently defined and lack consensus, making international cooperation difficult when different cultures and countries see AI very differently.


Evidence

He provides the specific example of systemic risk definition in the European Union under the AI Act, noting it will influence how future AI models are governed but lacks consensus. He mentions the OECD provides an AI system definition most people accept, but other terms remain problematic.


Major discussion point

International Cooperation and Jurisdictional Issues


Topics

Legal and regulatory


Agreed with

– Alexandra Krastins Lopes
– Phumzile van Damme

Agreed on

International cooperation and coordination is essential


Adding nuance to dialogue and overcoming false dichotomies between innovation and regulation

Explanation

Moritz argues for breaking away from simple dichotomies like innovation versus regulation, noting that innovation can occur through regulation as has happened for decades. He advocates for overcoming false choices between different approaches and adding nuance to debates.


Evidence

He provides examples from nuclear and aviation industries that took off after safety standards were in place, enabling scaled operations. He mentions the UK as an example of pro-innovative regulatory approach that thinks about how regulation can serve innovation.


Major discussion point

Regulatory Gaps and Technical Challenges


Topics

Legal and regulatory | Economic


Disagreed with

– Online moderator 2

Disagreed on

Innovation vs Regulation relationship


Need for inclusive stakeholder engagement and capacity building across different sectors

Explanation

Moritz emphasizes the need for capacity building and creating independent technical advisory groups that include diverse perspectives. He argues for inclusive and novel approaches to engage stakeholders through dialogues and diplomatic efforts.


Evidence

He mentions the need for independent technical advisory groups and references international dialogues on AI safety as examples of work being done, though noting much more is needed.


Major discussion point

Infrastructure and Capacity Building


Topics

Development | Legal and regulatory



Vance Lockton

Speech speed: 122 words per minute
Speech length: 1120 words
Speech time: 550 seconds

Regulatory cooperation should focus on influencing design and development rather than enforcement

Explanation

Vance argues that rather than focusing on shared enforcement actions, regulators need to find ways to have more influence over the design, development, and deployment decisions made when AI systems are created or adopted. He emphasizes the need to reframe regulatory cooperation away from enforcement toward influencing early-stage development.


Evidence

He references the OECD framework for anticipatory governance of emerging technologies, which includes elements like establishing guiding values, strategic intelligence, stakeholder engagement, agile regulation, and international cooperation.


Major discussion point

International Cooperation and Jurisdictional Issues


Topics

Legal and regulatory


Understanding different regulatory capacities across countries is critical for cooperation

Explanation

Vance emphasizes the importance of regulators understanding who has what capacities in AI governance, noting vast differences in staffing and resources between countries. He argues this understanding is critical for effective international cooperation and sharing of cultural knowledge about AI impacts.


Evidence

He provides specific examples: some newer data protection authorities might have only a dozen staff covering all privacy/data protection (not just AI), Canada has about 200 staff with a dedicated tech lab, while the UK’s ICO has over 1,000 staff with innovative engagement programs like regulatory sandboxes.


Major discussion point

International Cooperation and Jurisdictional Issues


Topics

Legal and regulatory | Development


Shifting narrative away from AI as inevitability toward desired societal outcomes

Explanation

Vance argues for changing the narrative that presents AI as essential for future economic prosperity or national security, where any regulation is seen as a threat rather than an opportunity for responsible innovation. He advocates for focusing on what outcomes society wants and how AI can contribute to those goals.


Evidence

He mentions the problematic narrative that AI is essential for future economic prosperity and security, and that restrictions on AI development are threats rather than opportunities for responsible innovation, noting the need to counter this with focus on actual benefits beyond stock prices or GDP.


Major discussion point

International Cooperation and Jurisdictional Issues


Topics

Economic | Legal and regulatory



Phumzile van Damme

Speech speed: 134 words per minute
Speech length: 813 words
Speech time: 361 seconds

International law through UN processes could enable more inclusive AI governance

Explanation

Phumzile advocates for international law around AI governance, ideally run through the UN, as this would enable a more inclusive process that incorporates the language, ideals, and ethical systems of various countries. She sees this as the ideal direction for AI governance given technology’s borderless nature.


Evidence

She references the experience with social media governance and the difficulty with different countries having varying levels of laws, regulations, or policies, leading to lack of proper oversight.


Major discussion point

Global Inclusion and Representation


Topics

Legal and regulatory


Agreed with

– Alexandra Krastins Lopes
– von Knebel Moritz

Agreed on

International cooperation and coordination is essential


Policy absence in Global South countries excludes them from AI governance discussions

Explanation

Phumzile identifies policy absence in many African and Global South countries as a major challenge for inclusion in AI governance discussions. She notes this isn’t because AI isn’t viewed as important, but because these countries are more focused on basic needs and bread-and-butter issues.


Evidence

She mentions that in many African countries, AI is not a consideration due to focus on more immediate concerns, and encourages these countries to begin policy processes by adapting existing frameworks from other jurisdictions rather than reinventing the wheel.


Major discussion point

Global Inclusion and Representation


Topics

Development | Legal and regulatory


Need for democratization across entire AI lifecycle, not just access to tools

Explanation

Phumzile argues that while there may have been some democratization in access to AI tools like ChatGPT, true democratization requires inclusion across the entire AI lifecycle. This includes democratization of AI development, profits, and governance itself, not just usage.


Evidence

She provides specific examples: democratization of AI development should include diverse voices in design choices and large language models that consider African languages; democratization of AI profits means countries shouldn’t just be sources of income without profit sharing; democratization of governance means participation throughout the technology’s lifetime.


Major discussion point

Global Inclusion and Representation


Topics

Development | Economic


Binding international law on AI platform governance is urgently needed

Explanation

Phumzile identifies binding international law on AI platform governance as the most urgent action needed to align AI governance with ethical foresight. This represents her core recommendation for addressing current governance challenges.


Major discussion point

Global Inclusion and Representation


Topics

Legal and regulatory


Disagreed with

– Alexandra Krastins Lopes

Disagreed on

Regulatory specificity – Principle-based vs Detailed frameworks



Deloitte consultant

Speech speed: 165 words per minute
Speech length: 1349 words
Speech time: 490 seconds

Critical thinking and assessment of AI outputs by citizens is essential for ethical AI use

Explanation

The Deloitte consultant emphasizes that citizens need to critically assess what they consume, say, perceive, and create when using AI systems. She expresses concern about people using AI systems without questioning their outputs or publicizing them without verifying context and accuracy.


Evidence

She notes worrisome trends of people using AI systems without questioning outputs and publicizing content without verifying whether contexts are correct.


Major discussion point

Embedding Ethical Foresight in AI Governance


Topics

Sociocultural | Human rights


Use case-based approach would be more effective than current risk-based categorization

Explanation

The consultant argues that instead of the current risk-based approach that categorizes AI systems by risk levels, an ideal framework would use a use case-based approach. This would examine how AI systems are used and in what sectors, recognizing that the same system can have different harms for different stakeholders depending on its application.


Evidence

She explains that the same AI system can be used in different ways and sectors, potentially causing different harms to different stakeholders, making use case analysis more comprehensive than risk categorization.


Major discussion point

Practical Framework Design


Topics

Legal and regulatory


Agreed with

– von Knebel Moritz

Agreed on

Frameworks must be adaptive rather than static


Disagreed with

– von Knebel Moritz

Disagreed on

Regulatory approach – Risk-based vs Use case-based frameworks


Frameworks must be implementable and understandable by developers who build AI systems

Explanation

The consultant emphasizes that regulations and laws are ultimately implemented by people developing AI systems, so if developers cannot understand or implement the regulations, the frameworks fail. She advocates for clear, understandable frameworks that don’t require translators between tech and law.


Evidence

She mentions that people like herself and Alexandra serve as bridges between tech and law, but in an ideal world, developers would already understand regulations and be able to implement each principle clearly.


Major discussion point

Practical Framework Design


Topics

Legal and regulatory | Development


Multi-stakeholder approaches must include youth voices and create spaces for real co-creation

Explanation

The consultant argues that while multi-stakeholder approaches are commonly discussed, they often forget to include youth voices. She emphasizes that young people often understand technology better and have valuable solutions, but they need platforms and spaces for real co-creation with different sectors and generations.


Evidence

She draws from her work with the responsible technology hub, noting that youth not only have potential but often understand technology better and know how to implement it better. She describes successful spaces where academia, public sector, and private sector come together for intergenerational solutions.


Major discussion point

Practical Framework Design


Topics

Sociocultural | Development


Agreed with

– Alexandra Krastins Lopes

Agreed on

Need for multi-stakeholder approaches in AI governance


Infrastructure development is essential foundation for democratizing AI access

Explanation

The consultant argues that infrastructure must be established as a base for AI democratization, noting that without infrastructure in countries lacking AI access, true democratization cannot occur. She sees this as a foundational requirement before other governance measures can be effective.


Major discussion point

Infrastructure and Capacity Building


Topics

Infrastructure | Development



Moderator

Speech speed: 139 words per minute
Speech length: 1452 words
Speech time: 622 seconds

Session facilitates important dialogue on global AI governance challenges

Explanation

The moderator frames the session as addressing one of the most important global governance challenges: how to regulate AI in a way that’s both ethical and enabling. The session brings together multiple regional perspectives to unpack this challenge through insights from various experts.


Evidence

The moderator notes this is the final day of IGF 2025 and it’s fitting to close with this important topic. The session includes speakers from multiple regions and both in-person and virtual components to ensure diverse perspectives.


Major discussion point

Infrastructure and Capacity Building


Topics

Legal and regulatory


O

Online moderator 1

Speech speed

171 words per minute

Speech length

329 words

Speech time

114 seconds

AI fragmentation is occurring due to absence of consensus-based definitions and shared language

Explanation

Online moderator 1 raises the concern that similar to internet fragmentation, we are heading towards an era of AI fragmentation. This fragmentation is attributed to the absence of consensus-based definitions and shared language around AI governance and regulation.


Evidence

The moderator draws parallels to the concept of internet fragmentation and asks about moving forward given the lack of shared language and consensus-based definitions.


Major discussion point

International Cooperation and Jurisdictional Issues


Topics

Legal and regulatory


Need for exemplary AI policies that balance innovation and ethics

Explanation

Online moderator 1 seeks identification of exemplary AI policies that successfully balance innovation with ethical considerations. This reflects the ongoing challenge of finding regulatory approaches that don’t stifle technological advancement while ensuring ethical safeguards.


Evidence

The moderator acknowledges that this is a broad, open-ended question but seeks concrete examples of policies that achieve this balance.


Major discussion point

Practical Framework Design


Topics

Legal and regulatory | Economic


O

Online moderator 2

Speech speed

156 words per minute

Speech length

221 words

Speech time

84 seconds

Tech companies are pushing back against regulation they view as excessive and innovation-restricting

Explanation

Online moderator 2 highlights the recent trend of tech companies resisting what they perceive as excessive regulation that restricts innovation and development. This raises questions about whether harmony is possible or if we’ll see continued friction between industry and government regulation in the AI space, similar to what occurred with social media.


Evidence

The moderator references the recent trend of pushback from tech companies and draws parallels to friction seen in social media and tech space versus government regulation.


Major discussion point

Regulatory Gaps and Technical Challenges


Topics

Legal and regulatory | Economic


Disagreed with

– von Knebel Moritz

Disagreed on

Innovation vs Regulation relationship


Agreements

Agreement points

Need for multi-stakeholder approaches in AI governance

Speakers

– Alexandra Krastins Lopes
– Deloitte consultant

Arguments

Ethical foresight requires proactive mechanisms and organizational structures, not just theoretical principles


Multi-stakeholder approaches must include youth voices and create spaces for real co-creation


Summary

Both speakers emphasize that effective AI governance requires engagement across multiple stakeholders – Alexandra points to product designers, data scientists, business leaders, and board-level commitment, while the Deloitte consultant specifically advocates for civil society, private sector, and public sector collaboration, with particular emphasis on including youth voices.


Topics

Legal and regulatory | Development


Frameworks must be adaptive rather than static

Speakers

– von Knebel Moritz
– Deloitte consultant

Arguments

Need for adaptive regulatory architectures with principle-based regulations and streamlined processes


Use case-based approach would be more effective than current risk-based categorization


Summary

Both speakers reject static, rigid regulatory approaches. Moritz advocates for adaptive regulatory architectures that can evolve with technology, while the Deloitte consultant proposes moving from risk-based to use case-based approaches that better account for context and application.


Topics

Legal and regulatory


International cooperation and coordination is essential

Speakers

– Alexandra Krastins Lopes
– Phumzile van Damme
– von Knebel Moritz

Arguments

Need for context-aware approaches that consider local realities rather than one-size-fits-all models


International law through UN processes could enable more inclusive AI governance


Need for shared language and consensus on key terms like systemic risk and AI definitions


Summary

All three speakers recognize the need for international coordination while respecting local contexts. Alexandra calls for global AI governance platforms with minimum safeguards, Phumzile advocates for binding international law through UN processes, and Moritz emphasizes the need for shared language and consensus on key terms.


Topics

Legal and regulatory


Current regulatory approaches are insufficient and reactive

Speakers

– von Knebel Moritz
– Alexandra Krastins Lopes

Arguments

Reactive frameworks that respond to known harms rather than anticipating emerging risks


Moving from ethical intention to institutional implementation within companies and organizations


Summary

Both speakers criticize current approaches as inadequate. Moritz identifies reactive frameworks as a key problem, while Alexandra emphasizes that ethical foresight cannot remain theoretical but must be institutionally implemented with concrete structures.


Topics

Legal and regulatory


Similar viewpoints

Both speakers advocate for flexible, principle-based regulatory approaches that avoid rigid definitions and false dichotomies. They both support the idea that regulation can enable rather than hinder innovation when properly designed.

Speakers

– Alexandra Krastins Lopes
– von Knebel Moritz

Arguments

Principle-based approaches without strict definitions allow technology and concepts to evolve


Adding nuance to dialogue and overcoming false dichotomies between innovation and regulation


Topics

Legal and regulatory | Economic


Both speakers emphasize the importance of understanding and building regulatory capacity, particularly recognizing the vast differences in resources and expertise across different countries and the need for capacity building initiatives.

Speakers

– Vance Lockton
– von Knebel Moritz

Arguments

Understanding different regulatory capacities across countries is critical for cooperation


Need for inclusive stakeholder engagement and capacity building across different sectors


Topics

Legal and regulatory | Development


Both speakers recognize that true democratization of AI requires more than just access to tools – it requires fundamental infrastructure development and participation across the entire AI lifecycle, with particular attention to inclusion of underrepresented voices.

Speakers

– Phumzile van Damme
– Deloitte consultant

Arguments

Need for democratization across entire AI lifecycle, not just access to tools


Infrastructure development is essential foundation for democratizing AI access


Topics

Development | Infrastructure


Unexpected consensus

Friction between tech companies and regulators is inevitable and potentially beneficial

Speakers

– Deloitte consultant
– Alexandra Krastins Lopes

Arguments

Critical thinking and assessment of AI outputs by citizens is essential for ethical AI use


Moving from ethical intention to institutional implementation within companies and organizations


Explanation

Unexpectedly, when asked about friction between tech companies and regulators, the Deloitte consultant stated that friction is inevitable and that democratic states ‘live from this friction,’ suggesting it’s a healthy part of the democratic process. Alexandra hoped to prevent such friction through multi-stakeholder discussions, but both acknowledged the reality of this tension while seeing potential positive outcomes.


Topics

Legal and regulatory | Economic


Youth inclusion is critical but often overlooked in AI governance

Speakers

– Deloitte consultant
– Phumzile van Damme

Arguments

Multi-stakeholder approaches must include youth voices and create spaces for real co-creation


Need for democratization across entire AI lifecycle, not just access to tools


Explanation

Both speakers unexpectedly converged on the critical importance of youth inclusion in AI governance. The Deloitte consultant specifically criticized multi-stakeholder approaches for forgetting youth voices, while Phumzile’s call for democratization across the AI lifecycle implicitly includes youth participation. This consensus was unexpected given their different professional backgrounds and regional perspectives.


Topics

Development | Sociocultural


Overall assessment

Summary

The speakers demonstrated strong consensus on several fundamental principles: the need for adaptive rather than static regulatory frameworks, the importance of international cooperation while respecting local contexts, the inadequacy of current reactive approaches, and the critical role of multi-stakeholder engagement including often-overlooked voices like youth.


Consensus level

High level of consensus on core principles with constructive disagreement on implementation approaches. This suggests a mature understanding of AI governance challenges and creates a solid foundation for collaborative policy development, though significant work remains in translating shared principles into practical, coordinated action across different jurisdictions and stakeholder groups.


Differences

Different viewpoints

Regulatory approach – Risk-based vs Use case-based frameworks

Speakers

– Deloitte consultant
– von Knebel Moritz

Arguments

Use case-based approach would be more effective than current risk-based categorization


Current AI regulation has more gaps than substance, with regulators lacking technical expertise


Summary

The Deloitte consultant advocates for moving from risk-based categorization to use case-based approaches, arguing that the same AI system can have different harms depending on application. Moritz focuses more on the fundamental gaps in current regulation and lack of technical expertise, suggesting the problem is more systemic than just the categorization method.


Topics

Legal and regulatory


Regulatory specificity – Principle-based vs Detailed frameworks

Speakers

– Alexandra Krastins Lopes
– Phumzile van Damme

Arguments

Principle-based approaches without strict definitions allow technology and concepts to evolve


Binding international law on AI platform governance is urgently needed


Summary

Alexandra advocates for flexible, principle-based approaches that avoid strict definitions to allow evolution, while Phumzile calls for binding international law, which would necessarily involve more specific and rigid legal frameworks.


Topics

Legal and regulatory


Innovation vs Regulation relationship

Speakers

– von Knebel Moritz
– Online moderator 2

Arguments

Adding nuance to dialogue and overcoming false dichotomies between innovation and regulation


Tech companies are pushing back against regulation they view as excessive and innovation-restricting


Summary

Moritz argues that the innovation vs regulation dichotomy is false and that regulation can enable innovation, while the online moderator highlights the real-world friction where tech companies view regulation as restrictive to innovation.


Topics

Legal and regulatory | Economic


Unexpected differences

Role of friction in democratic governance

Speakers

– Deloitte consultant
– Alexandra Krastins Lopes

Arguments

Critical thinking and assessment of AI outputs by citizens is essential for ethical AI use


Moving from ethical intention to institutional implementation within companies and organizations


Explanation

When asked about friction between tech companies and regulators, the Deloitte consultant stated that friction is expected and that ‘democratic states specifically live from this friction,’ while Alexandra hoped to avoid such friction through multi-stakeholder discussions. This represents an unexpected philosophical disagreement about whether regulatory friction is beneficial or problematic for democratic governance.


Topics

Legal and regulatory | Sociocultural


Overall assessment

Summary

The speakers showed remarkable consensus on high-level goals (ethical AI governance, international cooperation, inclusion) but significant disagreements on implementation approaches, regulatory specificity, and the role of friction in governance processes.


Disagreement level

Moderate disagreement with high convergence on principles but divergent implementation strategies. This suggests that while there is broad agreement on the need for ethical AI governance, the path forward remains contested, which could slow progress toward unified global AI governance frameworks. The disagreements reflect deeper tensions between flexibility vs. standardization, proactive vs. reactive approaches, and different regional perspectives on governance mechanisms.


Takeaways

Key takeaways

Ethical foresight in AI governance must move beyond theoretical principles to institutional implementation with proactive mechanisms, organizational structures, and board-level commitment


Current AI regulation has significant gaps with regulators lacking technical expertise and frameworks being reactive rather than anticipatory of emerging risks


International cooperation requires establishing shared language and consensus on key AI definitions, focusing on influencing design and development rather than enforcement


AI governance discussions lack global inclusion, particularly from the Global South, requiring democratization across the entire AI lifecycle rather than just access to tools


Effective AI frameworks should be use case-based rather than risk-based, implementable by developers, and include multi-stakeholder approaches with youth representation


Safe and trustworthy regulatory frameworks actually enable innovation rather than hinder it, as demonstrated in other industries like aviation and nuclear power


Citizens must develop critical thinking skills to assess AI outputs and question what they consume and create


Infrastructure development is essential for democratizing AI access globally


Resolutions and action items

Establish global AI governance platforms capable of harmonizing minimum principles on safety, ethics, and accountability while respecting local specificities


Create adaptive regulatory architectures with principle-based regulations and streamlined processes for technical implementation standards


Build independent technical advisory groups and capacity building programs for regulators


Develop international dialogues to establish consensus on key AI risk definitions and terminology


Create spaces for real co-creation between academia, public sector, private sector, and intergenerational voices


Implement binding international law on AI platform governance through UN processes


Shift narrative around AI from inevitability to focus on desired societal outcomes


Unresolved issues

How to achieve consensus on fundamental AI definitions and terminology across different countries and cultures


How to balance innovation-friendly approaches with necessary safety regulations without creating regulatory arbitrage


How to address the capacity and resource gaps between different countries’ regulatory authorities


How to effectively include Global South voices in AI governance when many countries lack basic AI policies


How to prevent AI fragmentation while respecting different cultural values and regulatory approaches


How to ensure meaningful youth inclusion in AI governance beyond tokenistic participation


How to implement cross-border enforcement mechanisms for globally operating AI platforms


Suggested compromises

Principle-based regulatory approaches that allow concepts and technology to evolve rather than strict definitions


Voluntary codes of conduct and soft law mechanisms that allow businesses to mitigate risks without restricting innovation


Risk-based approaches with flexible implementation adapted to specific local conditions and contexts


Sectoral regulatory approaches where existing authorities handle AI issues within their domains rather than creating new comprehensive AI laws


Regulatory sandboxes and iterative processes that allow testing of new approaches with quick feedback loops


Minimum safeguards with international coordination while respecting local specificities and strategic goals


Thought provoking comments

I don’t think we have [well thought out systems]. I will employ a metaphor used by a former professor of mine who said that he does not have knowledge gaps because it would be just too large. But he has knowledge islands and those are surrounded by a lot of ocean that we don’t know about.

Speaker

von Knebel Moritz


Reason

This metaphor fundamentally reframes the entire premise of AI regulation discussion. Instead of assuming we have solid regulatory frameworks with mere gaps to fill, Moritz suggests we have isolated islands of knowledge in vast oceans of uncertainty. This challenges the conventional approach to regulatory development and highlights the magnitude of what we don’t know about AI governance.


Impact

This comment shifted the discussion from gap-filling to foundational questioning. It influenced subsequent speakers to think more holistically about regulatory frameworks rather than incremental improvements, and established a more humble, realistic tone about the current state of AI governance knowledge.


We need to establish global AI governance platforms capable of harmonizing minimum principles on safety, ethics, and accountability while respecting local specificities… international cooperation and legal certainty are essential for that to happen.

Speaker

Alexandra Krastins Lopes


Reason

This comment bridges the tension between global coordination and local autonomy – a central challenge in AI governance. Alexandra’s perspective from the Global South adds crucial nuance by acknowledging that one-size-fits-all approaches don’t work, while still advocating for minimum global standards.


Impact

This set up a recurring theme throughout the discussion about balancing international cooperation with local contexts. It influenced later speakers to consider how regulatory frameworks can be both globally coordinated and locally relevant, particularly affecting Phumzile’s call for international law and Vance’s discussion of regulatory cooperation.


There hasn’t been a proper democratization. While there may have been somewhat of a democratization in terms of access… we need to talk about it through the life cycle. So there needs to be democratization of AI development, democratization of AI profits, and democratization of AI governance itself.

Speaker

Phumzile van Damme


Reason

This comment deconstructs the popular narrative of AI ‘democratization’ and reveals it as incomplete. By breaking down democratization into development, profits, and governance phases, Phumzile exposes how current approaches only address surface-level access while ignoring deeper structural inequalities.


Impact

This reframing influenced the discussion to move beyond simple inclusion rhetoric to examine power structures more critically. It connected to Yasmin’s later emphasis on youth inclusion and reinforced the need for more fundamental changes in how AI governance is approached, not just who gets invited to the table.


We need frameworks that are actually implementable and understandable… if the developers cannot implement the regulations because they simply do not understand them, we need translators… but in an ideal world we would have developers who already understand this.

Speaker

Deloitte consultant (Yasmin)


Reason

This comment identifies a critical practical gap between regulatory intent and implementation reality. It highlights that even well-intentioned regulations fail if the people who must implement them cannot understand or operationalize them, pointing to a fundamental communication and education challenge.


Impact

This shifted the conversation from high-level policy design to practical implementation challenges. It influenced the discussion to consider not just what regulations should say, but how they can be made actionable by the technical communities that must implement them.


We need to shift the narrative around AI away from AI being either an inevitability or a wholly necessary piece of future economies, to think about what outcomes we want from future societies and how AI can build into those.

Speaker

Vance Lockton


Reason

This comment challenges the fundamental framing of AI discussions by questioning the assumption that AI development is inevitable or inherently necessary. It redirects focus from technology-driven to outcome-driven thinking, suggesting we should define desired societal outcomes first, then determine AI’s role.


Impact

This reframing influenced the final discussion segment and connected with Moritz’s closing comments about overcoming false dichotomies. It helped establish that AI governance should be purpose-driven rather than technology-driven, affecting how other speakers framed their concluding thoughts.


Overcoming the simple dichotomies of it’s innovation versus regulation. You can have innovation through regulation. We’ve had that for decades… safe and trustworthy frameworks… is integral to innovation… the nuclear industry, the aviation industry, they took off after we had safety standards in place.

Speaker

von Knebel Moritz


Reason

This comment dismantles one of the most persistent false narratives in tech policy – that regulation inherently stifles innovation. By providing concrete historical examples from other industries, Moritz demonstrates how safety standards actually enabled scaling and growth, offering a compelling counter-narrative.


Impact

This fundamentally challenged the framing used by many in the AI industry and provided a research-backed alternative perspective. It influenced the moderator to ask for elaboration and helped conclude the discussion on a note that regulation and innovation can be mutually reinforcing rather than opposing forces.


Overall assessment

These key comments collectively transformed the discussion from a conventional regulatory gap-analysis into a more fundamental examination of AI governance assumptions and power structures. The conversation evolved through several phases: Moritz’s ‘knowledge islands’ metaphor established intellectual humility about current understanding; Alexandra and Phumzile’s comments highlighted global power imbalances and the need for inclusive approaches; Yasmin’s focus on implementation practicality grounded the discussion in real-world constraints; and the final exchanges by Vance and Moritz reframed the entire innovation-regulation relationship. Together, these interventions created a more nuanced, critical, and globally-aware discussion that moved beyond surface-level policy tweaks to examine foundational assumptions about AI development, governance, and societal impact. The speakers built upon each other’s insights, creating a layered analysis that challenged dominant narratives while offering constructive alternatives.


Follow-up questions

How can we establish global AI governance platforms capable of harmonizing minimum principles on safety, ethics, and accountability while respecting local specificities?

Speaker

Alexandra Krastins Lopes


Explanation

This addresses the critical need for international coordination to prevent regulatory fragmentation while accommodating different cultural and economic contexts


What specific mechanisms can be developed to leverage AI for regulatory processes and governance?

Speaker

von Knebel Moritz


Explanation

Moritz mentioned that relatively little work has gone into using AI to improve regulatory frameworks, suggesting this as an underexplored area with potential


How can we develop adaptive regulatory architectures that can evolve quickly with technology rather than having static rules?

Speaker

von Knebel Moritz


Explanation

This addresses the fundamental challenge of regulatory frameworks being outpaced by rapid AI development


What would binding international law on AI platform governance look like in practice?

Speaker

Phumzile van Damme


Explanation

Van Damme identified this as her most urgent recommendation, but the practical implementation details need further exploration


How can we create effective spaces for real co-creation between different disciplines, sectors, and generations in AI governance?

Speaker

Deloitte consultant (Yasmin Alduri)


Explanation

While mentioning success in her work, the specific methodologies and scalable approaches for such co-creation spaces need further development


How can use case-based frameworks be practically implemented compared to current risk-based approaches?

Speaker

Deloitte consultant (Yasmin Alduri)


Explanation

This represents a fundamental shift in regulatory approach that requires detailed exploration of implementation mechanisms


What specific soft mechanisms can regulators use to influence AI system design and development rather than just enforcement?

Speaker

Vance Lockton


Explanation

Lockton emphasized this as critical but noted it’s a work in progress requiring further development of practical approaches


How can we systematically include youth voices in AI governance beyond tokenistic participation?

Speaker

Deloitte consultant (Yasmin Alduri)


Explanation

While identifying youth as having valuable solutions, the specific mechanisms for meaningful inclusion need further research


What are the specific case studies and methodologies for how safety standards in other industries (nuclear, aviation) contributed to innovation that can be applied to AI?

Speaker

von Knebel Moritz


Explanation

Moritz mentioned having done previous work on this but suggested more research is needed to apply these lessons to AI governance


How can we achieve democratization across the entire AI lifecycle – development, profits, and governance – not just access?

Speaker

Phumzile van Damme


Explanation

This represents a comprehensive framework for AI democratization that needs detailed exploration of implementation mechanisms


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Day 0 Event #265 Using Digital Platforms to Promote Info Integrity


Session at a glance

Summary

This panel discussion focused on how digital platforms, particularly TikTok, can be used to promote information integrity and combat misinformation. The session was moderated by Maurice Turner from TikTok’s public policy team and featured panelists from various organizations including a medical professional, representatives from humanitarian agencies, and advocacy groups. Dr. Ahmed Ezzat, a surgeon who creates medical content on TikTok, shared how he transitioned from casual content creation to serious public health communication after one of his posts about childhood infections reached 4.5 million views. He emphasized the importance of accountability and evidence-based practice for healthcare professionals on social media platforms.


Representatives from UNHCR, the Red Cross, and the Internet Society discussed their strategies for creating trustworthy content while facing significant challenges from misinformation campaigns. Gisella Lomax from UNHCR highlighted how misinformation directly harms refugee communities and can even contribute to forced displacement, citing the Myanmar crisis as an example. The panelists emphasized that effective content creation requires extensive expertise, collaboration across multiple disciplines, and significant time investment despite appearing simple to audiences. They stressed the importance of building trust over time rather than focusing solely on metrics like followers or views.


Key challenges identified included operating in polarized environments, competing against well-resourced misinformation campaigns, and balancing the need for quick responses with thorough fact-checking. The discussion concluded with recommendations for increased partnerships between humanitarian organizations, tech companies, and civil society groups to address information integrity challenges more effectively. The panelists agreed that while the battle against misinformation is complex and ongoing, strategic collaboration and maintaining high standards for content creation remain essential for success.


Keypoints

**Major Discussion Points:**


– **Platform strategies for promoting credible information**: Panelists shared how their organizations (TikTok, medical professionals, UNHCR, Red Cross, Internet Society) use digital platforms to disseminate trustworthy content, including partnerships with fact-checkers, creator networks, and evidence-based messaging to reach large audiences quickly and effectively.


– **Challenges in combating misinformation and disinformation**: Discussion covered the difficulties of fighting organized misinformation campaigns, including well-resourced disinformation “farms,” the polarized nature of online discourse especially in conflict zones, and the challenge of responding to false information once it spreads.


– **Building trust and credibility online**: Panelists emphasized the importance of accountability, transparency, using real names/credentials, engaging authentically with audiences, and building long-term relationships rather than focusing solely on metrics like views or followers.


– **Resource constraints and timing challenges**: A key tension emerged around balancing speed versus accuracy – the need to respond quickly to misinformation while taking time to verify information, assess risks, and coordinate with experts across multidisciplinary teams.


– **Collaborative approaches and partnerships**: Strong emphasis on the necessity of partnerships between humanitarian organizations, tech companies, civil society, academics, and content creators to effectively address information integrity challenges, especially given limited resources.


**Overall Purpose:**


The discussion aimed to explore how digital platforms, particularly TikTok, can be leveraged by credible organizations and creators to promote information integrity, share best practices for combating misinformation, and foster collaboration between different stakeholders in the information ecosystem.


**Overall Tone:**


The tone was professional yet conversational, with panelists sharing practical experiences and honest challenges. It remained consistently collaborative and solution-oriented throughout, with speakers building on each other’s points and emphasizing the shared nature of information integrity challenges. The discussion maintained an optimistic outlook about the potential for positive impact through digital platforms while acknowledging the serious difficulties involved.


Speakers

– **Maurice Turner**: Public policy team at TikTok, panel moderator


– **Dr. Ahmed Ezzat**: Surgeon in training doing general and breast oncoplastic surgery in London, creates clinical content and health news on TikTok


– **Eeva Moore**: Works at the Internet Society, focuses on educational content, advocacy, and community stories to help connect the remaining third of the world that’s not online


– **Gisella Lomax**: Leads UNHCR’s capacity on information integrity, works with the UN’s humanitarian agency protecting refugees, forcibly displaced persons, and asylum seekers


– **Ghaleb Cabbabe**: Marketing and advocacy manager of the IFRC (International Federation of Red Cross and Red Crescent Societies), based in Geneva


– **Audience**: Includes Bia Barbosa, a journalist from Brazil and civil society representative at the Brazilian Internet Steering Committee


Additional speakers:


None identified beyond those in the speakers list.


Full session report

# Digital Platforms and Information Integrity: A Multi-Stakeholder Perspective on Combating Misinformation


## Executive Summary


This panel discussion, moderated by Maurice Turner from TikTok’s public policy team, brought together representatives from healthcare, humanitarian organisations, and civil society to examine how digital platforms can promote information integrity and combat misinformation. The session featured Dr. Ahmed Ezzat, a surgeon creating medical content on TikTok; Gisella Lomax from UNHCR’s information integrity team; Ghaleb Cabbabe from the International Federation of Red Cross and Red Crescent Societies; and Eeva Moore from the Internet Society. The discussion highlighted both the transformative potential of digital platforms for credible information dissemination and the complex challenges organisations face when combating well-resourced misinformation campaigns.


## Key Themes and Discussions


### Platform Reach and Impact


Maurice Turner opened by highlighting TikTok’s partnerships with fact-checking organisations and media literacy programmes designed to empower users. The panellists demonstrated the unprecedented reach that digital platforms offer for credible information dissemination. Dr. Ahmed Ezzat provided a compelling example, describing how one of his posts about childhood infections went viral, reaching four and a half million views and being shared 156,000 times at essentially zero cost, illustrating the massive public health potential of these platforms.


Gisella Lomax emphasised that UNHCR was “the first UN agency to create a TikTok account” and noted the vital importance of digital platforms for their work with displaced populations globally. She explained that these platforms are essential for providing lifesaving protection information. Ghaleb Cabbabe outlined how the Red Cross utilises digital platforms for brand awareness, crisis communication, information sharing, and fundraising campaigns, citing recent examples including the Myanmar earthquake, Gaza, and Iran. Eeva Moore added that platforms serve a crucial role in connecting communities through educational content and stories.


### Content Creation and Trust Building


Dr. Ahmed Ezzat, who is part of the Clinical Creator Network in the UK, emphasised the importance of accountability for healthcare professionals, advocating for the use of real names and avoiding commercial endorsements that could damage credibility. He made a particularly insightful observation about public intelligence: “Actually, members of the public, regardless of what their understanding is, bearing in mind in the UK the reading age is about six or seven, the maths age is about three to four, they are phenomenally intuitive. They can sniff out right from wrong. You just need to be able to explain that information in an easy and simple way.”


Gisella Lomax outlined UNHCR’s approach to successful content creation, which requires creativity, partnerships with influencers, uplifting community voices, and making facts entertaining whilst remaining educational. She stressed the importance of building partnerships with tech companies, academic institutions, and civil society organisations.


Eeva Moore provided crucial insight into the hidden complexity of content creation, noting that expertise must be “baked into” the production process with multiple layers of review. She explained that whilst content should appear simple to audiences, it requires extensive behind-the-scenes work. Ghaleb Cabbabe reinforced this point, emphasising that organisations must rely on field experts rather than just marketing teams and prepare proactive strategies including scenario planning.


### The Challenge of Misinformation


A central theme emerged around the asymmetric nature of the battle against misinformation. Dr. Ahmed Ezzat described how misinformation creates an unfair defensive position where evidence-based voices must defend against accusations whilst being constrained by factual accuracy.


Gisella Lomax connected online misinformation to devastating real-world consequences: “Information risks such as hate speech and misinformation are directly and indirectly causing real world harm. And I mean violence, killings, persecution. It can even be a factor in forced displacement, in causing refugees… as we saw in Myanmar back in 2016, 2017, when hate speech… had a decisive role in the displacement, I think, of 700,000 Rohingya refugees into Bangladesh who are still there today.”


Ghaleb Cabbabe highlighted the resource disparity, noting that organisations face opponents with significant resources including state actors and misinformation farms operating in highly polarised environments.


### Time as the Fundamental Challenge


Eeva Moore identified time as the core challenge: “You could boil the challenge down perhaps to one word, and that’s time. If you’re in the business of trying to disrupt or break something, you get to move at a much faster pace than if you’re on the other side of that equation.”


This time constraint creates tension between speed and accuracy. Ghaleb Cabbabe advocated for balancing speed with thoroughness by assessing risks before communicating, sometimes taking time to understand how misinformation evolves rather than rushing responses. Eeva Moore emphasised the need for creative solutions and pre-prepared resources to overcome these constraints.


### Platform Responsibility and Resource Concerns


Gisella Lomax raised concerns about platform responsibility, particularly regarding content moderation in less common languages: “We have seen, to our dismay, a weakening of these capacities and perhaps less resourcing from some companies. And I would extend that, for example, to content moderation in less common languages… Are you adequately providing content moderation in less common languages in these very volatile contexts where you don’t actually have a business argument?”


This exposed a critical gap in platform safety measures for vulnerable populations who communicate in languages that aren’t commercially viable for platforms.


## Areas of Consensus and Strategic Differences


The panellists agreed that digital platforms provide unprecedented reach for information dissemination and that content creation requires multi-layered expertise across organisations. All speakers recognised that misinformation poses serious real-world threats requiring proactive, systematic responses.


However, some tactical differences emerged around response strategies. Dr. Ahmed Ezzat emphasised the defensive disadvantage of being constrained by evidence, whilst Ghaleb Cabbabe advocated for strategic patience. Eeva Moore focused on speed through creative solutions and pre-preparation.


## Audience Engagement and Future Collaboration


The discussion included audience participation, with Brazilian journalist Bia Barbosa asking questions about platform engagement. Gisella Lomax actively invited collaboration from academics, tech companies, and NGOs, specifically mentioning a Wednesday 2 PM event showcasing UNHCR’s South Africa pre-bunking project and referencing “Katie in the pink jacket” for those interested in UNHCR’s information integrity toolkit.


The panellists recommended examining comment sections on Red Cross digital platforms to understand the complexity of misinformation challenges firsthand.


## Key Recommendations


Strategic recommendations included:


– Focusing on quality of engagement rather than vanity metrics


– Building partnerships to share resource burdens


– Preparing proactive content strategies rather than only reactive responses


– Collaborating across organisations rather than building all capabilities internally


– Developing scenario planning and pre-prepared responses


## Unresolved Challenges


Several challenges remained unresolved, including how to adequately resource content moderation in less common languages, addressing weakening trust and safety capacities from some platforms, and overcoming the fundamental time asymmetry between misinformation creators and fact-based responders.


## Conclusion


Maurice Turner’s closing remarks emphasised the tensions revealed in the discussion and the key takeaways about collaboration and resource sharing. The panel demonstrated that effective information integrity work requires sophisticated understanding of platform dynamics, audience psychology, and collaborative resource management.


As Eeva Moore noted, this work “should be labour-intensive” and should “look like a light lift, but actually, in fact, be a pretty heavy one.” The discussion revealed that information integrity is not merely a technical challenge but a humanitarian imperative with life-and-death consequences, requiring sustained collaboration between humanitarian organisations, tech companies, civil society, and academic institutions.


Session transcript

Maurice Turner: Welcome to the folks in the audience here in person as well as online. We’re going to be discussing using digital platforms to promote information integrity. My name is Maurice Turner and I’m on the public policy team at TikTok. I have several panelists here and I’d like to start off with a brief introduction to how we’re facing this challenge at TikTok. And then I will get into introductions for our panelists. And we will jump right in to a back and forth that will also include Q&A from the audience. So if you have questions, please do make a note of them during this panel discussion. And towards the end, we will have microphones on both sides of the table so that you can get up and ask your questions. I’m also encouraging questions from our audience online. So if you have any questions, feel free to go ahead and type those in. And we have someone monitoring that online discussion. So your question can be asked toward the end of the session. At TikTok, we believe this conversation is important because information integrity itself is important, both to inspire our creators to promote credible information, but also to ensure that organizations are doing important work to amplify their own messages. We look forward to hearing more about organizations doing that work in our discussion later on today. At TikTok, we remain committed to using fact-checking to make sure that the content on our platform is free from misinformation. 
We partner with more than 20 accredited fact-checking organizations across 60 different markets. We do this work continuously to ensure that our platform is as free from misinformation as possible. And we also empower our users through media literacy programs to give them resources to recognize misinformation, assess content critically, and report any violative content. And now for our panelists. I’d like to go ahead and start off with introductions. Dr. Ahmed, he’s a creator that produces content. We also have Eeva from the Internet Society, as well as Gisella from the UNHCR. As an icebreaker, let’s go ahead and start off with the introduction. Dr. Ahmed, how would you like to introduce yourself and talk about how you produce content on TikTok?


Dr. Ahmed Ezzat: Thank you very much for the introduction. So I’m Dr. Ahmed Ezzat. I’m a surgeon in training doing general and breast oncoplastic surgery in London. I create clinical content and my focus is health news. And I also do lots of content strategy and campaigns. And one of the draws to being on a platform like TikTok is the force of good it can portray in the way that it’s quite sensationalist, that if you get the balance right from a public health perspective, for instance, in the UK, there’s been a heat wave. You can reach a million people, half a million people within less than 24 hours at zero cost, essentially, which is a massive gain, I find. So I think there’s a massive power of good if it’s harnessed well using evidence-based information.


Maurice Turner: And Eeva, how do you use the platform?


Eeva Moore: At the Internet Society, we are really focused on a couple types of content. Educational. We work to help connect the remaining third of the world that’s not online. And that can look like building infrastructure in one of the 181 countries where we have chapters. So a lot of it is educational. There’s a lot of advocacy. And then there’s a lot of community stories. How do we demonstrate the impact that we’re having in the world without making ourselves the center of the story itself? Thanks.


Gisella Lomax: Hi, everyone. My name is Gisella Lomax, and I lead UNHCR’s capacity on information integrity. For anyone not familiar with UNHCR, and you might not be familiar because certainly this is my first time at the IGF, we’re one of the UN’s largest humanitarian agencies tasked with protecting the world’s refugees, forcibly displaced, asylum seekers. That’s 123 million people now across, I think, 133 countries. So we’re using social media in many sophisticated and extensive ways. It’s vital. As Dr. Ahmed said, the reach is massive. One, to provide vital, lifesaving protection information to communities, both in emergency context, protracted context. And the other is to inform the general public as well as specific audiences and stakeholders, including government partners, civil society, the private sector, about our work. To lift up the voices of our refugee partners as well, and amplify communities, and to inspire. And I would say as a kind of a little unknown fact is that we were the first UN agency to create a TikTok account quite a few years ago now. And I think it’s very interesting the way TikTok really seeks to make important information fun and entertaining. Although that also comes with challenges, but I think we’ll come to that.


Maurice Turner: We’re also joined online by our panelist, Ghalib from the Red Cross. Would you please go ahead and introduce yourself and let us know how you are using TikTok?


Ghaleb Cabbabe: Sure. First, apologies for making this panel hybrid again. And it’s nice to connect even online. And hi, everyone. So I’m Ghalib. I’m the marketing and advocacy manager of the IFRC, the International Federation of Red Cross and Red Crescent Societies, based in Geneva. How do we use digital platforms? Because we’re also tackling different fields, different topics. So depending on the situation, it would be, let’s say, different platforms. But usually the main use is, of course, for brand awareness to make sure that people know what we do, how we do it. We also use digital platforms. And this is a big part of the work in crisis mode sometimes when, for example, we have a disaster, an earthquake, let’s say, lately in Myanmar or the situation also in Gaza or Iran. So this is more in crisis mode. Sometimes it’s to share information. Sometimes it’s to try not to have misinformation. I think we’ll open also this topic more in detail in the next coming minutes. And sometimes we also use digital platforms for direct or indirect fundraising campaigns. So we’re happy to be on TikTok. And I can recall working with the Red Cross that was maybe back in 2017, where it was, we can say, the beginning of the platform. And to see where it is today, I think that’s quite a great achievement. Thank you.


Maurice Turner: I’m going to go ahead and start us off with a specific question for Dr. Ahmed. How did you get your start on TikTok? And what are some of the ways that you achieve your goal of making sure that you are getting content out on the platform and out to an audience specifically related to STEM inequality?


Dr. Ahmed Ezzat: So I think I’ll share my journey because I think it’s representative and it’s an honest one. So when I first started my journey on TikTok as a medical professional and academic surgeon, I wanted to set up my content for actually very different reasons, which is trying to do stuff that’s seemingly for fun, lighthearted. But then I absolutely did not want to be that cliche medic who’s creating content on medicine. But then it was too difficult to resist because there was such a need. And we are quite fortunate in the UK. There’s a very nice and well-organized climate of clinicians that create content. And then I had my first bit of content, which went viral and it hit four and a half million views. And this was on infections in childhood. At the time, there was an outbreak. And then this was shared 156,000 times. And I thought, well, the UK population is, you know, in the 60-plus million; four and a half million is phenomenal. So from that point on, I actually really started to look into TikTok in a very different way, because the impact is so phenomenal. So that if there were outbreaks on say, food poisoning, I’ve had agencies give me information, but say, well, we can’t disclose it publicly just to try and govern misinformation to try and help information. And so I shifted from a name that I had to begin with, which was a nickname, to actually moving on to using my real name on TikTok, which is to celebrate and leverage the whole point of the fact that you should be held accountable for the medical information that you say. And I really do also see a massive shift in healthcare professionals and colleagues of mine, who to begin with, used to look at social media, TikTok, you know, as a vanity project. 
But then now, government have been in, you know, been working with us, you know, institutions that are verified, you know, political organizations to charities to esteemed companies, because they see the value and the reach that you can have just by creating this content. But the really important bit, which is going to your point, how do you go about achieving your goals? TikTok, for example, has been fantastic in trying to balance freedom of information against integrity of information. And so we work together to set up the clinical creator network in the United Kingdom. And it’s through having some, you know, microclimate of clinicians that are really there to do a force of good, but also using evidence based practice in the same way that they would be accountable to the General Medical Council or whichever organization they’re at wherever in the world. I think that’s the most important thing. Because as a healthcare professional, you’re not a lifestyle creator, meaning that if I was to buy a brand of jeans tomorrow and put it on my posts, no one would care. But if I was to suddenly pick up a brand deal to try and promote weight loss medicines, for example, which is an allure that we get every day, then this would absolutely decimate credibility. And so you really have to carry the responsibility extremely carefully as a clinician.


Maurice Turner: Now, a question that I get pretty regularly is how do I make content that’s popular on the platform? And I’m not the expert in that. So I’ll leave it to others. But I think a related question is, what are some of the strategies that might be used to push out content that is actually trustworthy and credible, so that people on the platform are getting the information that they’re looking for? Gisela, do you have any sort of a response for that?


Gisella Lomax: Yeah, definitely. Well, I was going to add to how we use our strategies for social media platforms and digital channels. And I think first on the communication side, that’s very much being creative. It’s partnering with influencers and brands, uplifting refugee-led storytelling, using the facts, but trying to put them across in an entertaining, educative way. So all of this good stuff to amplify refugee voices, spark empathy, drive action offline, as well as online. However, the challenge, of course, is this is increasingly undermined and threatened by the growth of misinformation, disinformation, hate speech, these types of information risks. And so my role, and I used to work more on the communication side, and now I pay tribute to our communications colleagues and the social media team at UNHCR and at countries who do a fantastic job. My work now is on addressing these risks. So at UNHCR, we have this information integrity capacity, and then field-based projects around the world in around nine countries, basically developing a humanitarian response to mis- disinformation and hate speech. We recently launched a toolkit. If you Google UNHCR information integrity, you’ll find it, which has got practical tools, guidance, common standards on all of this quite technical stuff, such as assessing the problem and then different types of responses. And we really have tested a plethora of responses, and I can highlight just a couple, given the time. Obviously, there’s continuing to fill the feeds with proactive, trusted, reliable, credible, accessible, multilingual, the list goes on, content and information. And working with technical, with digital platforms, including TikTok, to try and get this content uplifted, because it often naturally isn’t the sort of content that rises to the top of algorithms. And so those partnerships are key. 
But the other aspects are how do we deal with a spread of, say, hate speech that’s inciting violence, disinformation that’s undermining humanitarian action, misinformation that’s making people feel, you know, more hostile or mistrustful of communities. And I’d like to highlight one quite exciting response. And if I may, a little plug, we have a really interesting project in South Africa, testing the notion of pre-bunking. This is the inoculation theory of trying to help societies and communities become more resilient to hate and lies in partnership with Norway, South Africa. But we’ve also benefited from some technical support from TikTok and from Google and from others as well. And we have an event where we’ll be showcasing this on Wednesday at two o’clock, and my colleague Katie there in the pink jacket can tell you more. So that’s a really positive example. But as you can see, it really runs the gamut, and I’m happy to go into a bit more detail on some of those.


Maurice Turner: Ghaleb, I’ll pass it over to you. As marketing manager, what are some of the strategies that you use to get out that trustworthy information?


Ghaleb Cabbabe: It’s a very, let’s say, timely question. It’s an important one, not an easy one, because unfortunately, we don’t have today all the tools. Because if we look today at the environment of misinformation, let’s face it, we have sometimes companies, farms in some places where their only goal is to produce misinformation. And usually, it’s very hard to fight against these in terms of resources, in terms of means, in terms of targets. So what we do, of course, internally, is always check, double check, and check again, the information we want to share, the information sometimes we want to react on, we want to comment on. I’ll give you an example. It’s been unfortunately the case in the past few weeks and months with the killing of PRCS, Palestinian Red Crescent colleagues in Gaza, for example, in Iran lately as well, where we had the Iranian Red Crescent colleagues killed in the latest attacks. So first, checking the information, rechecking at local, at regional level is, of course, something that has to be done. We also rely on the experts in their own fields. Of course, we’re the marketing, we’re the social media team. But it’s not us who make sometimes the call to a certain line, to a certain reactive line or key messages that we want to share. So relying on experts is really essential. And also the monitoring part, making sure that we try as much as possible to anticipate what could go wrong, to anticipate information that’s going out, and that could lead to a misinformation situation. So the monitoring part is also important. And also, we are always trying to be, trying to be again, because it’s not an easy battle, a step ahead in terms of being proactive. This would be, for example, by preparing reactive lines, by also trying to identify the different potential scenarios and be ready if and when it happens. So these are strategies that we put in place. Again, it’s not an easy file. It’s a very essential one. 
And we’re putting resources, we’re also as much as possible trying to train staff internally, to avoid situations like this, because sometimes also, this type of misinformation is triggered by a certain, let’s say, tweet or information or post, be it on TikTok or other platforms that could also trigger a reaction, a comment that can cascade. So being proactive, training also staff is part of the different strategies. And let’s not forget that the main objective of our communication is, of course, to be as impactful as possible. And we see this information as a threat, not only to the communication that we share on social media, but also sometimes to the credibility of the organization.


Maurice Turner: Thank you. Those were quite interesting strategies. Eeva, what about you?


Eeva Moore: Kind of building on that, I think I think you have to bake expertise into the production process. Anything that you see on the social media platforms should be easily digestible, but it should be built on top of multiple layers of expertise and conversations and trust that we have across our community. We do advocacy efforts in countries that I may never have visited, which means I’m not the person to create that. And so it should be labor-intensive. I mean, dealing with these types of issues when we’re creating them, when we want to be accurate, should look like a light lift, but actually, in fact, be a pretty heavy one. And that takes, I think, just incredible amounts of conversation and listening and being very online and seeing what others are saying. I mean, you have to be consuming a lot of information. Sadly, that includes coming across the disinformation, if you’re going to understand it and to understand how it’s navigating the space. And the expertise also has to be credible, because your audience doesn’t want to be spoken down to. So when I say expertise, that is legal, that is technical, but it’s also the people who are impacted by the – I mean, you touched upon refugee communities. For us, it’s about the credibility of people. We can all be led down a disinformation path, but I think people have a pretty keen sense of whether or not they want to listen to somebody, and credibility is built into that. So I think relationships, it’s sort of the world that you inhabit in real life, how it manifests online is, I think, reflected into that work. So you just have to bake it in, and you have to have your work reviewed, and you have to be willing to produce things that maybe don’t make it out into the world. You also have to be just willing to see – to do your best and listen to others and make sure that by the time it gets out there, it’s accurate.


Maurice Turner: I think that’s so key in that it takes the consumption of so much information and the recognition that not one single person or even one organization has all the expertise. There’s a reliance on other folks in the community in building those relationships so that you can get that information, distill it, and at the end of the day put out a product that looks like it was easy to make because it was actually easy to understand, and there’s quite a bit of work that goes into that. I’m sure all of us, not only on stage and online as well as in the audience, have come across challenges in information integrity. So I’d like to shift the conversation to hear more about some of those challenges that we face specifically, and then also how we were able to overcome those challenges. Dr. Ahmed, can you share a challenge that you faced and how you were able to overcome that in terms of information integrity and maybe even an example where you weren’t necessarily able to overcome it?


Dr. Ahmed Ezzat: Yeah, absolutely, and some fantastic points here. I think just to reiterate, one of the strong points of social media, especially short-form social media, is that it can really reach those with lived experiences, and you are very much there, very much accessible, and very much accountable to the hundreds of comments you get. But of course, disinformation sometimes is difficult to combat. Once it’s out, it’s very difficult, but you’re almost on the defense, almost trying to defend an accusation, and sometimes it is an unfair footing because you can’t go outside of descriptive evidence. But I remember when I, and this was working after a call by a Labour MP, who now they’re in government, who had called out for a, oh, why wouldn’t it be perfect to do a campaign on the MMR vaccine, because there was a rise, and TikTok responded and said, oh, well, we’ll do it. And then I fronted that campaign, and I remember we filmed up in the Midlands of England, in London, in different demographics of different ethnicities, backgrounds, etc. And I actually remember the disconnect between the reality, which was that many people were believing in the vaccination importance, but then this was after the backdrop of the COVID vaccination saga, where first of all, there was a time where these medicines were tested in Africa. That created catastrophes in terms of bringing back lots of bad memories around it, historical memories. But then you had lots of top-down suited and booted officials telling people what to do, and that made things even worse. And the result was that the public health message has transformed forever, in that people really don’t want to be expected to listen to officials telling them what to do. And this is where social media had risen. But then I received a massive backlash when I did the content on the MMR vaccine, and with very, very, very loud voices who are a very, very, very small minority spreading disinformation. 
And going back to what you’ve said, if you build trust and people know that you are doing your best and understand your values over a period of time, as opposed to just a one-off hit, then that tone starts to soften. The other thing I would say, and I’ve had this conversation with very senior officials from another part of the world, who said, well, you may say that your group of viewers is intuitive, but ours isn’t. And this is another well-developed nation, one of the most well-developed and one of the richest. And I was actually saying it’s exactly this mindset which sets us back. Members of the public, regardless of their level of understanding, and bearing in mind that in the UK the reading age is about six or seven and the maths age is about three to four, are phenomenally intuitive. They can sniff out right from wrong. You just need to be able to explain the information in an easy and simple way. And the last thing I’d say, so I don’t take up too much time: when you’re looking at content quality, please don’t look at followership. Don’t look at likes. Look at engagement, the quality of engagement in the comments, and the type of content. Because if the content is high quality, it will deliver on engagement. I wouldn’t worry too much about followerships and millions of views. It’s about the quality of engagement.


Maurice Turner: Ghaleb, can you share a particular challenge that you’ve faced and maybe offer some recommendations to the audience about how you were able to overcome them in this particular space?


Ghaleb Cabbabe: Sure. To give you an indication of the situation that we’re dealing with and sometimes have to face, I would invite people in the audience to go on our digital platforms, and also the digital platforms of other Red Cross and Red Crescent Movement partners like the ICRC, and check the comments section. This is where you would see how complex this environment is. Because today, unfortunately, we are moving, working, reacting, and engaging in a very, very polarized world. And this is even more pronounced because our work is in war zones and conflict zones, which are even more polarized, so misinformation is even more present. So I think the challenge here is really the nature of what we’re facing when we talk about misinformation. As I was saying previously, there are people whose only work is to create this type of misinformation, to feed it, and to make it grow online. So sometimes the battle is not an easy one, because the opponent, if we can call it so, is an opponent with sometimes many resources; you sometimes have states behind it. So I think the challenge comes from here: the very polarized context and also the people who could be behind this misinformation. How we’re trying to deal with this, again, is by sharing, cross-checking, and checking again the information before sharing it, and making sure, sometimes in very sensitive contexts, to anticipate what the risk could be. It is really very important to assess the risk before communicating. And although we are in a world of digital platforms, not only TikTok, where you have to be prompt, you have to be really fast in terms of reaction, one of my recommendations would be that sometimes it’s better not to rush. You have to be prompt, of course. You have to be fast, reactive.
But sometimes, by taking the time to assess, to see how a certain piece of misinformation is evolving online and the direction it’s taking, you get a better understanding of the situation and a better way to tackle it. So, yes, be prompt and fast, but do not rush. It’s a daily, ongoing work. It starts, actually, before the misinformation is shared. It starts with what we plan to produce, what we plan to communicate in our statements, in our press releases, in our interviews with the press. And again, as was mentioned in the panel, it’s not only the social media team’s work. It’s the work of the whole team: experts on the legal side, on the media side, across different areas of expertise. So it’s an ongoing work, a daily work. And again, it’s a complex one, and it’s not an easy battle.


Maurice Turner: Thank you. It seems that not only doing the work beforehand to prepare and have a process, but also having the confidence and the patience to understand what you’re going to be responding to, is part of the strategy for meeting that challenge. Gisela, do you have a particular challenge related to information integrity that you’d be able to share with us, and maybe some recommendations for how you’re able to respond to it?


Gisella Lomax: Thank you. And I like that you say a particular challenge. There are many, but perhaps I could just highlight three, and then there are some recommendations I can make as well. Building off the points that Ghaleb Cabbabe just shared, the first is to say, and sorry, this is a very jargonistic term, that this is multifunctional work. This is not just communications. This is protection, policy, operational, governance and communications work. So really it’s getting that message across and then making sure these multifunctional teams are resourced. That’s more of an internal challenge for how we address it. But there are three very distinct challenges. One is the protection challenge. Information risks such as hate speech and misinformation are directly and indirectly causing real-world harm, and I mean violence, killings, persecution. It can even be a factor in forced displacement, in creating refugees. In fact, we saw this in Myanmar back in 2016 and 2017, when hate speech, which had already been circulating for some time, really exploded and had a decisive role in the displacement, I think, of 700,000 Rohingya refugees into Bangladesh, who are still there today. And then also, on communities, it’s normalising hostility towards refugees and migrants. So those are the harms over time and the very direct harms on the community, the refugee side. The second is on the operational side. It’s increasingly hampering humanitarians from doing their job around the world, in conflicts in Sudan, for example, or other places. Disinformation narratives about humanitarians, perhaps politicising their work, can also erode people’s trust. And, you know, trust is everything. We know that trust can take years or decades to build and can be destroyed very, very quickly.
And I recognise the importance of individual voices, and I celebrate what you’re doing, Dr Ahmed, and people like you, but let’s not forget that we also need to help institutions remain trusted. If people can’t trust public health bodies or UN entities, then we also have a challenge. And then a third, more specific one, is that of trust and safety capacities at digital platforms and tech companies. We have seen, to our dismay, a weakening of these capacities and perhaps less resourcing from some companies. And I would extend that, for example, to content moderation in less common languages. This is a question I always ask of tech companies: are you adequately providing content moderation in less common languages in these very volatile contexts where you don’t actually have a business argument? You’re not selling ads or making money, but your platforms are still delivering fundamental information. So those are three challenges. But on to a recommendation. We’re sat here at an event designed to create international cooperation to solve these problems, so it’s partnerships. Speaking as a humanitarian, for us it’s increasingly about building partnerships with tech companies, digital rights organisations, and civil society NGOs that bring in the skill set, expertise and knowledge that we don’t have. And given a very dramatic and grave financial situation for the humanitarian sector, we’re not going to have more capacity. So how can we partner and collaborate to bring in that knowledge, to test these responses, innovative ones over the long term, like digital literacy, and more immediate ones for these kinds of quite acute harms and conflicts? So my recommendation is to please talk to us. If you’re an academic, there are many research gaps, and we can help give you access to try and fill them. Then we can take that research to inform strategies and policies.
If you’re a tech company, I’ve already mentioned some of the ways we can help each other. If you’re a digital rights organisation or an NGO, I think there are many more. So that’s both a recommendation and a plea from us, and that’s what brings me here to Oslo this week.


Maurice Turner: Eva, I’d like to hear more about maybe more of the technical side of the challenges. What are some of the challenges that you’re hearing about from the folks that you’re working with and how they’re trying to tackle information integrity, and what are some of those recommendations that seem to have surfaced to the top?


Eeva Moore: You could boil the challenge down perhaps to one word, and that’s time. If you’re in the business of trying to disrupt or break something, you get to move at a much faster pace than if you’re on the other side of that equation. So from a purely practical standpoint, that means, as comms people, leaning into the creative side of it. How can we tell a compelling story on a timeline that mirrors, say, a news cycle? We did that with the proposed TikTok ban in the United States. That was a timeline to which a long period of review did not lend itself well. But we have the experts, and they had already done the work. So my very qualified, fantastic colleague Celia went in and just plucked headlines from a blog post, designed it within seconds, and put it out there. Not seconds, perhaps minutes. So it’s knowing what resources are already there that you can act on. I agree with what Ghaleb said, that sometimes you have to wait, but there’s also the aspect where sometimes you can’t afford to, right? So there’s the creative side of it, which is in and of itself interactive, and that’s one way around time. The other way is simply doing the heavy lifting for your community, remembering that if it’s your job to create these things, you shouldn’t be asking somebody else to. We work with volunteers, essentially, around the world. These are people with lives, with families, with jobs, often in challenging circumstances as well as less challenging ones. It’s my job, and my team’s job, to ask as little of them as possible, so that when we do ask, they’re able to actually jump in and help. But time: it’s labor intensive, and it’s not something that people are swimming in.


Maurice Turner: It seems like that’s a poignant point to wrap up that part of the discussion: time. It’s a resource that no one quite has enough of, and as we heard earlier, there needs to be a balance in finding the right moment. Do you move fast, or do you wait to be more thorough, and what’s appropriate for the response you need in that particular situation? At this time, I’d like to open up the floor for questions from the audience. We have microphones, so if you have a question, feel free to step up to the microphones on the side. If you’re a little bit shy, that’s okay too. You can just raise your hand, or maybe pass a note to someone who might have a question. And if you’re online, please do type in a question there and we’ll have that moderated as well.


Audience: May I? Hi, good afternoon. Thank you for the panel. My name is Bia Barbosa. I’m a journalist from Brazil, and I’m also a civil society representative at the Brazilian Internet Steering Committee. Thank you for sharing your experience producing content for TikTok specifically, if I got it right, or for social media in general. How you face these challenges regarding information integrity is, from our perspective as journalists, one of the most important preoccupations at this time. And I would like to hear from you because, regarding the challenges, I didn’t see anyone besides Gisela mention the resources. Thank you very much.


Maurice Turner: Do we have any other questions from the audience? Feel free to step up to the microphone if you have one. Any questions from the online audience? Excellent. Well, I’ll go ahead and wrap us up. Feel free to engage with our panelists as we exit the stage and close out the session, or if not now, then find them throughout the week. I think we had an interesting discussion in hearing how the organizations you all represent utilize digital platforms to put out information and to push back against some of the misinformation that’s out there. We also heard about some strategies for tackling the safety issues around information integrity, and we heard some recommendations as well. I particularly enjoyed hearing about the tensions that naturally arise between having to be viewed as experts while recognizing that you also rely on a network of expertise, so it’s not just one individual or one sole organization. And also the tension in balancing that limited resource of time: being very prompt, but also allowing time for a situation to evolve so you can better understand how best to respond in a way that is not only authentic but also timely, so that even when attention spans are short, the message can get out effectively. So again, please join me in thanking the panel. I’d also like to thank you all for joining this discussion, and I hope you enjoy the rest of your week at the conference. Thank you. Thank you.


M

Maurice Turner

Speech speed

164 words per minute

Speech length

1439 words

Speech time

525 seconds

TikTok partners with over 20 fact-checking organizations across 60 markets and empowers users through media literacy programs

Explanation

TikTok has established partnerships with accredited fact-checking organizations globally to combat misinformation on their platform. They also provide media literacy resources to help users recognize misinformation, assess content critically, and report violative content.


Evidence

More than 20 accredited fact-checking organizations across 60 different markets


Major discussion point

Using Digital Platforms for Information Integrity and Outreach


Topics

Sociocultural | Legal and regulatory


Agreed with

– Dr. Ahmed Ezzat
– Gisella Lomax
– Ghaleb Cabbabe
– Eeva Moore

Agreed on

Digital platforms provide unprecedented reach and impact for information dissemination


D

Dr. Ahmed Ezzat

Speech speed

166 words per minute

Speech length

1293 words

Speech time

465 seconds

Medical professionals can reach millions of people within 24 hours at zero cost using evidence-based information, representing massive public health potential

Explanation

Healthcare professionals can leverage TikTok’s reach to disseminate important public health information rapidly and cost-effectively. This represents a significant opportunity for public health communication when balanced correctly with evidence-based practice.


Evidence

During a UK heat wave, he could reach half a million to a million people within less than 24 hours at zero cost; his first viral content on childhood infections hit 4.5 million views and was shared 156,000 times


Major discussion point

Using Digital Platforms for Information Integrity and Outreach


Topics

Sociocultural | Development


Agreed with

– Maurice Turner
– Gisella Lomax
– Ghaleb Cabbabe
– Eeva Moore

Agreed on

Digital platforms provide unprecedented reach and impact for information dissemination


Healthcare professionals must maintain accountability and use real names to build credibility, avoiding commercial endorsements that could damage trust

Explanation

Medical professionals creating content must be held accountable for their medical information by using their real names rather than nicknames. They must carefully avoid brand deals that could compromise their credibility, particularly those related to medical products.


Evidence

Shifted from using a nickname to real name on TikTok; receives daily offers for weight loss medicine brand deals which would ‘absolutely decimate credibility’


Major discussion point

Content Creation Strategies and Building Trust


Topics

Human rights | Sociocultural


Agreed with

– Gisella Lomax
– Ghaleb Cabbabe
– Eeva Moore

Agreed on

Content creation requires multi-layered expertise and cannot be done by single individuals or departments alone


Misinformation creates an unfair defensive position where evidence-based voices must defend against accusations while being constrained by factual accuracy

Explanation

When misinformation spreads, healthcare professionals find themselves in a defensive position trying to counter false claims. This creates an unfair advantage for misinformation spreaders who aren’t constrained by evidence, while medical professionals must stick to factual, evidence-based responses.


Evidence

Received massive backlash when creating MMR vaccine content, with very loud voices from a small minority spreading disinformation


Major discussion point

Challenges in Combating Misinformation and Disinformation


Topics

Sociocultural | Human rights


Agreed with

– Gisella Lomax
– Ghaleb Cabbabe

Agreed on

Misinformation poses serious real-world threats that require proactive, systematic responses


Disagreed with

– Ghaleb Cabbabe
– Eeva Moore

Disagreed on

Speed vs. Thoroughness in Response Strategy


Focus on quality of engagement and comments rather than follower counts or view numbers when measuring content success

Explanation

The effectiveness of content should be measured by the quality of engagement and meaningful interactions in comments rather than vanity metrics like followers or views. High-quality content will naturally deliver better engagement even without massive followership.


Major discussion point

Recommendations for Effective Information Integrity


Topics

Sociocultural


G

Gisella Lomax

Speech speed

156 words per minute

Speech length

1329 words

Speech time

509 seconds

Digital platforms are vital for providing lifesaving protection information to 123 million displaced people across 133 countries

Explanation

UNHCR uses social media extensively to provide critical protection information to refugees and displaced populations globally. These platforms also help inform the general public and stakeholders about their work while amplifying refugee voices.


Evidence

UNHCR protects 123 million people across 133 countries; was the first UN agency to create a TikTok account


Major discussion point

Using Digital Platforms for Information Integrity and Outreach


Topics

Development | Human rights


Agreed with

– Maurice Turner
– Dr. Ahmed Ezzat
– Ghaleb Cabbabe
– Eeva Moore

Agreed on

Digital platforms provide unprecedented reach and impact for information dissemination


Successful content requires being creative, partnering with influencers, uplifting community voices, and making facts entertaining while remaining educational

Explanation

Effective social media strategy involves creative approaches including influencer partnerships and refugee-led storytelling. The challenge is presenting factual information in an entertaining and educational format that can spark empathy and drive action.


Evidence

UNHCR works with influencers and brands, focuses on refugee-led storytelling, and tries to make facts entertaining and educative


Major discussion point

Content Creation Strategies and Building Trust


Topics

Sociocultural | Human rights


Information risks directly cause real-world harm including violence, killings, and forced displacement, as seen with Rohingya refugees in Myanmar

Explanation

Misinformation and hate speech have direct consequences including violence, persecution, and can even be factors in causing forced displacement. These information risks also normalize hostility toward refugee and migrant communities over time.


Evidence

Myanmar 2016-2017 hate speech explosion had a decisive role in displacing 700,000 Rohingya refugees to Bangladesh who remain there today


Major discussion point

Challenges in Combating Misinformation and Disinformation


Topics

Human rights | Cybersecurity


Agreed with

– Dr. Ahmed Ezzat
– Ghaleb Cabbabe

Agreed on

Misinformation poses serious real-world threats that require proactive, systematic responses


Build partnerships with tech companies, academic institutions, and civil society organizations to leverage expertise and resources that humanitarian organizations lack

Explanation

Given the dramatic financial constraints facing the humanitarian sector, organizations must collaborate with tech companies, researchers, and NGOs to access specialized knowledge and skills. These partnerships can help test innovative responses and fill research gaps.


Evidence

UNHCR has a project in South Africa testing ‘pre-bunking’ with technical support from TikTok, Google, and others in partnership with Norway and South Africa


Major discussion point

Recommendations for Effective Information Integrity


Topics

Development | Legal and regulatory


Agreed with

– Dr. Ahmed Ezzat
– Ghaleb Cabbabe
– Eeva Moore

Agreed on

Content creation requires multi-layered expertise and cannot be done by single individuals or departments alone


G

Ghaleb Cabbabe

Speech speed

138 words per minute

Speech length

231 words

Speech time

100 seconds

Digital platforms serve multiple purposes including brand awareness, crisis communication, information sharing, and fundraising campaigns

Explanation

The International Federation of Red Cross uses digital platforms for various strategic purposes depending on the situation. This includes building brand awareness, crisis response during disasters, combating misinformation, and supporting fundraising efforts.


Evidence

Used platforms during recent disasters in Myanmar, Gaza, and Iran; has been working with Red Cross since 2017 when TikTok was beginning


Major discussion point

Using Digital Platforms for Information Integrity and Outreach


Topics

Sociocultural | Development


Agreed with

– Maurice Turner
– Dr. Ahmed Ezzat
– Gisella Lomax
– Eeva Moore

Agreed on

Digital platforms provide unprecedented reach and impact for information dissemination


Organizations must rely on field experts rather than just marketing teams, and prepare proactive strategies including reactive lines and scenario planning

Explanation

Effective information integrity requires input from subject matter experts beyond just social media teams. Organizations need to prepare proactive strategies, develop reactive messaging in advance, and train staff to prevent situations that could trigger misinformation cascades.


Evidence

Red Cross relies on experts in their fields for key messages; prepares reactive lines and identifies potential scenarios; trains staff internally to avoid triggering misinformation


Major discussion point

Content Creation Strategies and Building Trust


Topics

Legal and regulatory | Sociocultural


Agreed with

– Dr. Ahmed Ezzat
– Gisella Lomax
– Eeva Moore

Agreed on

Content creation requires multi-layered expertise and cannot be done by single individuals or departments alone


Organizations face opponents with significant resources including state actors and misinformation farms operating in highly polarized environments

Explanation

The challenge of combating misinformation is amplified by well-resourced opponents, including organized misinformation farms and sometimes state actors. This is particularly difficult in polarized contexts involving war zones and conflict areas where the Red Cross operates.


Evidence

There are people and companies whose only work is to produce misinformation; sometimes states are behind misinformation efforts; Red Cross works in very polarized war zones and conflict zones


Major discussion point

Challenges in Combating Misinformation and Disinformation


Topics

Cybersecurity | Human rights


Agreed with

– Dr. Ahmed Ezzat
– Gisella Lomax

Agreed on

Misinformation poses serious real-world threats that require proactive, systematic responses


Balance speed with thoroughness by assessing risks before communicating, sometimes taking time to understand how misinformation evolves rather than rushing responses

Explanation

While digital platforms require prompt responses, organizations should resist rushing and instead take time to assess risks and understand how misinformation is developing. This strategic patience can lead to more effective responses even in fast-paced digital environments.


Evidence

Recommends being prompt and fast but not rushing; suggests taking time to see how misinformation evolves online and understanding its direction


Major discussion point

Recommendations for Effective Information Integrity


Topics

Sociocultural | Legal and regulatory


Agreed with

– Eeva Moore

Agreed on

Time constraints create fundamental challenges in responding to misinformation effectively


Disagreed with

– Dr. Ahmed Ezzat
– Eeva Moore

Disagreed on

Speed vs. Thoroughness in Response Strategy


Information verification requires multiple levels of checking and cross-checking, especially when dealing with sensitive situations involving colleague safety

Explanation

Organizations must implement rigorous verification processes that involve checking, double-checking, and checking again at local and regional levels. This is particularly critical when dealing with sensitive information such as attacks on humanitarian workers.


Evidence

Example of verifying information about killing of Palestinian Red Crescent colleagues in Gaza and Iranian Red Crescent colleagues in recent attacks


Major discussion point

Content Creation Strategies and Building Trust


Topics

Human rights | Legal and regulatory


Monitoring and anticipation are essential components of misinformation prevention strategy

Explanation

Organizations need to actively monitor information environments and try to anticipate potential misinformation scenarios before they occur. This proactive approach helps organizations prepare appropriate responses and identify risks early.


Evidence

Red Cross tries to anticipate what could go wrong and what information could lead to misinformation situations


Major discussion point

Recommendations for Effective Information Integrity


Topics

Cybersecurity | Sociocultural


Misinformation threatens organizational credibility and communication impact, requiring it to be treated as a serious institutional risk

Explanation

Misinformation is not just a communications challenge but poses a direct threat to organizational credibility and the effectiveness of their messaging. Organizations must view misinformation as a fundamental threat to their institutional reputation and mission effectiveness.


Evidence

Red Cross sees misinformation as a threat not only to social media communication but also to the credibility of the organization


Major discussion point

Challenges in Combating Misinformation and Disinformation


Topics

Legal and regulatory | Human rights


Red Cross has been working with TikTok since 2017 when the platform was in its beginning stages, witnessing significant platform growth

Explanation

The International Federation of Red Cross has a long history with TikTok, starting their engagement in 2017 when the platform was just emerging. This early adoption has allowed them to observe and benefit from the platform’s tremendous growth over the years.


Evidence

Working with Red Cross since 2017 when TikTok was in the beginning stages; seeing where the platform is today represents quite a great achievement


Major discussion point

Using Digital Platforms for Information Integrity and Outreach


Topics

Sociocultural | Development


Misinformation can be triggered by organizational posts and cascade across platforms, requiring staff training to prevent such situations

Explanation

Organizations must recognize that their own social media posts can inadvertently trigger misinformation campaigns that spread across multiple platforms. This requires comprehensive staff training to help employees understand how their communications might be misinterpreted or weaponized.


Evidence

Sometimes misinformation is triggered by a certain tweet or post on TikTok or other platforms that could cascade; training staff internally is part of different strategies


Major discussion point

Content Creation Strategies and Building Trust


Topics

Sociocultural | Legal and regulatory


The polarized nature of conflict zones and war environments makes misinformation challenges more complex and prevalent

Explanation

Working in conflict zones and war environments creates an especially challenging context for information integrity because these situations are inherently polarized. This polarization makes misinformation more likely to spread and more difficult to counter effectively.


Evidence

Red Cross works in war zones and conflict zones which are very polarized environments; misinformation is even more present in these contexts


Major discussion point

Challenges in Combating Misinformation and Disinformation


Topics

Human rights | Cybersecurity


Combating misinformation is an ongoing daily work that starts before misinformation appears and involves the whole organizational team

Explanation

The fight against misinformation is not just reactive but requires continuous proactive effort that begins with planning what to communicate in statements, press releases, and interviews. This work requires coordination across multiple departments and expertise areas, not just social media teams.


Evidence

It starts with what we plan to produce, communicate in statements, press releases, interviews; involves experts on legal, media, and different expertise areas


Major discussion point

Recommendations for Effective Information Integrity


Topics

Legal and regulatory | Sociocultural


The complexity of misinformation battles requires understanding that opponents may have significant institutional backing and resources

Explanation

Organizations fighting misinformation must recognize they face well-funded and organized opposition that may include state actors and dedicated misinformation operations. This creates an uneven playing field where the battle is inherently difficult due to resource disparities.


Evidence

There are farms in some places where their only goal is to produce misinformation; sometimes states are behind misinformation efforts


Major discussion point

Challenges in Combating Misinformation and Disinformation


Topics

Cybersecurity | Human rights


E

Eeva Moore

Speech speed

169 words per minute

Speech length

746 words

Speech time

263 seconds

Platforms help connect the remaining third of the world that’s not online through educational content and community stories

Explanation

The Internet Society uses digital platforms to support their mission of connecting unconnected populations globally. Their content focuses on educational material, advocacy, and community impact stories across their 181 country chapters.


Evidence

Internet Society works to connect the remaining third of the world that’s not online; has chapters in 181 countries


Major discussion point

Using Digital Platforms for Information Integrity and Outreach


Topics

Development | Infrastructure


Agreed with

– Maurice Turner
– Dr. Ahmed Ezzat
– Gisella Lomax
– Ghaleb Cabbabe

Agreed on

Digital platforms provide unprecedented reach and impact for information dissemination


Expertise must be baked into the production process with multiple layers of review, making content appear simple while requiring heavy behind-the-scenes work

Explanation

Creating trustworthy content requires extensive expertise and consultation built into the production process. While the final product should be easily digestible, it must be supported by multiple layers of expertise, conversations, and trust within the community.


Evidence

Content should look like a light lift but actually be a heavy one; requires consuming lots of information including disinformation to understand the space; involves legal, technical, and community expertise


Major discussion point

Content Creation Strategies and Building Trust


Topics

Sociocultural | Human rights


Agreed with

– Dr. Ahmed Ezzat
– Gisella Lomax
– Ghaleb Cabbabe

Agreed on

Content creation requires multi-layered expertise and cannot be done by single individuals or departments alone


Time constraints create fundamental challenges since disruption moves faster than constructive response, requiring creative solutions and pre-prepared resources

Explanation

The core challenge in information integrity is that those seeking to disrupt or spread misinformation can move much faster than those trying to provide accurate information. This requires creative approaches and having resources prepared in advance to respond quickly when needed.


Evidence

During the proposed TikTok ban in the US, they quickly adapted existing blog content into social media posts within minutes using pre-existing expert work


Major discussion point

Challenges in Combating Misinformation and Disinformation


Topics

Sociocultural | Legal and regulatory


Agreed with

– Ghaleb Cabbabe

Agreed on

Time constraints create fundamental challenges in responding to misinformation effectively


Disagreed with

– Dr. Ahmed Ezzat
– Ghaleb Cabbabe

Disagreed on

Speed vs. Thoroughness in Response Strategy


Prepare heavy-lifting resources in advance for community volunteers and maintain credibility through relationship-building and authentic expertise

Explanation

Organizations should do the intensive preparation work so that when they need community support, they’re asking as little as possible from volunteers who have their own lives and responsibilities. Credibility comes from authentic relationships and expertise that people can trust.


Evidence

Internet Society works with volunteers around the world who have lives, families, and jobs; credibility is built into relationships and people have a keen sense of whether they want to listen to somebody


Major discussion point

Recommendations for Effective Information Integrity


Topics

Development | Sociocultural



Audience

Speech speed

127 words per minute

Speech length

113 words

Speech time

53 seconds

Resource constraints are a major challenge for information integrity work that needs to be addressed

Explanation

A journalist from Brazil representing civil society highlighted that resource limitations are one of the most important concerns when dealing with information integrity challenges. This was noted as an area that wasn’t sufficiently covered in the panel discussion despite being a critical issue.


Evidence

The speaker identified themselves as a journalist from Brazil and a civil society representative at the Brazilian Internet Steering Committee


Major discussion point

Challenges in Combating Misinformation and Disinformation


Topics

Development | Legal and regulatory


Agreements

Agreement points

Digital platforms provide unprecedented reach and impact for information dissemination

Speakers

– Maurice Turner
– Dr. Ahmed Ezzat
– Gisella Lomax
– Ghaleb Cabbabe
– Eeva Moore

Arguments

TikTok partners with over 20 fact-checking organizations across 60 markets and empowers users through media literacy programs


Medical professionals can reach millions of people within 24 hours at zero cost using evidence-based information, representing massive public health potential


Digital platforms are vital for providing lifesaving protection information to 123 million displaced people across 133 countries


Digital platforms serve multiple purposes including brand awareness, crisis communication, information sharing, and fundraising campaigns


Platforms help connect the remaining third of the world that’s not online through educational content and community stories


Summary

All speakers agreed that digital platforms, particularly TikTok, offer massive reach and potential for positive impact when used strategically for information dissemination, whether for health information, humanitarian aid, or global connectivity.


Topics

Sociocultural | Development | Human rights


Content creation requires multi-layered expertise and cannot be done by single individuals or departments alone

Speakers

– Dr. Ahmed Ezzat
– Gisella Lomax
– Ghaleb Cabbabe
– Eeva Moore

Arguments

Healthcare professionals must maintain accountability and use real names to build credibility, avoiding commercial endorsements that could damage trust


Build partnerships with tech companies, academic institutions, and civil society organizations to leverage expertise and resources that humanitarian organizations lack


Organizations must rely on field experts rather than just marketing teams, and prepare proactive strategies including reactive lines and scenario planning


Expertise must be baked into the production process with multiple layers of review, making content appear simple while requiring heavy behind-the-scenes work


Summary

All speakers emphasized that effective content creation requires collaboration across multiple expertise areas, from medical professionals to legal experts to field specialists, rather than relying on single individuals or departments.


Topics

Sociocultural | Legal and regulatory | Human rights


Misinformation poses serious real-world threats that require proactive, systematic responses

Speakers

– Dr. Ahmed Ezzat
– Gisella Lomax
– Ghaleb Cabbabe

Arguments

Misinformation creates an unfair defensive position where evidence-based voices must defend against accusations while being constrained by factual accuracy


Information risks directly cause real-world harm including violence, killings, and forced displacement, as seen with Rohingya refugees in Myanmar


Organizations face opponents with significant resources including state actors and misinformation farms operating in highly polarized environments


Summary

Speakers agreed that misinformation is not just a communications challenge but poses direct threats to safety, security, and organizational credibility, requiring systematic and proactive responses.


Topics

Human rights | Cybersecurity | Legal and regulatory


Time constraints create fundamental challenges in responding to misinformation effectively

Speakers

– Ghaleb Cabbabe
– Eeva Moore

Arguments

Balance speed with thoroughness by assessing risks before communicating, sometimes taking time to understand how misinformation evolves rather than rushing responses


Time constraints create fundamental challenges since disruption moves faster than constructive response, requiring creative solutions and pre-prepared resources


Summary

Both speakers identified time as a critical constraint, noting that while misinformation spreads quickly, effective responses require careful preparation and strategic timing rather than rushed reactions.


Topics

Sociocultural | Legal and regulatory


Similar viewpoints

Both emphasized that successful content strategy should prioritize meaningful engagement and authentic community connections over vanity metrics, focusing on quality interactions and creative approaches to make factual information accessible and engaging.

Speakers

– Dr. Ahmed Ezzat
– Gisella Lomax

Arguments

Focus on quality of engagement and comments rather than follower counts or view numbers when measuring content success


Successful content requires being creative, partnering with influencers, uplifting community voices, and making facts entertaining while remaining educational


Topics

Sociocultural | Human rights


Both humanitarian organizations recognized that they operate in highly polarized, conflict-affected environments where misinformation has direct, severe consequences including violence and displacement, and where they face well-resourced opposition.

Speakers

– Gisella Lomax
– Ghaleb Cabbabe

Arguments

Information risks directly cause real-world harm including violence, killings, and forced displacement, as seen with Rohingya refugees in Myanmar


Organizations face opponents with significant resources including state actors and misinformation farms operating in highly polarized environments


Topics

Human rights | Cybersecurity


Both emphasized the intensive, multi-layered verification and review processes required to ensure information accuracy, particularly in sensitive contexts, while making the final product appear accessible and simple to audiences.

Speakers

– Ghaleb Cabbabe
– Eeva Moore

Arguments

Information verification requires multiple levels of checking and cross-checking, especially when dealing with sensitive situations involving colleague safety


Expertise must be baked into the production process with multiple layers of review, making content appear simple while requiring heavy behind-the-scenes work


Topics

Legal and regulatory | Human rights


Unexpected consensus

Strategic patience in responding to misinformation rather than immediate reaction

Speakers

– Ghaleb Cabbabe
– Eeva Moore

Arguments

Balance speed with thoroughness by assessing risks before communicating, sometimes taking time to understand how misinformation evolves rather than rushing responses


Time constraints create fundamental challenges since disruption moves faster than constructive response, requiring creative solutions and pre-prepared resources


Explanation

Despite the fast-paced nature of social media and the pressure to respond quickly to misinformation, both speakers advocated for strategic patience and thorough preparation over immediate reactions. This is unexpected given the common assumption that social media requires instant responses.


Topics

Sociocultural | Legal and regulatory


The need for institutional trust alongside individual credibility

Speakers

– Dr. Ahmed Ezzat
– Gisella Lomax

Arguments

Healthcare professionals must maintain accountability and use real names to build credibility, avoiding commercial endorsements that could damage trust


Build partnerships with tech companies, academic institutions, and civil society organizations to leverage expertise and resources that humanitarian organizations lack


Explanation

While much discussion around social media focuses on individual influencers and personal branding, both speakers emphasized the continued importance of institutional credibility and partnerships, suggesting that individual and institutional trust must work together rather than compete.


Topics

Human rights | Sociocultural | Legal and regulatory


Overall assessment

Summary

The speakers demonstrated strong consensus on the transformative potential of digital platforms for positive information dissemination, the serious nature of misinformation threats, and the need for multi-stakeholder, expertise-driven approaches to content creation and verification.


Consensus level

High level of consensus with complementary perspectives rather than conflicting viewpoints. The agreement spans across different sectors (healthcare, humanitarian, technology, civil society) suggesting broad applicability of these principles. This consensus indicates that despite different organizational contexts, there are shared challenges and effective strategies that can be applied across sectors for information integrity.


Differences

Different viewpoints

Speed vs. Thoroughness in Response Strategy

Speakers

– Dr. Ahmed Ezzat
– Ghaleb Cabbabe
– Eeva Moore

Arguments

Misinformation creates an unfair defensive position where evidence-based voices must defend against accusations while being constrained by factual accuracy


Balance speed with thoroughness by assessing risks before communicating, sometimes taking time to understand how misinformation evolves rather than rushing responses


Time constraints create fundamental challenges since disruption moves faster than constructive response, requiring creative solutions and pre-prepared resources


Summary

Dr. Ahmed emphasizes the defensive disadvantage of being constrained by evidence when responding to misinformation, while Ghaleb advocates for strategic patience and taking time to assess situations. Eeva focuses on the need for speed through creative solutions and pre-preparation, representing different approaches to the time-versus-accuracy dilemma.


Topics

Sociocultural | Legal and regulatory


Unexpected differences

Platform Algorithm Engagement Strategy

Speakers

– Dr. Ahmed Ezzat
– Gisella Lomax

Arguments

Focus on quality of engagement and comments rather than follower counts or view numbers when measuring content success


Successful content requires being creative, partnering with influencers, uplifting community voices, and making facts entertaining while remaining educational


Explanation

While both work on the same platform (TikTok), Dr. Ahmed advocates for focusing on engagement quality over metrics, while Gisella emphasizes creative partnerships and entertainment value. This represents different philosophies about how to effectively use social media algorithms – organic engagement versus strategic content optimization.


Topics

Sociocultural


Overall assessment

Summary

The panel showed remarkable consensus on core challenges and goals, with disagreements primarily centered on tactical approaches rather than fundamental principles. Main areas of difference included response timing strategies and platform engagement methods.


Disagreement level

Low to moderate disagreement level. The speakers demonstrated strong alignment on identifying challenges (misinformation threats, resource constraints, need for expertise) and broad goals (information integrity, community protection, credible content creation). Disagreements were primarily tactical and complementary rather than contradictory, suggesting different but potentially compatible approaches to shared challenges. This level of agreement is significant as it indicates a mature understanding of the field where practitioners can focus on refining methods rather than debating fundamental approaches.




Takeaways

Key takeaways

Digital platforms like TikTok offer unprecedented reach for credible information – medical professionals can reach millions within 24 hours at zero cost, making them powerful tools for public health and humanitarian communication


Trust and credibility are built over time through consistent, accountable content creation – healthcare professionals must use real names and avoid commercial endorsements that could damage credibility


Effective content creation requires extensive behind-the-scenes work – multiple layers of expertise, fact-checking, and community input must be ‘baked into’ the production process to make content appear simple


Misinformation creates asymmetric warfare where bad actors with significant resources (including state actors) can move faster than those trying to provide accurate information


Quality of engagement matters more than follower counts – focus should be on meaningful comments and interactions rather than vanity metrics


Information risks cause real-world harm including violence, displacement, and erosion of trust in institutions, as demonstrated by the Myanmar Rohingya crisis


Partnerships are essential – humanitarian organizations, tech companies, academic institutions, and civil society must collaborate to leverage complementary expertise and resources


Time is the fundamental challenge – balancing the need for prompt response with thorough fact-checking and risk assessment


Resolutions and action items

UNHCR invited collaboration from academics, tech companies, and NGOs, specifically requesting partnerships to fill research gaps and develop strategies


Gisella Lomax promoted a Wednesday 2 PM event showcasing their South Africa pre-bunking project testing inoculation theory


Recommendation to check UNHCR’s information integrity toolkit by Googling ‘UNHCR information integrity’


Suggestion for audience members to examine comment sections on Red Cross digital platforms to understand the complexity of misinformation challenges


Unresolved issues

How to adequately resource content moderation in less common languages, especially in volatile contexts where platforms don’t have business incentives


The ongoing challenge of weakening trust and safety capacities from some digital platforms and tech companies


How to effectively combat well-resourced misinformation operations including state actors and misinformation farms


The fundamental time asymmetry between those creating misinformation and those trying to counter it with factual information


How to maintain institutional trust while individual voices gain prominence in information sharing


Balancing the need for speed in digital communication with thorough fact-checking and risk assessment processes


Suggested compromises

Balance speed with thoroughness by preparing reactive strategies and scenario planning in advance, allowing for quick but informed responses


Focus on proactive content creation to fill information voids rather than only reactive responses to misinformation


Leverage partnerships to share the resource burden – organizations should collaborate rather than trying to build all capabilities internally


Accept that sometimes it’s better to wait and assess how misinformation evolves rather than rushing to respond immediately


Combine individual creator authenticity with institutional expertise – use real names and personal accountability while maintaining organizational standards


Thought provoking comments

Actually, members of the public, regardless of what their understanding is, bearing in mind in the UK the reading age is about six or seven, the maths age is about three to four, they are phenomenally intuitive. They can sniff out right from wrong. You just need to be able to explain that information in an easy and simple way.

Speaker

Dr. Ahmed Ezzat


Reason

This comment challenges the condescending assumption that the public lacks intelligence or intuition about information quality. It reframes the problem from ‘people are gullible’ to ‘experts need to communicate better,’ which is a fundamental shift in perspective about information integrity challenges.


Impact

This comment shifted the discussion from focusing on platform mechanics to emphasizing the importance of respectful, accessible communication. It influenced the later emphasis on building trust through authentic engagement rather than top-down messaging.


Information risks such as hate speech and misinformation are directly and indirectly causing real world harm. And I mean violence, killings, persecution. It can even be a factor in forced displacement, in causing refugees… as we saw in Myanmar back in 2016, 2017, when hate speech… had a decisive role in the displacement, I think, of 700,000 Rohingya refugees into Bangladesh who are still there today.

Speaker

Gisella Lomax


Reason

This comment elevated the stakes of the discussion by connecting online misinformation to concrete, devastating real-world consequences. It moved beyond abstract concerns about ‘information integrity’ to demonstrate how digital platform content can literally displace populations and cause humanitarian crises.


Impact

This dramatically shifted the tone and urgency of the conversation. It transformed the discussion from a technical/marketing challenge to a humanitarian imperative, influencing subsequent speakers to emphasize the life-or-death importance of their work and the need for stronger partnerships and resources.


You could boil the challenge down perhaps to one word, and that’s time. If you’re in the business of trying to disrupt or break something, you get to move at a much faster pace than if you’re on the other side of that equation.

Speaker

Eeva Moore


Reason

This insight crystallizes a fundamental asymmetry in information warfare – that destructive actors have inherent speed advantages over those trying to build trust and provide accurate information. It’s a profound observation about the structural challenges facing legitimate information providers.


Impact

This comment provided a unifying framework for understanding the challenges all panelists had described. It led to the moderator’s closing reflection on the ‘tension between balancing that limited resource of time’ and influenced the final discussion about strategic timing in responses.


We have seen, to our dismay, a weakening of these capacities and perhaps less resourcing from some companies. And I would extend that, for example, to content moderation in less common languages… Are you adequately providing content moderation in less common languages in these very volatile contexts where you don’t actually have a business argument?

Speaker

Gisella Lomax


Reason

This comment exposed a critical gap in platform responsibility – the tendency to under-resource content moderation in markets that aren’t profitable, even when those markets may be experiencing the most severe consequences of misinformation. It challenges the business model underlying content moderation.


Impact

This introduced a more critical perspective on platform responsibility that hadn’t been present earlier in the discussion, adding complexity to what had been a more collaborative tone between content creators and platforms.


It should be labor-intensive. I mean, dealing with these types of issues when we’re creating them, when we want to be accurate, should look like a light lift, but actually, in fact, be a pretty heavy one… you have to be consuming a lot of information. Sadly, that includes coming across the disinformation, if you’re going to understand it and to understand how it’s navigating the space.

Speaker

Eeva Moore


Reason

This comment reveals the hidden complexity behind seemingly simple social media content and acknowledges the psychological toll of constantly engaging with misinformation. It challenges the expectation that good content should be quick and easy to produce.


Impact

This deepened the discussion about resource allocation and the true cost of maintaining information integrity, supporting other panelists’ calls for more institutional support and partnership approaches.


Overall assessment

These key comments fundamentally elevated and reframed the discussion from a tactical conversation about social media best practices to a strategic dialogue about asymmetric information warfare and humanitarian responsibility. Dr. Ahmed’s insight about public intuition shifted the focus from platform mechanics to communication respect and accessibility. Gisella’s Myanmar example transformed the stakes from abstract ‘information integrity’ to concrete life-and-death consequences, while her critique of platform resource allocation introduced necessary tension about corporate responsibility. Eeva’s ‘time’ framework provided a unifying theory for understanding the structural disadvantages faced by legitimate information providers. Together, these comments created a progression from individual content creation strategies to systemic analysis of power imbalances, resource constraints, and humanitarian imperatives in the digital information ecosystem. The discussion evolved from ‘how to create good content’ to ‘how to address fundamental inequities in information warfare while serving vulnerable populations.’


Follow-up questions

How can digital platforms adequately provide content moderation in less common languages in volatile contexts where there’s no business argument?

Speaker

Gisella Lomax


Explanation

This addresses a critical gap in platform safety measures for vulnerable populations who communicate in languages that aren’t commercially viable for platforms but still need protection from harmful content


What are the research gaps in addressing information integrity challenges in humanitarian contexts?

Speaker

Gisella Lomax


Explanation

Academic research is needed to inform strategies and policies for combating misinformation and hate speech that directly harms refugee and displaced populations


How can the effectiveness of ‘pre-bunking’ or inoculation theory be measured in building community resilience against hate speech and misinformation?

Speaker

Gisella Lomax


Explanation

UNHCR is testing this approach in South Africa but more research is needed to understand its effectiveness and scalability across different contexts


What metrics should be prioritized when evaluating content quality beyond traditional engagement metrics like followers and likes?

Speaker

Dr. Ahmed Ezzat


Explanation

Understanding how to measure meaningful engagement and content impact is crucial for organizations trying to combat misinformation effectively


How can institutions maintain public trust in an era where people increasingly distrust official sources?

Speaker

Gisella Lomax


Explanation

This addresses the broader challenge of institutional credibility when individual voices are often more trusted than official organizations


What are the resource implications and funding challenges for organizations trying to combat misinformation?

Speaker

Bia Barbosa (audience member)


Explanation

The question was cut off in the transcript but relates to the resource constraints mentioned by multiple panelists in fighting well-funded misinformation campaigns


How can partnerships between humanitarian organizations, tech companies, and civil society be structured to effectively address information integrity challenges?

Speaker

Gisella Lomax


Explanation

Given the multifunctional nature of this work and limited resources, understanding effective partnership models is crucial for scaling solutions


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Open Forum #75 Shaping Global AI Governance Through Multistakeholder Action

Session at a glance

Summary

This discussion centered on the Freedom Online Coalition’s updated Joint Statement on Artificial Intelligence and Human Rights for 2025, presented during a session on shaping global AI governance through multi-stakeholder action. The panel featured representatives from Estonia, the Netherlands, Germany, Ghana, and Microsoft, highlighting the collaborative effort to address AI’s impact on human rights.


Ambassador Ernst Noorman from the Netherlands emphasized that human rights and security are interconnected, noting that when rights are eroded, societies become more unstable rather than safer. He stressed that AI risks are no longer theoretical, citing examples of AI being used to suppress dissent, distort public discourse, and facilitate gender-based violence. The Netherlands learned this lesson through their painful experience with biased automated welfare systems that deepened injustice for citizens.


The panelists identified surveillance, disinformation, and suppression of democratic participation as the most urgent human rights risks posed by AI, particularly when embedded in government structures without transparency or accountability. Dr. Erika Moret from Microsoft outlined the private sector’s responsibilities, emphasizing adherence to UN guiding principles on business and human rights, embedding human rights considerations from design through deployment, and ensuring fairness and inclusivity in AI systems.


Several binding frameworks were discussed as important next steps, including the EU AI Act, the Council of Europe’s AI Framework Convention, and the Global Digital Compact. The panelists emphasized the importance of multi-stakeholder collaboration, procurement policies that prioritize human rights-compliant AI systems, and the need for transparency and accountability in high-impact AI applications. The statement, endorsed by 21 countries at the time of the session, remains open for additional endorsements and represents a principled vision for human-centric AI governance grounded in international human rights law.


Keypoints

## Major Discussion Points:


– **Launch of the Freedom Online Coalition’s Joint Statement on AI and Human Rights 2025**: The primary focus was presenting an updated statement led by the Netherlands and Germany that establishes human rights as the foundation for AI governance, with 21 countries endorsing it and more expected to follow.


– **Urgent Human Rights Risks from AI**: Key concerns identified include arbitrary surveillance and monitoring by governments, use of AI for disinformation and suppression of democratic participation, bias and discrimination against marginalized groups (especially women and girls), and concentration of power in both state and private sector hands without adequate oversight.


– **Multi-stakeholder Responsibilities and Collaboration**: Discussion emphasized that addressing AI governance requires coordinated action from governments (through regulation and procurement policies), private sector (through human rights due diligence and ethical AI principles), and civil society, with Microsoft presenting their comprehensive approach as an example.


– **Binding International Frameworks and Next Steps**: Panelists highlighted several key governance mechanisms including the EU AI Act, Council of Europe’s AI Framework Convention, UN Global Digital Compact, and various UN processes, emphasizing the need for broader adoption and implementation of these frameworks.


– **Practical Implementation Challenges**: The discussion addressed real-world complexities of protecting human rights in repressive regimes, balancing innovation with rights protection, and the need for transparency while protecting privacy, with examples from the Netherlands’ own failures in automated welfare systems.


## Overall Purpose:


The discussion aimed to launch and promote the Freedom Online Coalition’s updated joint statement on AI and human rights, while building consensus among governments, private sector, and civil society on the urgent need for human rights-centered AI governance and identifying concrete next steps for implementation.


## Overall Tone:


The tone was consistently collaborative and constructive throughout, with speakers demonstrating shared commitment to human rights principles. There was a sense of urgency about AI risks but also optimism about multilateral cooperation. The discussion maintained a diplomatic yet determined quality, with participants acknowledging challenges while emphasizing collective action and practical solutions. The tone became slightly more interactive and engaging during the Q&A portion, with audience questions adding practical perspectives from different regions and sectors.


Speakers

– **Zach Lampell** – Senior Legal Advisor and Coordinator for Digital Rights at the International Center for Not-for-Profit Law; Co-chair of the Freedom Online Coalition’s Task Force on AI and Human Rights


– **Rasmus Lumi** – Director General for International Organizations and Human Rights with the Government of Estonia; Chair of the Freedom Online Coalition in 2025


– **Ernst Noorman** – Cyber Ambassador for the Netherlands


– **Maria Adebahr** – Cyber Ambassador of Germany; Co-chair of the Task Force on AI and Human Rights


– **Devine Salese Agbeti** – Director General of the Cyber Security Authority of Ghana


– **Erika Moret** – Director with UN and international organizations at Microsoft


– **Audience** – Multiple audience members who asked questions during the Q&A session


**Additional speakers:**


– **Svetlana Zenz** – Works on Asia region, focusing on bridging civil society and tech/telecoms in Asia


– **Carlos Vera** – From IGF Ecuador


Full session report

# Freedom Online Coalition’s Joint Statement on AI and Human Rights 2025: A Multi-Stakeholder Discussion on Global AI Governance


## Executive Summary


This session focused on the launch of the Freedom Online Coalition’s updated Joint Statement on Artificial Intelligence and Human Rights for 2025, an update to the coalition’s original 2020 statement. The panel brought together representatives from Estonia, the Netherlands, Germany, Ghana, and Microsoft to discuss the statement and broader challenges in AI governance. At the time of the session, the statement had been endorsed by 21 countries, with expectations for additional endorsements from both FOC and non-FOC members.


The discussion centered on how AI systems pose risks to fundamental human rights including freedom of expression, right to privacy, freedom of association, and freedom of assembly. Speakers addressed the need for human rights to be embedded in AI development from the design phase, the importance of multi-stakeholder collaboration, and the role of international frameworks in governing AI systems.


## Key Participants and Their Contributions


**Rasmus Lumi**, Director General for International Organizations and Human Rights with the Government of Estonia and Chair of the Freedom Online Coalition in 2025, opened the session with remarks about AI-generated notes, noting humorously that the AI “did say all the right things” regarding human rights, which led him to wonder about AI’s understanding of these concepts.


**Ernst Noorman**, Cyber Ambassador for the Netherlands and co-chair of the FOC Task Force on AI and Human Rights, shared his country’s experience with automated welfare systems, stating: “In the Netherlands, we have learned this the hard way. The use of strongly biased automated systems in welfare administration, designed to combat fraud, has led to one of our most painful domestic human rights failures.” He emphasized that “Innovation without trust is short-lived. Respect for rights is not a constraint, it’s a condition for sustainable, inclusive progress.”


**Maria Adebahr**, Cyber Ambassador of Germany and co-chair of the FOC Task Force on AI and Human Rights, highlighted transnational repression as a key concern, noting: “AI unfortunately is also a tool for transnational repression… in terms of digital and AI, we are reaching here new levels, unfortunately.” She announced that Germany had doubled its funding for the Freedom Online Coalition to support this work.


**Devine Salese Agbeti**, Director General of the Cyber Security Authority of Ghana, provided perspective on bidirectional AI misuse, observing: “I have seen how citizens have used AI to manipulate online content, to lie against government, to even create cryptocurrency pages in the name of the president, etc. So it works both ways.” He identified surveillance, disinformation, and suppression of democratic participation as key concerns.


**Dr. Erika Moret**, Director with UN and international organizations at Microsoft, outlined private sector responsibilities, emphasizing that companies must adhere to UN guiding principles on business and human rights and embed human rights considerations throughout the AI development lifecycle.


**Zach Lampell**, co-chair of the FOC Task Force on AI and Human Rights, facilitated the discussion and emphasized that the statement remains open for additional endorsements, representing a principled vision for human-centric AI governance.


## Key Themes and Areas of Discussion


### Human Rights as Foundation for AI Governance


All speakers agreed that human rights should serve as the foundation for AI governance rather than being treated as secondary considerations. The discussion emphasized the need to put humans at the center of AI development and ensure compliance with international human rights law.


### Multi-Stakeholder Collaboration


Participants stressed that effective AI governance requires collaboration between governments, civil society, private sector, academia, and affected communities. Speakers noted that no single sector can address AI challenges alone.


### International Frameworks and Standards


The discussion covered several key governance mechanisms:


– **EU AI Act**: Referenced as creating predictability and protecting citizens, with potential for global influence


– **Council of Europe AI Framework Convention**: Presented as providing a globally accessible binding framework


– **UN Global Digital Compact**: Noted as representing the first time all UN member states agreed on an AI governance path


– **Hamburg Declaration on Responsible AI for the SDGs**: Mentioned as another relevant framework


– **UNESCO AI Ethics Recommendation**: Referenced in the context of global standards


### Urgent Human Rights Risks


Speakers identified several critical areas where AI poses immediate threats:


– Arbitrary surveillance and monitoring by governments


– Use of AI for disinformation campaigns and suppression of democratic participation


– Transnational repression using AI tools


– Bias and discrimination against marginalized groups


– Concentration of power without adequate oversight


### Private Sector Responsibilities


Dr. Moret outlined comprehensive responsibilities for companies, including:


– Adhering to UN guiding principles on business and human rights


– Embedding human rights considerations from design through deployment


– Conducting ongoing human rights impact assessments


– Ensuring transparency and accountability in AI systems


– Engaging in multi-stakeholder collaboration


## Practical Implementation Approaches


### Government Procurement


Ernst Noorman highlighted procurement as a practical tool, stating: “We have to use procurement as a tool also to force companies to deliver products which are respecting human rights, which have human rights as a core in their design of their products.”


### Coalition Building


Speakers discussed using smaller coalitions to achieve broader global adoption, with Noorman referencing the “oil spill effect” where regional frameworks like the Budapest Convention eventually gain wider acceptance.


### Diplomatic Engagement


The discussion emphasized combining formal diplomatic engagement with informal discussions to promote human rights principles in AI governance globally.


## Audience Questions and Broader Participation


The Q&A session included questions from both in-person and online participants. Carlos Vera from IGF Ecuador asked about how civil society organizations and non-FOC members can provide comments and support for the declaration. Svetlana Zenz raised questions about how risks can be diminished when tech companies work with oppressive governments and what practical actions civil society can take.


Online participants contributed questions about improving transparency in AI decision-making while protecting sensitive data, and identifying global frameworks with binding obligations on states for responsible AI governance.


## The FOC Joint Statement: Content and Next Steps


The statement addresses threats to fundamental freedoms including freedom of expression, right to privacy, freedom of association, and freedom of assembly. It provides actionable recommendations for governments, civil society, and private sector actors, addressing commercial interests, environmental impact, and threats to fundamental freedoms.


Several concrete next steps emerged from the discussion:


– The Task Force on AI and Human Rights committed to meeting to discuss creating space for civil society comments and support


– FOC members agreed to continue diplomatic engagement with non-member governments


– Continued work on promoting the statement’s principles through both formal and informal channels


## Implementation Challenges


The discussion acknowledged several ongoing challenges:


– Balancing transparency in AI decision-making with protection of sensitive data and privacy rights


– Addressing both government and citizen misuse of AI systems


– Ensuring meaningful participation of Global South countries and marginalized communities


– Protecting human rights in AI systems operating under repressive regimes


## Conclusion


The session demonstrated broad agreement among diverse stakeholders on the need for human rights-based AI governance. The FOC Joint Statement on AI and Human Rights 2025 provides a framework for coordinated international action, with concrete recommendations for implementation across sectors. The discussion emphasized practical approaches including government procurement policies, multi-stakeholder engagement, and diplomatic outreach to promote these principles globally.


The collaborative approach demonstrated in the session, combined with specific commitments to follow-up actions and broader engagement, positions the Freedom Online Coalition’s initiative as a significant contribution to global AI governance discussions. The success of this approach will depend on sustained commitment to implementation and continued collaboration across sectors and borders.


Session transcript

Zach Lampell: Welcome to the session, Shaping Global AI Governance Through Multi-Stakeholder Action, where we are pleased to present the Freedom Online Coalition’s Joint Statement on Artificial Intelligence and Human Rights 2025. My name is Zach Lampell. I’m Senior Legal Advisor and Coordinator for Digital Rights at the International Center for Not-for-Profit Law. I’m also pleased to be a co-chair of the Freedom Online Coalition’s Task Force on AI and Human Rights with the Government of the Netherlands and the Government of Germany. I want to welcome Mr. Rasmus Lumi, Director General for International Organizations and Human Rights with the Government of Estonia and the Chair of the Freedom Online Coalition in 2025.


Rasmus Lumi: Thank you very much. Good morning, everybody. It is a great honor for me to be here today to welcome you all to this session on the issue of how we feel, or are, deceived by the extremely smart artificial intelligence. When I read through the notes that it offered me, it did say all the right things, which is totally understandable. The question is, did it do it on purpose, maybe maliciously, trying to deceive us into thinking that AI also believes in human rights? So we’ll have to take care of this. And this joint statement that we have developed under the leadership of the Netherlands is exactly one step toward doing this, putting humans at the center of AI development. So I would like to take this opportunity to very much thank the Netherlands for leading this discussion and this preparation in the Freedom Online Coalition, and I hope that, in the coalition and also elsewhere, this work will continue in order to make sure that humans and human rights will remain in the focus of all technological development. Thank you very much.


Zach Lampell: Thank you. I now turn the floor to Ambassador Ernst Noorman, the Cyber Ambassador for the Netherlands.


Ernst Noorman: Thank you very much, Zach, and thank you, Rasmus, for your words. While leaders at this moment gather in The Hague to discuss defence and security, we are here to address a different but equally urgent task, protecting human rights in the age of AI. These are not separate. Human rights and security are, or should be, two sides of the same coin. When rights are eroded, when civic space shrinks, when surveillance escapes oversight, when information is manipulated, societies don’t become safer. They become more unstable, more fragile. Since the original FOC statement on AI and human rights in 2020, a lot has happened. I only have to mention the introduction of ChatGPT, just referred to by Rasmus, in November 2022, and how different AI tools are evolving every single day. AI is shaping governance, policy, and daily life. Its benefits are real, but so are the risks. And those risks are no longer theoretical. We now see AI used to repress dissent, distort public discourse, and facilitate gender-based violence. In some countries, these practices are becoming embedded in state systems, with few checks and little or no transparency. At the same time, only a handful of private actors shape what we see. They influence democratic debate and dominate key markets, without meaningful oversight. This double concentration of power threatens both public trust and democratic resilience. That’s why the Netherlands, together with Germany and the International Center for Not-for-Profit Law, has led the update of the Freedom Online Coalition’s joint statement on artificial intelligence and human rights. I’m grateful to all of you, governments, civil society, private sector, experts, for the thoughtful contributions that shaped it. This updated statement, our joint response to the present reality of AI, sets out a principled and practical vision.
A vision for human-centric AI, governed with care, grounded in human rights, and shaped through inclusive multi-stakeholder processes. It recognizes that risks arise across the AI lifecycle, not only through misuse, but from design to deployment. The statement calls for clear obligations for both states and the private sector, and strong safeguards for those most at risk, especially women and girls. It calls for transparency and accountability in high-impact systems, for cultural and linguistic inclusion, and for attention to the environmental and geopolitical dimensions of AI. Some claim that raising these issues could hinder innovation. We disagree. Innovation without trust is short-lived. Respect for rights is not a constraint, it’s a condition for sustainable, inclusive progress. In the Netherlands, we have learned this the hard way. The use of strongly biased automated systems in welfare administration, designed to combat fraud, has led to one of our most painful domestic human rights failures. It showed how algorithms, if not designed and deployed correctly, can deepen injustice. And it has already taken years to try to correct the personal harm it caused. In response, we have strengthened our approach to avoid similar failures, by applying human rights impact assessments, by applying the readiness assessment methodology for AI and human rights with UNESCO, and by launching a national algorithm registry, with now more than a thousand algorithms registered. But no country can solve this alone. AI transcends borders. So must our response. As of today, right now, 21 countries have endorsed this joint statement, and we expect more in the days ahead. The text will be published after this session and will remain open for further endorsements, including from non-FOC countries. Let us not stand idly by while others define the rules. Let us lead, clearly, collectively, and with conviction. Human rights must not be an afterthought in AI governance.
They must be the foundation. Thank you very much.


Zach Lampell: Thank you, Ambassador Noorman. I think you’re absolutely right. Human rights need to be the foundation of AI governance, and that was precisely what TFAIR, the Task Force on AI and Human Rights within the FOC, wanted to do with this joint statement. We wanted to build on previous statements and also make sure that there is a strong foundation of governance principles, now with actionable, clear recommendations for governments, civil society, and the private sector. We’re really, really pleased with the joint statement and again want to thank the Netherlands and Germany for their co-leadership with ICNL, and all of the TFAIR members and Freedom Online Coalition members for their support and input to the statement. We have an amazing, excellent panel today. I want to briefly introduce them and then we’ll get into questions. So joining us virtually is Maria Adebahr, Cyber Ambassador of Germany, again co-chair of the Task Force on AI and Human Rights. To my right, Mr. Devine Salese Agbeti, Director General of the Cyber Security Authority of Ghana. And next to him is Dr. Erika Moret, Director with UN and international organizations at Microsoft. So the first question, and this is directed at Devine. What are, in your view, the most urgent human rights risks posed by AI that this statement addresses?


Devine Salese Agbeti: Thank you very much. Firstly, I would like to thank the government of the Netherlands and also the FOC support unit for extending an invitation to Ghana to participate in such an important conversation. In my view, the most urgent human rights risk posed by AI is the arbitrary use of AI for monitoring or surveillance, and also the use of AI for disinformation purposes and for the suppression of democratic participation, particularly when these are embedded in government structures and within law enforcement systems without any transparency and accountability. Those two are very important. When we look at these concerns, I think the broader fear is that when they are unchecked, and when they are governed by commercial interests, they erode fundamental freedoms or fundamental human rights, such as the freedom of speech and also privacy. And I think these are the key human rights concerns when it comes to artificial intelligence.


Zach Lampell: Great, thank you. And several of those are specifically addressed within the statement, including commercial interests, the environmental impact of artificial intelligence, as well as the direct threats to fundamental human rights, such as the freedom of expression, the right to privacy, freedom of association, freedom of assembly, and others. Ambassador Adebahr, thank you so much for joining us. What convinced you and the government of Germany to support this statement, and what do you hope it will achieve?


Maria Adebahr: Hey, hello everybody over there. I hope you can hear me very well. Thanks for having me today. It’s a wonderful occasion to present and introduce our joint statement on artificial intelligence and human rights together. Thank you for the opening remarks, and thank you, Ernst, my fellow TFAIR co-chair. I really would like to thank you all for coming and joining us in this open forum session. Having you all here is a really important sign of commitment to human rights and to broad multi-stakeholder participation these days. And that is about to become even more important, so let me explain why. Sometimes it helps to really go back and ask ourselves why we do this, and what led the government of Germany to support the statement. And this is the very essence: AI stands out as one of the most, if not the most, transformative innovations and technological challenges that we have to confront. It will, it already does, change the way we live, work, express and inform ourselves. And it will change the way we form our opinions and exercise our democratic rights. As Ernst already said, in times of global uncertainty it offers a lot of promise, but also a lot of risks to people and the planet. And that is why we as countries joining TFAIR and other fora have to answer the question of what kind of digital future we want to live in. And I have to say, a human-centered world with non-negotiable respect for human rights is the essence of what we have to strive for. And this is the essence that the statement gives us. Let me quote: it is a world firmly rooted in and in compliance with international law, including international human rights law, not shaped by authoritarian interests or solely by commercial priorities. And only with wise international governance can we harness the promise that technologies such as AI give to us and hold the harm at bay.
And therefore, it is essential to us to support the statement and its principles, because we must stand with a strong focus on human rights and a commitment to a human-centric, safe, secure and trustworthy approach to technology. And as we all know, this is not a given yet, not anywhere in the world. So we hope to convince countries, civil societies and stakeholders in every part of the world to strive for our approach. This is crucial and this is very much the essence of what we have to do. I’m also, as Ernst just said, very happy to have now 21 states on board. This is, I think, the majority, and this gives our position even more weight. And also, let me mention that women and girls, in all their diversity, belong to groups exposed to greater vulnerability by AI. This is a strong commitment that we really clearly wanted to see in there. So let me close, I’m looking forward to questions and answers, obviously, by saying that I’m also very happy to announce that Germany is able to double its funding for the Freedom Online Coalition compared to last year. We worked through our budget negotiations here in Germany and were able to double the amount. And this is also something that makes me and my colleagues very happy. Hopefully we will make good use of our support for the Freedom Online Coalition. And on that happy note, I hand back over to you.


Zach Lampell: Thank you. Thank you, Ambassador, and thank you for doubling the funding for the Freedom Online Coalition. I can speak as a member of the Freedom Online Coalition’s advisory network, and I can say that we all believe that the FOC is a true driving force and leading vehicle to promote and protect fundamental freedoms and digital rights. Without the FOC, the governance structures that we have today, we believe, would be much weaker. So we look forward to continuing to work with you, Ambassador, as well as the government of the Netherlands, as TFAIR, and with all of the 42 other member states of the Freedom Online Coalition, to continue our important work. We welcome your leadership, and as civil society, we look forward to working with you to achieve these aims for everyone. Dr. Moret, if I could turn to you. You mentioned commercial interests, and that is indeed something that is noted in the joint statement. What responsibility do private tech companies have to prevent AI from undermining human rights?


Erika Moret: Thank you very much. Well, thank you, first of all, to the government of the Netherlands, to the FOC, Excellencies, Ambassadors. It’s a real pleasure to be here today. I’m currently working at Microsoft, but in my past life I was an academic and worked at the UN, including on various issues relating to human rights and international humanitarian law, and it’s a real honor to be here. So as the tech industry representative on the panel, I’d like to address how private sector companies like Microsoft view their responsibilities in ensuring AI respects and protects human rights. We are very lucky at Microsoft to have a series of teams and experts working on these areas across the company that fall under the newly created Trusted Technology Group, and this includes the Office of Responsible AI, the Technology for Fundamental Rights Group, and the Privacy, Safety, and Regulatory Affairs Group. So I’ll try to represent some of our collective work here in this area. In brief, we recognize that we must be proactive and diligent at every step of AI use, from design to deployment and beyond, to prevent AI from being used to violate human rights. The first is adherence to international standards. Microsoft and many peers explicitly commit to the UN guiding principles on business and human rights as a baseline for their conduct across global operations. For us and other companies, this involves a policy commitment to respect human rights and an ongoing human rights due diligence process that enables us to identify and remedy related human rights harms. These principles make clear that while states must protect human rights, companies have an independent responsibility to respect fundamental rights. The next step is to embed human rights from the very beginning.
We have a responsibility to integrate human rights considerations into the design and development of AI systems, paying particular attention to areas already highlighted in relation to women and girls and other particularly vulnerable groups. Performing human rights assessments and monitoring enables us to identify risks and address them. The establishment and enforcement of ethical AI principles is a third very important area. At Microsoft, we have clearly defined responsible AI principles, encompassing fairness, reliability and safety, privacy and security, inclusiveness, transparency and accountability, which guide all of our AI development, and our teams must follow our responsible AI standard, a company-wide policy that translates these principles into specific requirements. Beyond the human rights due diligence that I’ve already mentioned, we also work on protecting privacy and data security. Safeguarding users’ data is a non-negotiable responsibility for our company and also for our peers. AI often involves big data, so companies must implement privacy by design, minimizing data collection, securing data storage and ensuring compliance with privacy laws. The sixth area I’d like to highlight here is the vital importance of fairness and inclusivity. Tech firms have the responsibility to ensure that their AI does not perpetuate bias and discrimination, and through working with partners at the FOC and across civil society, we can put in place active safeguards and ongoing work to tackle challenges in this area. Again, I’d like to highlight the important note in the FOC statement that AI harms are especially pronounced for marginalized groups. Ensuring transparency and explainability would be my seventh point of what we should be taking into consideration here, so that people understand how decisions are made and can identify potential challenges, but then also mitigation approaches.
The final area I’d like to emphasize here is the need for collaboration. We are of course facing a fragile moment in terms of multilateralism and geopolitical tensions around the world, and collaboration across borders and across sectors has never been more important. Engaging through multistakeholderism in AI governance and AI regulation developments is as much our responsibility as anyone else’s. I think the private sector can do more to work with civil society and with academia to improve its own work in these areas, and to contribute as well through things like red teaming and other types of reporting. This is a really important next step, I believe, in our collective work. Thank you very much.


Zach Lampell: Thank you, Dr. Moret. I think the holistic approach from Microsoft is a fantastic model that I hope other companies will adopt or model their systems and internal standards after, and I really do appreciate the final point on collaboration and next steps. With that, Ambassador Noorman, what are the most important next steps to ensure AI systems promote human rights and development while avoiding harm?


Ernst Noorman: Well, thank you. There are a number of things we can do as governments, as an international community, but first and foremost, I think, is good, strong regulation. We know, of course, there’s an international discussion on that, and there are different views across the ocean, but at the same time, regulation creates a level playing field. Predictability is what’s important. When companies know what the rules are, what the guardrails are, you protect the citizens in Europe and create trust, also in the products. The EU AI Act is a good example, and it puts rights, transparency, and accountability at the core. It’s risk-based, not blocking innovation, and I think that’s a crucial step. You see that a lot of countries outside the European Union look with great interest at the EU regulations and see how they can adapt them. At the same time, we have to continue to work at the multilateral level on the discussion of how to implement and organize oversight and to ensure a safe implementation of AI. We have the Council of Europe’s AI Framework Convention, which is the next step, just like we had before on cybercrime with the Budapest Convention. And I think it’s very important to really support the UN’s work on human rights, especially the Office of the High Commissioner for Human Rights. It’s important to keep supporting them. They have important initiatives, such as B-Tech, which also promote human rights in the private sector. So these are important steps we have to continue. And as Erika from Microsoft already mentioned, the UN Guiding Principles on Business and Human Rights are an important tool for the private sector as well. And finally, as an important step too, we as governments play an important role with procurement.
So we have to use procurement as a tool also to force companies to deliver products which are respecting human rights, which have human rights as a core in their design of their products, and ensure that they provide safe products to the governments which are used broadly in the society. And I think that’s a very concrete step we can use as governments. Thank you very much.


Zach Lampell: Thank you, Ambassador. I think procurement is a really great final point to mention. Ambassador Adebar, same question to you. What are the most important next steps that we can take to ensure AI systems promote human rights and development while avoiding harm?


Maria Adebahr: Thank you. Yeah, thank you for the question. Ernst just answered it, I think, so we can fully agree with what he said. It is about moving forward on a lot of fronts and in different fora. That starts with the EU AI Act, its implementation, and the promotion of its principles. We know certain aspects are being discussed, especially by industry and worldwide, and this is totally okay and rightfully so. But we can start and have a discussion on things like risk management and so on and so forth. What the EU AI Act really sparks is a discussion on how we want to manage and govern ourselves with AI, with a good, human-centric AI accessible for everybody. So this is one part. The UNESCO recommendations of 2021 are another one for us as EU member states, and we are also looking forward to the results of the Third Global Forum on the Ethics of AI, taking place in Bangkok at this very moment. And we would invite all FOC members and interested actors to join the Hamburg Declaration on Responsible AI for the SDGs, which was recently launched. And in implementation of the Global Digital Compact, we are currently discussing the modalities resolution and modalities for implementing working groups, a panel, and a worldwide dialogue on AI. This is very important to us. And one final point: AI, unfortunately, is also a tool for transnational repression. And transnational repression is a phenomenon, let me put it like that, that we as the German government really want to focus more on. It is not a new phenomenon; it is very, very old. But in terms of digital and AI, we are unfortunately reaching new levels here. And so this is also a subject we want to discuss more and bring forward on the international human rights agenda. Thank you.


Zach Lampell: Thank you, Ambassador. And Director Agbeti, same question to you. What are the most important next steps to ensure AI systems promote human rights and development?


Devine Salese Agbeti: Thank you. Firstly, we have to align AI with international human rights standards. For example, the Cyber Security Authority is currently working with the Data Protection Commission in Ghana to explore redress mechanisms. Secondly, we can look at the UN Guiding Principles on Business and Human Rights; together with the ICCPR, they can underpin national, regional and international efforts on global AI governance. And we can also look at multi-stakeholder participation, which is foundational to this. The ecosystem must include civil society to amplify marginalized voices, technical communities to bring transparency into algorithms, academia to provide evidence-based insights, the private sector to ensure responsible innovation, as well as youth and indigenous voices to reflect the world's diversity. I think these are the ways to do so.


Zach Lampell: Fantastic. Thank you so much, and thank you again for very comprehensive ideas from both ambassadors and Director Agbeti. Now we would like to open up the floor to all of you. We have some time for questions and answers. We have a microphone over to my left, your right. We welcome questions on the joint statement and how best to promote human rights in artificial intelligence.


Audience: Hello. It works. All right. Sorry, it's a lot to handle, all these devices. Hello, everyone. Thank you so much for such a great panel. And I'm super happy that the Freedom Online Coalition, together with the private sector, is coming up with some, not all, I mean, I would not call them solutions, but at least some recommendations. My name is Svetlana Zenz, and I work on the Asia region. Basically, my work is to build the bridge between civil society and tech and telecoms in Asia. The B-Tech project, which was mentioned today, is also a great project and was one of the starting points of many other initiatives in the sector. I really hope that all these kinds of initiatives will come together. But my question is actually to the FOC members' representatives and to the private sector. For a start, we know that Microsoft is known as a company which works closely with many governments, because you have products which provide the operability of those governments. And in some countries, especially those with oppressive laws and oppressive regimes, it's hard to make sure that the human rights of users are protected. So do you have any vision of how to diminish those risks? Maybe there should be more civil society action on that side, being more practical on that side. On the FOC side, we've been working with FOC members for several years, starting from Myanmar, where we were among the first to engage FOC members in the statement on internet shutdowns. For the FOC: of course, a statement is great, but in a practical way, how can we make sure that human rights are protected in the physical world? Thank you.


Zach Lampell: Thank you so much, I think we’ll take one more question and then we’ll answer, and then if there’s time and additional questions, we’ll go back. So please, sir.


Audience: Thank you, good day. My name is Carlos Vera from IGF Ecuador. I read the declaration on the FOC website. It would be nice if we could have some space to comment on the declaration outside the FOC members, and even to sign our support for that declaration. Some governments don't even know that the FOC exists, so maybe we can also create some awareness in the civil society space. Thank you very much for the great work. Thank you.


Zach Lampell: Thank you so much. I think that's a great suggestion, and we actually have a meeting with the Task Force on AI and Human Rights tomorrow. I will raise this very point and see what we can do to achieve wider adoption, especially support from civil society. So back to the first question: it's really a question of how we, whether FOC governments, the private sector, and potentially civil society, can protect human rights regarding AI systems, especially under repressive regimes. I hope I got that question correct. Okay, thank you. So, anybody who would like to start us off?


Ernst Noorman: Maybe just to kick off, but I'm sure that others will also want to contribute. This is an online and offline dilemma; it's not specific to one or the other. The fact that there are tools, of course, also from big tech companies like Microsoft, gives them a responsibility to look at the guardrails of their tools. But at the same time, platforms are being used for human rights violations and to threaten people, and this happens offline as well. The point is, as the FOC, wherever I go as a cyber ambassador, and I'm sure that Maria does the same, we discuss the role of the FOC. So also with governments who are not an obvious member of the FOC, we always explain what the agenda of the FOC is. Why is the FOC there? What are we doing? And why is the topic on the agenda important? You mentioned internet shutdowns. I can assure you I've discussed that with many countries who use this tool politically. And inside those rooms, you can have a more open discussion than if you do it only publicly with statements, et cetera. So it's also important to raise it inside rooms, in closed-door sessions: why are you using these tools? Can't you avoid this? Because you also harm a lot of civil services, and you harm the role of journalists, who are crucial. So that's one of the topics, just as an example, which I often mention, while also bringing up why the FOC is there. And the FOC has existed since 2011, so it's known by many governments. But for new colleagues or new people working in governments, we always stress its important role and explain why it's there, and why it's as important to respect human rights online as it is offline.


Zach Lampell: Thank you, Ambassador. Ambassador Adebahr, do you want to come in as well?


Maria Adebahr: Thank you so much. I can only underline what Ernst just said. It's important, and we do our work spreading the word and having those discussions formally, but also informally in a four-eyes setting, politely and diplomatically sometimes, and other times really straightforwardly and forcefully. That is the way to go. And the FOC is a very, very important forum for doing this, and a reference point for all of us, I think. Thank you.


Zach Lampell: Thank you, Ambassador. And maybe Dr. Moret, if I, oh, sorry. Director Agbeti, please.


Devine Salese Agbeti: All right, thank you. As much as we are advocating for FOC members to ensure human rights online, especially when it comes to AI, I think the FOC should also look at the other side. Working at the Cyber Security Authority, I have seen how citizens have used AI to manipulate online content, to lie against the government, to even create cryptocurrency pages in the name of the president, et cetera. So it works both ways. I think the FOC should be advocating for responsible use by citizens, and at the same time, the FOC can engage with governments to ensure that citizens actually have the right to use these systems, and use them freely, without the fear of arrest or intimidation. Thank you.


Zach Lampell: Thank you. Dr. Moret.


Erika Moret: Thank you. It's quite hard to build on these excellent points, so just a few additions to what has already been said. I would say that from the private sector perspective, and from Microsoft's viewpoint, we take a principled and proactive approach to this particular question, including due diligence before entering high-risk markets, guided by the UN Guiding Principles. We limit or decline services like facial recognition where misuse is likely. We also resist over-broad government data requests and publish transparency reports to hold ourselves accountable, and we offer tools like AccountGuard to protect civil society, journalists, and human rights defenders from cyber threats. And we advocate globally for responsible AI and digital use, including through very important processes like the Global Digital Compact and, of course, in fora such as the IGF and WSIS. I'd also really like to highlight the important developments in AI and data-driven tools to protect against human rights abuses under authoritarian regimes; many tech companies are working proactively with the human rights community on this. We are very actively engaged with the Office of the High Commissioner for Human Rights across numerous projects on monitoring human rights abuses and helping to detect risks, and also in areas like capacity building and AI training, in order to properly harness these tools and come up with new solutions where needs are identified.


Zach Lampell: Thank you. Great answers to what is a very tough question and a really never-ending battle to prevent abuse or misuse of AI. We have a couple of questions from our colleagues joining us online. The first is: are there any global frameworks with binding obligations on states for the responsible use and governance of AI? And the second: how can transparency in AI decision-making be improved without exposing sensitive data, particularly to ensure that the right to privacy under international human rights law is indeed protected? Would any of the panellists like to jump in on either of those two questions? We have about five minutes left. Please, Dr. Moret.


Erika Moret: Okay, well, maybe I'll kick it off by talking about the importance of the Global Digital Compact that I already mentioned; I'm sure everybody in the room knows about it already. It was adopted at the Summit of the Future at the last UN General Assembly, the first time every UN member state in the world came together to agree on a path for AI governance, and I think it's incredibly important. There are two new bodies being developed right now, the Dialogue and the Panel, and Microsoft has been engaged at every step of the process, sitting at the table, and we've been very grateful to have a voice there. And I think involving the private sector, but also, of course, civil society, particularly those without the usual access to these types of processes, is incredibly vital; not just to have a seat at the table, but to actually have a voice at the table. So the more we can find inclusive, fair, transparent, participatory ways for those, particularly in the global majority, to engage meaningfully in these very important developments through the multi-stakeholder model, the better, in my view. Thanks.


Zach Lampell: Thank you. Anybody else? Ambassador?


Ernst Noorman: I would first give the floor to Maria before I jump in.


Zach Lampell: Apologies.


Maria Adebahr: Oh, that's kind, Ernst. I would use this opportunity to again draw your attention to the Council of Europe Framework Convention on AI and Human Rights, because this is really something that was hard-negotiated. It's global, and we are striving for more member states to join. And it's open for all; it's a globally open convention. You don't have to be a member of the Council of Europe. So please have a look, or tell your state representatives to have a look and join. You can always approach, I think, any EU member state for more information. The second internationally binding, or hopefully really-to-be-implemented, instrument is the Global Digital Compact, already mentioned. And it is important, I think, because by our head count we came to the conclusion that this is really the only truly global forum to discuss AI. If we wouldn't do it there, then more than 100 states worldwide would not be present at any important table to discuss AI governance, because those states are probably not members of the G7, G20, UNESCO, or OSCE, or not able, in terms of resources, to join those discussions in depth. So this makes the Global Digital Compact even more important. The EU AI Act, by its nature, has aspects of AI governance and principles inherent in it. So that would be the third framework I would like to mention here. Thank you.


Ernst Noorman: And now I can just add a few points to what Maria said. What the EU AI Act can have is what we call the Brussels effect. Maria already mentioned the Council of Europe's AI Framework Convention. If we were to strive for a binding framework within the whole UN, it would be very difficult. But look at these smaller coalitions: I mentioned the Budapest Convention on Cybercrime. What we have seen, just during the negotiations on the UN Cybercrime Treaty, is that more and more members from other regions decided to join the Budapest Convention. From the Pacific, from Africa, from other regions, they decided: well, we want to be part of the Budapest Convention, which is very effective, very concrete cooperation on this topic. So I think that's also a good example of how we can work in smaller coalitions, with an oil-spill effect to win over the world, and with good, strong legislation as an example for other countries.


Devine Salese Agbeti: Thank you. Excellent points made by everyone here so far. I would just like to add the Palma process, which promotes the responsible use of emerging technologies, including artificial intelligence, and requires implementation by its members. So we encourage other states to sign up to it as well, and member states to implement it, so that we can all encourage responsible use of these technologies. Thank you.


Zach Lampell: Thank you. There are some very important, very significant binding governance mechanisms, like the Council of Europe's Convention on AI, which really can follow the path of the Budapest Convention, now the leading authority for combating and preventing cybercrime. So this is a bit of a call to action to civil society: let us, the FOC member states and the FOC advisory network, help you inform your governments about these processes, and let us help you advocate for them to adopt, sign, and enact them. It's been a fantastic panel so far. Ambassador Noorman, please, some closing remarks from you.


Ernst Noorman: Thank you very much, Zach, for moderating this panel. First of all, the statement is online right now, so go to the Freedom Online Coalition website, copy the statement, put it on social media, and spread the word. That's already important. And I would really like to thank everyone who has been involved in drafting the statement, both the members of the Freedom Online Coalition and the advisory network, who played an extremely important and meaningful role in strengthening it. We had in-person meetings, one of the first at RightsCon in February this year, and we had a number of online consultations, so a lot of work has been put into drafting and strengthening the statement. I would also really like to thank the countries who have already decided to sign on to the statement, and I'm confident that many more will follow in the days and weeks to come. And finally, on behalf, I think, of Maria and her team, Zach and your team, and my team from the Netherlands, I would really like to thank all of you: first of all for being present, and those who were involved in drafting the statement, for your dedication, your work, and your shared purpose on this important topic. Thank you very much. Thank you. Thank you, everyone.



Rasmus Lumi

Speech speed

126 words per minute

Speech length

187 words

Speech time

88 seconds

Need for human-centric AI development that puts humans at the center of technological advancement

Explanation

Lumi argues that AI development must prioritize human welfare and rights rather than being driven purely by technological capabilities. He emphasizes the importance of ensuring humans remain central to AI development processes and outcomes.


Evidence

References the joint statement developed under Netherlands leadership and mentions concerns about AI potentially deceiving humans into believing it upholds human rights


Major discussion point

AI Governance and Human Rights Framework


Topics

Human rights | Development


Agreed with

– Ernst Noorman
– Maria Adebahr
– Devine Salese Agbeti

Agreed on

Human rights must be foundational to AI governance



Ernst Noorman

Speech speed

137 words per minute

Speech length

1722 words

Speech time

753 seconds

Human rights must be the foundation of AI governance, not an afterthought

Explanation

Noorman contends that human rights considerations should be built into the fundamental structure of AI governance systems from the beginning. He argues against treating human rights as secondary concerns that are addressed only after AI systems are developed and deployed.


Evidence

Cites Netherlands’ experience with biased automated welfare systems that led to human rights failures, requiring years to correct personal harm caused


Major discussion point

AI Governance and Human Rights Framework


Topics

Human rights | Legal and regulatory


Agreed with

– Rasmus Lumi
– Maria Adebahr
– Devine Salese Agbeti

Agreed on

Human rights must be foundational to AI governance


AI governance requires principled vision grounded in human rights and shaped through inclusive multi-stakeholder processes

Explanation

Noorman advocates for AI governance that is based on clear principles rooted in human rights law and developed through processes that include all relevant stakeholders. He emphasizes the need for inclusive participation in shaping AI governance frameworks.


Evidence

Points to the FOC joint statement as example of principled approach involving governments, civil society, private sector, and experts


Major discussion point

AI Governance and Human Rights Framework


Topics

Human rights | Legal and regulatory


Agreed with

– Devine Salese Agbeti
– Erika Moret

Agreed on

Multi-stakeholder approach is essential for AI governance


AI is used to repress dissent, distort public discourse, and facilitate gender-based violence

Explanation

Noorman identifies specific ways AI systems are being misused to harm democratic processes and individual rights. He highlights how AI tools are being weaponized against vulnerable populations and democratic institutions.


Evidence

Notes that these practices are becoming embedded in state systems with few checks and less transparency


Major discussion point

Urgent AI Human Rights Risks


Topics

Human rights | Cybersecurity


Agreed with

– Maria Adebahr
– Devine Salese Agbeti

Agreed on

AI poses urgent risks to democratic processes and human rights


Concentration of power in few private actors threatens democratic resilience and public trust

Explanation

Noorman warns that having AI development and deployment controlled by a small number of private companies creates risks for democratic governance. He argues this concentration undermines both public confidence and democratic stability.


Evidence

Points to how handful of private actors shape what people see, influence democratic debate, and dominate key markets without meaningful oversight


Major discussion point

Urgent AI Human Rights Risks


Topics

Human rights | Economic


Strong regulation like EU AI Act creates level playing field and predictability while protecting citizens

Explanation

Noorman argues that comprehensive regulation provides clear rules for companies while protecting citizens and building trust in AI systems. He contends that good regulation enables rather than hinders innovation by providing certainty.


Evidence

Cites EU AI Act as example that puts rights, transparency, and accountability at core while being risk-based and not blocking innovation


Major discussion point

Implementation and Next Steps


Topics

Legal and regulatory | Human rights


Agreed with

– Maria Adebahr
– Devine Salese Agbeti
– Erika Moret

Agreed on

International frameworks and standards are crucial for AI governance


Government procurement should be used as tool to force companies to deliver human rights-respecting products

Explanation

Noorman proposes that governments can leverage their purchasing power to incentivize companies to develop AI systems that respect human rights. He sees procurement as a concrete mechanism for promoting responsible AI development.


Evidence

Suggests governments should use procurement to ensure companies provide safe products that have human rights as core design principle


Major discussion point

Implementation and Next Steps


Topics

Legal and regulatory | Economic


FOC members should engage governments through formal and informal diplomatic discussions

Explanation

Noorman advocates for direct diplomatic engagement with both FOC and non-FOC countries to promote human rights in AI governance. He emphasizes the importance of both public statements and private diplomatic conversations.


Evidence

Mentions discussing internet shutdowns with countries that use this tool politically, explaining FOC’s role and importance of respecting human rights online


Major discussion point

Implementation and Next Steps


Topics

Human rights | Legal and regulatory


21 countries have endorsed the joint statement with more expected

Explanation

Noorman reports on the current level of support for the FOC joint statement on AI and human rights, indicating growing international consensus. He expresses confidence that additional countries will join the initiative.


Evidence

States that 21 countries have endorsed as of the session date, with text to be published and remain open for further endorsements including from non-FOC countries


Major discussion point

FOC Joint Statement Impact


Topics

Human rights | Legal and regulatory


EU AI Act can have Brussels effect influencing global standards

Explanation

Noorman suggests that the EU’s comprehensive AI regulation can influence global AI governance standards beyond Europe’s borders. He draws parallels to how EU regulations often become de facto global standards.


Evidence

Notes that many countries outside the European Union look with great interest to EU regulations and see how they can adapt them


Major discussion point

Global Frameworks and Binding Obligations


Topics

Legal and regulatory | Human rights


Smaller coalitions like Budapest Convention can achieve oil spill effect for broader adoption

Explanation

Noorman argues that focused coalitions of committed countries can create momentum that eventually leads to broader global adoption of standards. He uses the cybercrime convention as a model for how this approach can work.


Evidence

Cites Budapest Convention on Cybercrime where during UN Cybercrime Treaty negotiations, more members from Pacific, Africa and other regions decided to join the effective convention


Major discussion point

Global Frameworks and Binding Obligations


Topics

Legal and regulatory | Cybersecurity



Maria Adebahr

Speech speed

131 words per minute

Speech length

1242 words

Speech time

566 seconds

Germany supports human-centered world with non-negotiable respect for human rights over authoritarian or commercial interests

Explanation

Adebahr articulates Germany’s position that AI governance must prioritize human rights above both authoritarian control and purely commercial considerations. She emphasizes that respect for human rights should be non-negotiable in AI development.


Evidence

Quotes the statement describing a world ‘firmly rooted in and in compliance with international law, including international human rights law, not shaped by authoritarian interests or solely by commercial priorities’


Major discussion point

AI Governance and Human Rights Framework


Topics

Human rights | Legal and regulatory


Agreed with

– Rasmus Lumi
– Ernst Noorman
– Devine Salese Agbeti

Agreed on

Human rights must be foundational to AI governance


AI can be a tool for transnational repression reaching new levels through digital means

Explanation

Adebahr warns that AI technologies are being used to extend repressive practices across borders in unprecedented ways. She identifies transnational repression as an emerging threat that requires international attention and response.


Evidence

Notes that while transnational repression is not new, digital and AI tools are enabling it to reach new levels


Major discussion point

Urgent AI Human Rights Risks


Topics

Human rights | Cybersecurity


Agreed with

– Ernst Noorman
– Devine Salese Agbeti

Agreed on

AI poses urgent risks to democratic processes and human rights


Need to promote EU AI Act principles and UNESCO recommendations globally

Explanation

Adebahr advocates for spreading European and international AI governance standards to other regions and countries. She sees these frameworks as models that should be adopted more widely to ensure consistent human rights protections.


Evidence

References EU AI Act implementation, UNESCO recommendations of 2021, Hamburg Declaration on Responsible AI for SDGs, and Global Digital Compact implementation


Major discussion point

Implementation and Next Steps


Topics

Legal and regulatory | Human rights


Agreed with

– Ernst Noorman
– Devine Salese Agbeti
– Erika Moret

Agreed on

International frameworks and standards are crucial for AI governance


Germany doubled funding for Freedom Online Coalition to support this important work

Explanation

Adebahr announces Germany’s increased financial commitment to the FOC, demonstrating concrete support for digital rights advocacy. This represents a significant increase in resources for promoting human rights in digital spaces.


Evidence

States Germany was able to double funding amount compared to previous year through budget negotiations


Major discussion point

FOC Joint Statement Impact


Topics

Human rights | Development


Council of Europe Framework Convention on AI provides globally open binding framework

Explanation

Adebahr promotes the Council of Europe’s AI convention as an important binding international agreement that is open to countries beyond Europe. She emphasizes its global accessibility and legal force.


Evidence

Notes the convention is globally open and countries don’t have to be Council of Europe members to join, encouraging EU member states to provide information to interested countries


Major discussion point

Global Frameworks and Binding Obligations


Topics

Legal and regulatory | Human rights



Devine Salese Agbeti

Speech speed

100 words per minute

Speech length

512 words

Speech time

304 seconds

Most urgent risks are arbitrary AI surveillance and use for disinformation to suppress democratic participation

Explanation

Agbeti identifies surveillance and disinformation as the most critical threats posed by AI to human rights and democratic processes. He emphasizes particular concern when these tools are embedded in government structures without transparency or accountability.


Evidence

Specifically mentions concerns about AI embedded in government structure and law enforcement systems without transparency and accountability, and fears about erosion of fundamental freedoms like speech and privacy


Major discussion point

Urgent AI Human Rights Risks


Topics

Human rights | Cybersecurity


Agreed with

– Ernst Noorman
– Maria Adebahr

Agreed on

AI poses urgent risks to democratic processes and human rights


AI systems must align with international human rights standards including ICCPR and UN guiding principles

Explanation

Agbeti argues for grounding AI governance in established international human rights law and frameworks. He sees existing international legal instruments as the foundation for AI governance approaches.


Evidence

References UN guiding principles on business and human rights and ICCPR as frameworks that can underpin national, regional and international AI governance efforts


Major discussion point

AI Governance and Human Rights Framework


Topics

Human rights | Legal and regulatory


Agreed with

– Ernst Noorman
– Maria Adebahr
– Erika Moret

Agreed on

International frameworks and standards are crucial for AI governance


Multi-stakeholder participation must include civil society, technical communities, academia, private sector and marginalized voices

Explanation

Agbeti advocates for inclusive governance processes that bring together diverse perspectives and expertise. He emphasizes the importance of including marginalized communities and various professional sectors in AI governance discussions.


Evidence

Specifically mentions need for civil society to amplify marginal voices, technical communities for algorithm transparency, academia for evidence-based insights, private sector for responsible innovation, and youth and indigenous voices for diversity


Major discussion point

Implementation and Next Steps


Topics

Human rights | Sociocultural


Agreed with

– Ernst Noorman
– Erika Moret

Agreed on

Multi-stakeholder approach is essential for AI governance


Citizens misuse AI to manipulate content and create fraudulent materials, requiring responsible use advocacy

Explanation

Agbeti points out that AI misuse is not limited to governments and corporations, but also includes individual citizens creating harmful content. He argues for education and advocacy around responsible AI use by all users.


Evidence

Cites examples from Ghana where citizens used AI to manipulate online content, lie against government, and create cryptocurrency pages in the president’s name


Major discussion point

Urgent AI Human Rights Risks


Topics

Human rights | Cybersecurity


Palma process promotes responsible use of emerging technologies including AI

Explanation

Agbeti promotes the Palma process as an important framework for encouraging responsible development and deployment of AI and other emerging technologies. He encourages broader participation in this initiative.


Evidence

Notes that the process requires member implementation and encourages other states to sign up


Major discussion point

Global Frameworks and Binding Obligations


Topics

Legal and regulatory | Human rights



Erika Moret

Speech speed

142 words per minute

Speech length

1165 words

Speech time

489 seconds

Companies must be proactive at every step from design to deployment to prevent human rights violations

Explanation

Moret argues that private sector companies have a responsibility to actively prevent AI systems from being used to violate human rights throughout the entire AI lifecycle. She emphasizes that this requires continuous vigilance and proactive measures rather than reactive responses.


Evidence

References Microsoft’s Trusted Technology Group including Office of Responsible AI, Technology for Fundamental Rights Group, and Privacy, Safety, and Regulatory Affairs Group


Major discussion point

Private Sector Responsibilities


Topics

Human rights | Economic


Tech firms should adhere to UN guiding principles on business and human rights as baseline conduct

Explanation

Moret advocates for using established international frameworks as the minimum standard for corporate behavior in AI development. She argues that companies should explicitly commit to these principles as foundational to their operations.


Evidence

Notes Microsoft and many peers explicitly commit to UN guiding principles, involving policy commitment to respect human rights and ongoing due diligence processes


Major discussion point

Private Sector Responsibilities


Topics

Human rights | Economic


Companies need to embed human rights considerations from the beginning and perform ongoing assessments

Explanation

Moret argues for integrating human rights analysis into the fundamental design and development processes of AI systems rather than treating it as an add-on. She emphasizes the need for continuous monitoring and assessment throughout the AI lifecycle.


Evidence

Mentions performing human rights assessments and monitoring to identify risks, with particular attention to vulnerable groups like women and girls


Major discussion point

Private Sector Responsibilities


Topics

Human rights | Economic


Private sector must ensure fairness, transparency, accountability and protect privacy through design

Explanation

Moret outlines specific technical and procedural requirements that companies should implement to protect human rights. She argues for building these protections into the fundamental architecture of AI systems rather than adding them later.


Evidence

References Microsoft’s responsible AI principles including fairness, reliability, safety, privacy, security, inclusiveness, transparency and accountability, plus company-wide responsible AI standards


Major discussion point

Private Sector Responsibilities


Topics

Human rights | Privacy and data protection


Collaboration across sectors through multistakeholder engagement is essential responsibility

Explanation

Moret argues that private companies have a duty to engage with other sectors including government, civil society, and academia to improve AI governance. She sees this collaboration as crucial for addressing the complex challenges posed by AI systems.


Evidence

Mentions importance of private sector working with civil society and academia, contributing through red teaming and reporting, especially given fragile multilateralism and geopolitical tensions


Major discussion point

Private Sector Responsibilities


Topics

Human rights | Economic


Agreed with

– Ernst Noorman
– Devine Salese Agbeti

Agreed on

Multi-stakeholder approach is essential for AI governance


Global Digital Compact represents first time all UN member states agreed on AI governance path

Explanation

Moret highlights the historic significance of the Global Digital Compact as the first universal agreement on AI governance among all UN member states. She emphasizes its importance for creating inclusive global AI governance.


Evidence

Notes it was launched at UN General Assembly with new Dialogue and Panel bodies being developed, and Microsoft’s engagement throughout the process


Major discussion point

Global Frameworks and Binding Obligations


Topics

Legal and regulatory | Human rights


Agreed with

– Ernst Noorman
– Maria Adebahr
– Devine Salese Agbeti

Agreed on

International frameworks and standards are crucial for AI governance



Zach Lampell

Speech speed

136 words per minute

Speech length

1194 words

Speech time

523 seconds

Statement provides actionable recommendations for governments, civil society, and private sector

Explanation

Lampell emphasizes that the FOC joint statement goes beyond general principles to provide specific, implementable guidance for different stakeholder groups. He highlights the practical nature of the recommendations as a key strength of the document.


Evidence

Notes the statement builds on previous statements and provides clear recommendations with strong foundation for governance principles


Major discussion point

FOC Joint Statement Impact


Topics

Human rights | Legal and regulatory


Statement addresses commercial interests, environmental impact, and threats to fundamental freedoms

Explanation

Lampell outlines the comprehensive scope of issues covered in the joint statement, showing how it addresses multiple dimensions of AI’s impact on society. He emphasizes that the statement takes a holistic approach to AI governance challenges.


Evidence

Specifically mentions threats to freedom of expression, right to privacy, freedom of association, and freedom of assembly


Major discussion point

FOC Joint Statement Impact


Topics

Human rights | Development



Audience

Speech speed

126 words per minute

Speech length

384 words

Speech time

182 seconds

Need for wider adoption and civil society support beyond FOC members

Explanation

An audience member suggests that the FOC joint statement should be opened for broader endorsement and support, including from civil society organizations and non-FOC countries. They argue for creating mechanisms to allow wider participation in supporting the statement’s principles.


Evidence

Notes that some governments don’t know FOC exists and suggests creating awareness in civil society space, plus allowing comments and signatures of support for the declaration


Major discussion point

FOC Joint Statement Impact


Topics

Human rights | Sociocultural


Agreements

Agreement points

Human rights must be foundational to AI governance

Speakers

– Rasmus Lumi
– Ernst Noorman
– Maria Adebahr
– Devine Salese Agbeti

Arguments

Need for human-centric AI development that puts humans at the center of technological advancement


Human rights must be the foundation of AI governance, not an afterthought


Germany supports human-centered world with non-negotiable respect for human rights over authoritarian or commercial interests


AI systems must align with international human rights standards including ICCPR and UN guiding principles


Summary

All government representatives agree that human rights should be the central organizing principle for AI governance, not treated as secondary considerations. They emphasize putting humans at the center of AI development and ensuring compliance with international human rights law.


Topics

Human rights | Legal and regulatory


Multi-stakeholder approach is essential for AI governance

Speakers

– Ernst Noorman
– Devine Salese Agbeti
– Erika Moret

Arguments

AI governance requires principled vision grounded in human rights and shaped through inclusive multi-stakeholder processes


Multi-stakeholder participation must include civil society, technical communities, academia, private sector and marginalized voices


Collaboration across sectors through multistakeholder engagement is essential responsibility


Summary

Speakers agree that effective AI governance requires inclusive participation from governments, civil society, private sector, academia, and marginalized communities. They emphasize that no single sector can address AI challenges alone.


Topics

Human rights | Sociocultural


International frameworks and standards are crucial for AI governance

Speakers

– Ernst Noorman
– Maria Adebahr
– Devine Salese Agbeti
– Erika Moret

Arguments

Strong regulation like EU AI Act creates level playing field and predictability while protecting citizens


Need to promote EU AI Act principles and UNESCO recommendations globally


AI systems must align with international human rights standards including ICCPR and UN guiding principles


Global Digital Compact represents first time all UN member states agreed on AI governance path


Summary

All speakers support the development and implementation of international frameworks for AI governance, including the EU AI Act, Council of Europe Convention, UN frameworks, and other binding international agreements.


Topics

Legal and regulatory | Human rights


AI poses urgent risks to democratic processes and human rights

Speakers

– Ernst Noorman
– Maria Adebahr
– Devine Salese Agbeti

Arguments

AI is used to repress dissent, distort public discourse, and facilitate gender-based violence


AI can be a tool for transnational repression reaching new levels through digital means


Most urgent risks are arbitrary AI surveillance and use for disinformation to suppress democratic participation


Summary

Government representatives agree that AI systems are being actively misused to undermine democratic institutions, suppress dissent, and violate human rights, particularly through surveillance and disinformation campaigns.


Topics

Human rights | Cybersecurity


Similar viewpoints

Both European representatives see EU AI regulation as a model that should influence global AI governance standards and be promoted internationally.

Speakers

– Ernst Noorman
– Maria Adebahr

Arguments

EU AI Act can have Brussels effect influencing global standards


Need to promote EU AI Act principles and UNESCO recommendations globally


Topics

Legal and regulatory | Human rights


Both emphasize the importance of embedding human rights considerations into AI systems from the design phase and using economic incentives to promote responsible AI development.

Speakers

– Ernst Noorman
– Erika Moret

Arguments

Government procurement should be used as tool to force companies to deliver human rights-respecting products


Companies must be proactive at every step from design to deployment to prevent human rights violations


Topics

Human rights | Economic


Both promote specific international frameworks that are open to global participation and provide binding commitments for responsible AI governance.

Speakers

– Maria Adebahr
– Devine Salese Agbeti

Arguments

Council of Europe Framework Convention on AI provides globally open binding framework


Palma process promotes responsible use of emerging technologies including AI


Topics

Legal and regulatory | Human rights


Unexpected consensus

Private sector proactive responsibility for human rights

Speakers

– Ernst Noorman
– Erika Moret

Arguments

Government procurement should be used as tool to force companies to deliver human rights-respecting products


Companies must be proactive at every step from design to deployment to prevent human rights violations


Explanation

There is unexpected alignment between government and private sector perspectives on corporate responsibility, with both agreeing that companies should proactively embed human rights protections rather than waiting for government mandates.


Topics

Human rights | Economic


Citizens’ role in AI misuse

Speakers

– Devine Salese Agbeti
– Ernst Noorman

Arguments

Citizens misuse AI to manipulate content and create fraudulent materials, requiring responsible use advocacy


FOC members should engage governments through formal and informal diplomatic discussions


Explanation

There is consensus that AI governance challenges come not only from governments and corporations but also from individual citizens, requiring education and advocacy for responsible use by all stakeholders.


Topics

Human rights | Cybersecurity


Overall assessment

Summary

There is strong consensus among all speakers on core principles: human rights as foundation of AI governance, need for multi-stakeholder approaches, importance of international frameworks, and recognition of urgent AI-related threats to democracy and human rights.


Consensus level

High level of consensus with remarkable alignment across government, private sector, and civil society representatives. This suggests strong potential for coordinated international action on AI governance, with the FOC joint statement serving as a foundation for broader cooperation. The consensus spans both principles and practical implementation approaches, indicating mature understanding of AI governance challenges.


Differences

Different viewpoints

Unexpected differences

Scope of AI misuse concerns

Speakers

– Devine Salese Agbeti
– Other speakers

Arguments

Citizens misuse AI to manipulate content and create fraudulent materials, requiring responsible use advocacy


Explanation

While most speakers focused on government and corporate misuse of AI, Agbeti uniquely highlighted citizen misuse as a significant concern, arguing that FOC should advocate for responsible use by individuals rather than only focusing on state and corporate actors. This represents an unexpected broadening of the discussion beyond the typical focus on institutional actors.


Topics

Human rights | Cybersecurity


Overall assessment

Summary

The discussion showed remarkably high consensus among speakers on fundamental principles and goals for AI governance, with only minor differences in emphasis and approach rather than substantive disagreements.


Disagreement level

Very low disagreement level. The speakers demonstrated strong alignment on core issues including the need for human rights-centered AI governance, multi-stakeholder approaches, and international cooperation. The few differences that emerged were primarily about tactical approaches (domestic vs. global focus, government vs. citizen responsibility) rather than fundamental disagreements about principles or goals. This high level of consensus likely reflects the collaborative nature of developing the FOC joint statement and suggests strong potential for coordinated action on AI governance among these stakeholders.


Partial agreements

Partial agreements



Takeaways

Key takeaways

Human rights must be the foundation of AI governance, not an afterthought, requiring a human-centric approach that puts people at the center of technological development


The most urgent AI human rights risks include arbitrary surveillance, disinformation campaigns, suppression of democratic participation, and the concentration of power in few private actors


Private sector companies have independent responsibility to respect human rights through proactive measures from design to deployment, including adherence to UN guiding principles on business and human rights


Multi-stakeholder collaboration involving governments, civil society, private sector, academia, and marginalized communities is essential for effective AI governance


The FOC Joint Statement on AI and Human Rights 2025 has been endorsed by 21 countries and provides actionable recommendations for all stakeholders


Strong regulatory frameworks like the EU AI Act, Council of Europe AI Framework Convention, and Global Digital Compact provide important binding and non-binding governance mechanisms


Government procurement can be used as a powerful tool to ensure AI systems respect human rights by requiring companies to deliver compliant products


Resolutions and action items

The FOC Joint Statement on AI and Human Rights 2025 is now published online and remains open for additional endorsements from both FOC and non-FOC countries


Germany announced doubling its funding for the Freedom Online Coalition to support AI and human rights work


Task Force on AI and Human Rights will meet to discuss creating space for civil society comments and support for the declaration beyond FOC members


FOC members committed to continue diplomatic engagement with non-member governments through formal and informal discussions to promote human rights in AI


Participants agreed to spread awareness of the statement through social media and other channels


Promotion of existing binding frameworks like the Council of Europe AI Framework Convention and Global Digital Compact implementation


Unresolved issues

How to effectively protect human rights in AI systems operating under repressive regimes while companies maintain government contracts


Balancing transparency in AI decision-making with protection of sensitive data and privacy rights


Addressing the dual challenge of preventing both government misuse of AI and citizen misuse of AI for fraudulent purposes


Ensuring meaningful participation of Global South countries and marginalized communities in AI governance processes


Creating truly global binding frameworks for AI governance given the difficulty of achieving consensus among all UN member states


Determining optimal mechanisms for civil society engagement and support for AI governance initiatives beyond government-led processes


Suggested compromises

Using smaller coalitions and regional frameworks (like Council of Europe Convention) to achieve ‘Brussels effect’ or ‘oil spill effect’ for broader global adoption rather than waiting for universal UN consensus


Leveraging government procurement policies as a practical middle-ground approach to enforce human rights standards without requiring new legislation


Balancing innovation promotion with rights protection through risk-based regulatory approaches that don’t block technological advancement


Combining formal diplomatic engagement with informal discussions to address AI human rights concerns with non-compliant governments


Using existing frameworks like UN guiding principles on business and human rights as baseline standards while developing more specific AI governance mechanisms


Thought provoking comments

When I read through the notes that it offered me, it did say all the right things, which is totally understandable. The question is, did it do it on purpose, maybe, maliciously, trying to deceive us into thinking that AI also believes in human rights?

Speaker

Rasmus Lumi


Reason

This opening comment immediately established a critical and philosophical tone by questioning AI’s apparent alignment with human values. It introduced the concept of AI deception and whether AI systems might manipulate human perception of their intentions, which is a sophisticated concern beyond basic functionality issues.


Impact

This comment set the stage for deeper philosophical discussions throughout the session about AI’s relationship with human values and the need for human-centered governance. It moved the conversation beyond technical implementation to fundamental questions about AI’s nature and trustworthiness.


Innovation without trust is short-lived. Respect for rights is not a constraint, it’s a condition for sustainable, inclusive progress.

Speaker

Ernst Noorman


Reason

This reframes the common narrative that human rights protections hinder technological innovation. Instead, it positions human rights as essential infrastructure for sustainable technological development, challenging the false dichotomy between innovation and rights protection.


Impact

This comment shifted the discussion from defensive justifications of human rights to a proactive business case for rights-based AI development. It influenced subsequent speakers to discuss practical implementation rather than theoretical benefits.


In the Netherlands, we have learned this the hard way. The use of strongly biased automated systems in welfare administration, designed to combat fraud, has led to one of our most painful domestic human rights failures.

Speaker

Ernst Noorman


Reason

This vulnerable admission of failure from a leading democratic nation provided concrete evidence of how AI systems can cause real harm even with good intentions. It demonstrated intellectual honesty and showed that human rights violations through AI are not just theoretical concerns or problems of authoritarian regimes.


Impact

This personal example grounded the entire discussion in reality and gave credibility to the urgency of the human rights framework. It influenced other speakers to focus on practical safeguards and accountability mechanisms rather than abstract principles.


AI unfortunately is also a tool for transnational repression. And transnational repression is a phenomenon… that we as German governments really want to focus more on. Because it’s a fairly not a new phenomenon, it’s very, very old. But in terms of digital and AI, we are reaching here new levels, unfortunately.

Speaker

Maria Adebahr


Reason

This comment introduced a geopolitical dimension that expanded the scope beyond domestic AI governance to international security concerns. It highlighted how AI amplifies existing authoritarian tactics across borders, making it a global security issue requiring international coordination.


Impact

This broadened the conversation from individual rights protection to collective security, influencing the discussion toward international cooperation mechanisms and the need for coordinated responses to AI-enabled authoritarianism.


I have seen how citizens have used AI to manipulate online content, to lie against government, to even create cryptocurrency pages in the name of the president, etc. So it works both ways. I think FOC should be advocating for responsible use of citizens, and at the same time when this is being advocated, then FOC can engage with government also to ensure that the citizens actually have the right to use these systems and use it freely without the fear of arrest or the fear of intimidation.

Speaker

Devine Salese Agbeti


Reason

This comment introduced crucial nuance by acknowledging that AI misuse is not unidirectional – citizens can also misuse AI against governments. It challenged the implicit assumption that only governments and corporations pose AI-related threats, while maintaining the importance of protecting legitimate citizen rights.


Impact

This balanced perspective shifted the conversation toward more sophisticated governance approaches that address multiple threat vectors while preserving democratic freedoms. It influenced the discussion to consider comprehensive frameworks rather than one-sided protections.


We have to use procurement as a tool also to force companies to deliver products which are respecting human rights, which have human rights as a core in their design of their products, and ensure that they provide safe products to the governments which are used broadly in the society.

Speaker

Ernst Noorman


Reason

This identified government procurement as a powerful but underutilized lever for enforcing human rights standards in AI development. It provided a concrete, actionable mechanism that governments can implement immediately without waiting for comprehensive international agreements.


Impact

This practical suggestion energized the discussion around immediate actionable steps, moving from abstract principles to concrete implementation strategies that other speakers could build upon.


Overall assessment

These key comments fundamentally shaped the discussion by establishing it as a sophisticated, multi-dimensional conversation rather than a simple advocacy session. Lumi’s opening philosophical challenge set an intellectually rigorous tone, while Noorman’s admission of Dutch failures provided credibility and urgency. The comments collectively moved the discussion through several important transitions: from theoretical to practical (through concrete examples), from defensive to proactive (reframing rights as enabling innovation), from domestic to international (through transnational repression), and from one-sided to nuanced (acknowledging citizen misuse). These interventions prevented the session from becoming a simple endorsement of the joint statement and instead created a substantive dialogue about the complex realities of AI governance, ultimately strengthening the case for the human rights framework by acknowledging and addressing its challenges.


Follow-up questions

How can transparency in AI decision-making be improved without exposing sensitive data, particularly to ensure that the right to privacy is protected under international human rights law?

Speaker

Online participant


Explanation

This question addresses the critical balance between AI transparency requirements and privacy protection, which is fundamental to human rights compliance in AI systems


How can civil society organizations and non-FOC members provide comments and support for the FOC AI and Human Rights declaration?

Speaker

Carlos Vera from IGF Ecuador


Explanation

This highlights the need for broader participation and engagement mechanisms beyond FOC members, including creating awareness about FOC’s existence among governments and civil society


How can risks be diminished when tech companies like Microsoft work with oppressive governments, and what practical actions can civil society take?

Speaker

Svetlana Zenz


Explanation

This addresses the practical challenges of protecting human rights when technology companies operate in countries with repressive regimes and the role of civil society in mitigation


How can the FOC engage with governments to ensure citizens have the right to use AI systems freely while also advocating for responsible citizen use of AI?

Speaker

Devine Salese Agbeti


Explanation

This explores the dual challenge of preventing both government overreach and citizen misuse of AI technologies, requiring balanced advocacy approaches


What are the global frameworks with binding obligations on states for the responsible use and governance of AI?

Speaker

Online participant


Explanation

This seeks to identify existing legally binding international mechanisms for AI governance, which is crucial for understanding the current regulatory landscape


How can the ‘Brussels effect’ of the EU AI Act be leveraged to influence global AI governance standards?

Speaker

Ernst Noorman (implied)


Explanation

This explores how regional regulations like the EU AI Act can create spillover effects to influence global standards, similar to the Budapest Convention’s expansion


How can transnational repression through AI tools be better addressed on the international human rights agenda?

Speaker

Maria Adebahr


Explanation

This identifies the need for focused attention on how AI is being used as a tool for transnational repression, which represents a new dimension of human rights violations


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

WS #179 Privacy Preserving Interoperability and the Fediverse

Session at a glance

Summary

This panel discussion focused on privacy preservation within the Fediverse, also known as the open social web, which enables interoperability between different social media platforms through protocols like ActivityPub. The session was moderated by Mallory Knodel from the Social Web Foundation and featured panelists from various organizations including the Data Transfer Initiative, Meta, and academic researchers specializing in digital competition law.


The discussion explored fundamental challenges in maintaining user privacy when data flows between interconnected but independently operated platforms. A key technical issue highlighted was that when users delete posts on one platform, those deletions may not propagate across all federated services that have already received the content. This creates complications for compliance with privacy regulations like GDPR, which grants users rights to delete their personal data.


Panelists examined how existing privacy laws apply to decentralized systems, noting that current regulations were designed for centralized platforms and may need adaptation for federated environments. The European Union’s Digital Markets Act was discussed as potentially expanding interoperability requirements from messaging services to social media platforms in future reviews.


The conversation addressed governance challenges in distributed systems, emphasizing the need for shared standards and trust mechanisms across multiple platforms and instances. Speakers highlighted the importance of user education about how federated systems work, as many users may not understand that their posts can travel beyond their original platform.


The panel concluded that while the Fediverse offers promising opportunities for user agency and platform choice, realizing privacy-preserving interoperability requires ongoing collaboration between platforms, standards bodies, regulators, and civil society to develop appropriate technical and governance frameworks.


Keypoints

## Major Discussion Points:


– **Privacy challenges in interoperable social media systems**: The panel explored how federated platforms like the Fediverse can preserve user privacy while enabling cross-platform interaction, including technical issues like post deletion across multiple instances and the complexity of maintaining user control over data that flows between different services.


– **Regulatory frameworks and compliance in decentralized environments**: Discussion focused on how existing privacy laws like GDPR and emerging regulations like the Digital Markets Act apply to interoperable social media, with particular attention to the challenges of enforcing data subject rights (deletion, portability, access) across distributed networks.


– **Technical implementation and standardization of interoperability**: The conversation addressed the practical aspects of building interoperable systems, including the role of standards bodies like W3C and IETF, the importance of shared governance models, and the need for trust mechanisms between different platforms and instances.


– **User experience and education in federated systems**: Panelists discussed the challenge of making complex interoperable systems accessible to average users, including onboarding processes, setting appropriate defaults, and helping users understand what happens to their data when it crosses platform boundaries.


– **Governance and responsibility distribution across diverse stakeholders**: The panel examined how to manage a decentralized ecosystem involving thousands of instances and platforms, including the development of shared norms, trust registries, and coordination mechanisms that don’t centralize control but provide necessary oversight.


## Overall Purpose:


The discussion aimed to explore how privacy can be preserved and user rights protected in interoperable social media environments, particularly the Fediverse, while addressing the technical, legal, and governance challenges that arise when data and interactions flow across multiple platforms and instances.


## Overall Tone:


The discussion maintained a collaborative and constructive tone throughout, with panelists building on each other’s points rather than debating. The conversation was technical but accessible, with speakers acknowledging the complexity of the issues while expressing cautious optimism about solutions. The tone became more interactive and engaged during the Q&A portion, with audience questions adding practical perspectives and real-world concerns to the theoretical framework established by the panel.


Speakers

– **Mallory Knodel** – Executive Director and co-founder of the Social Web Foundation, focused on building a multipolar Fediverse with emphasis on human rights and community building


– **Chris Riley** – Academic, works on interoperability and digital policy issues, currently based in Melbourne, Australia


– **Delara Derakhshani** – Director of Policy and Partnerships at the Data Transfer Initiative (DTI), focused on data portability and user agency in data transfers


– **Ian Brown** – Technologist and academic, security and privacy expert, co-author of “Regulating Code,” works on digital competition law reform and interoperability, particularly with EU Digital Markets Act


– **Audience** – Various audience members who asked questions during the Q&A session


– **Melinda Claybaugh** – Director of AI and Privacy Policy at Meta, working on Threads product and Fediverse interoperability


**Additional speakers:**


– **Edmon Chung** – From the .Asia organization


– **Dominique Zelmercier** – From W3C (World Wide Web Consortium)


– **Caspian** – Audience member who asked about 10-year vision


– **Winston Xu** – From One Power Foundation in Hong Kong


– **Gabriel** – Audience member who asked about trust between instances


– Various other unnamed audience members who asked questions


Full session report

# Privacy Preservation in the Fediverse: Panel Discussion Report


## Introduction and Panel Setup


This panel discussion was moderated by Mallory Knodel from the Social Web Foundation as part of an Internet Governance Forum (IGF) session examining privacy challenges in federated social media systems. The session brought together experts to discuss how user privacy can be maintained when data flows between independently operated but interconnected platforms using protocols like ActivityPub.


The panelists included:


– Ian Brown, discussing regulatory frameworks and interoperability requirements


– Melinda Claybaugh from Meta, representing industry perspectives on federated systems (her full introduction was cut off in the recording)


– Dominique Zelmercier from the W3C, bringing standards body expertise


– Delara Derakhshani from the Data Transfer Initiative, focusing on user data portability


– Chris Riley, contributing insights on user experience and interoperability myths


The discussion emerged against the backdrop of growing interest in decentralised social media alternatives, particularly following user migrations from centralised platforms, with Chris Riley noting people “noisy quitting X” and subsequently “quiet quitting Mastodon.”


## Core Technical Challenges


### Data Deletion and Cross-Platform Propagation


Mallory Knodel highlighted a fundamental technical problem: when users delete posts on one platform, those deletion requests may not effectively propagate across all federated instances that have already received the content. This creates complications for privacy regulation compliance, particularly with GDPR requirements for data deletion.
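For readers unfamiliar with the protocol mechanics, the gap Knodel describes can be illustrated with a short sketch of how an ActivityPub-style `Delete` activity propagates. This is illustrative only: the instance URLs are placeholders, and real servers deliver activities as signed HTTP POSTs rather than the in-memory delivery used here. The use of a `Tombstone` object does follow the ActivityPub specification’s recommendation for deleted objects.

```python
# Sketch of best-effort Delete propagation in an ActivityPub-style federation.
# The instance URLs below are placeholders; real servers deliver activities
# via signed HTTP POSTs to each remote inbox.

known_recipients = [
    "https://instance-a.example/inbox",
    "https://instance-b.example/inbox",
]

def build_delete_activity(actor: str, post_id: str) -> dict:
    """Wrap the deleted object in a Tombstone, as ActivityPub recommends."""
    return {
        "@context": "https://www.w3.org/ns/activitystreams",
        "type": "Delete",
        "actor": actor,
        "object": {"type": "Tombstone", "id": post_id},
    }

def propagate_delete(activity: dict, recipients: list[str]) -> dict:
    """Send the Delete to every inbox that received the original post.

    Delivery is best-effort: an instance that is offline, defederated,
    or simply non-compliant keeps its cached copy of the post, which is
    exactly the compliance gap the panel discussed.
    """
    delivered = {}
    for inbox in recipients:
        # In practice this would be an HTTP POST; here we just record it.
        delivered[inbox] = True
    return delivered

activity = build_delete_activity(
    "https://instance-a.example/users/alice",
    "https://instance-a.example/posts/123",
)
result = propagate_delete(activity, known_recipients)
```

The key point of the sketch is that the origin server can only notify the instances it knows about, and nothing in the protocol forces a recipient to honour the request.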


Ian Brown provided a concrete example of this challenge, describing a scenario where a user sets their Mastodon posts to auto-delete after two months but uses a bridge service (Bridgyfed) to share content with BlueSky. The user’s privacy preferences from Mastodon do not automatically transfer to BlueSky, creating gaps in privacy protection across the federated network.


### Bridge Services and Privacy Preferences


The discussion revealed significant challenges with bridge services that connect different federated platforms. Ian Brown noted that Bridgyfed, which bridges content between platforms, doesn’t currently recognise or apply user privacy preferences across different systems. This technical limitation demonstrates the gap between user expectations and current implementation reality.
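Brown noted that carrying such preferences across a bridge would be technically straightforward; a minimal sketch of what that might look like follows. To be clear, no such mechanism exists in Bridgyfed or Bluesky today; the field names (`auto_delete_days`, `expires_at`) are invented for illustration.

```python
# Hypothetical sketch: a bridge that carries a user's auto-delete
# preference across when mirroring a post to another network.
# Neither Bridgyfed nor Bluesky implements this today; the field
# names here are invented for illustration.

from datetime import datetime, timedelta, timezone

def mirror_post(post: dict, author_prefs: dict) -> dict:
    """Copy a post to the far side of the bridge, attaching an expiry
    derived from the author's origin-side auto-delete setting."""
    mirrored = {"content": post["content"], "via_bridge": True}
    days = author_prefs.get("auto_delete_days")
    if days is not None:
        # Honour the origin platform's retention preference downstream.
        mirrored["expires_at"] = post["published"] + timedelta(days=days)
    return mirrored

post = {
    "content": "hello fediverse",
    "published": datetime(2025, 6, 1, tzinfo=timezone.utc),
}
prefs = {"auto_delete_days": 60}  # e.g. a two-month auto-delete setting
mirrored = mirror_post(post, prefs)
```

The design question is less the code than the governance: which side of the bridge is responsible for enforcing the expiry once the post has crossed over.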


### Standards and Implementation Complexity


Dominique Zelmercier from the W3C raised concerns about the granularity of privacy controls in standardised systems. They observed that standardisation processes typically produce simple binary signals—”do or do not”—which may be inadequately coarse for the complexity of social media interactions where users want nuanced control over their digital presence.


## User Experience and Understanding


### User Comprehension Challenges


Melinda Claybaugh emphasised that many users may not understand what it means to post content on federated platforms or comprehend how their content might be shared across networks. She noted that users posting on Threads, for instance, may not realise their content could be shared with other services in the federation.


The discussion revealed that different user types—power users, technical users, and average users—require different levels of explanation and control options. Claybaugh stressed the importance of meeting people where they are and designing systems that accommodate varying levels of technical literacy.


### The “Myth of the Superuser”


Chris Riley made a significant intervention challenging what he called the “myth of the superuser.” He argued that the goal should be making interoperability invisible to users rather than requiring them to become technical experts. Riley emphasised that most users want good defaults rather than complex configuration options, creating tension between user education approaches and user experience simplicity.


## Regulatory Framework Discussion


### Existing Privacy Laws and Federation


Ian Brown discussed how existing privacy regulations like GDPR apply to decentralised systems, noting these laws provide important legal backstops against bad actors who ignore user data preferences. However, he acknowledged that current regulations were designed with centralised platforms in mind.


Brown referenced what he recalled as GDPR Article 22, noting he was speaking from memory; the obligation he described, notifying downstream recipients when a user withdraws consent or requests erasure, is in fact set out in Article 17(2). Either way, the point stands that existing legal frameworks need fresh implementation approaches for federated environments.


### Digital Markets Act Implications


Ian Brown provided insights into the European Union’s Digital Markets Act (DMA), which currently includes messaging interoperability requirements for designated gatekeepers. He presented slides showing the complexity of regulatory requirements, which he acknowledged were “too complicated” and “impossible to read,” illustrating the challenge of implementing nuanced regulatory frameworks.


Brown suggested that regulatory frameworks are evolving to require dominant firms to enable interoperability, representing a shift from traditional competition law approaches.


## Governance and Instance-Level Management


### Defederation as Governance Tool


Mallory Knodel highlighted defederation—where instances disconnect from others—as a mechanism for protecting users from bad actors whilst maintaining overall system openness. This represents a form of distributed governance that allows communities to maintain their safety standards without requiring centralised oversight.


### Government and Institutional Roles


Knodel made a specific policy recommendation, suggesting that “maybe governments should have instances where they’re the source of truth that’s verified.” This reflects broader discussions about how institutional actors might participate in federated systems whilst maintaining credibility and user trust.


Ian Brown supported this direction, arguing that governments should share information across multiple platforms rather than forcing citizens to use single platforms for accessing public services.


## Audience Questions and Key Responses


### Digital Identity and Democratic Institutions


An audience member raised questions about digital identity ownership and its relationship to democratic institutions. The panelists discussed how federated systems might support more democratic approaches to digital identity management, though specific solutions remained largely theoretical.


### Technical Standards Bodies Role


Questions about the role of standards organisations like W3C and IETF revealed their crucial function in developing interoperability standards and verification processes. The discussion highlighted both the importance of these bodies and the limitations of current standardisation approaches for complex social media interactions.


### Youth and Internet Interoperability


An audience question about youth involvement in internet interoperability prompted discussion about generational differences in approaching federated systems and the importance of including diverse perspectives in governance discussions.


### Cultural Context and Platform Migration


A significant audience question addressed how cultural context changes when content moves between platforms with different community norms. This highlighted that interoperability involves not just technical challenges but also social and cultural considerations that affect user safety and community dynamics.


### Trust and Security Between Instances


Questions about trust between instances and security concerns revealed ongoing challenges in federated systems. The panelists acknowledged that security vulnerabilities, particularly around direct messages and encrypted content, require continued attention as federation scales.


## Data Portability and User Migration


### Beyond Technical Transfer


Delara Derakhshani provided crucial insights into the complexity of platform migration, noting that it involves not just data transfer but joining new communities with different safety considerations. For marginalised groups particularly, the safety of the destination platform matters as much as the ability to bring content.


This reframed interoperability from a purely technical challenge to a human-centred one, highlighting that data portability is fundamentally about community safety and user agency.


### Trust Registries and Verification


Derakhshani emphasised the need for trust registries and verification processes to reduce duplication and streamline onboarding across platforms. These mechanisms could provide coordination without centralising control, addressing fundamental challenges of distributed governance.


## Future Vision and Ongoing Challenges


### Long-term Interoperability Goals


Mallory Knodel described a future where users never have to sign up for new services whilst still being able to follow people across platforms. Ian Brown envisioned social media becoming as configurable and flexible as the internet at the IP layer.


### Persistent Technical Issues


Several technical challenges remain unresolved, including:


– Propagation of deletion requests across federated networks


– Security vulnerabilities in direct messages and encrypted content


– Maintaining consistent user experiences across platforms with varying capabilities


– Handling privacy controls and auto-deletion features across different systems


### Scaling Governance


As federated systems potentially scale to thousands of instances, new coordination mechanisms will be needed to maintain trust and shared standards. The panelists acknowledged that current approaches may not be adequate for the scale of federation envisioned.


## Key Takeaways


The discussion revealed both the promise and complexity of building privacy-preserving federated social media systems. While technical solutions for basic interoperability exist, significant challenges remain in:


1. **Privacy Preservation**: Ensuring user privacy preferences and deletion requests propagate effectively across federated networks


2. **User Understanding**: Helping users comprehend the implications of posting in federated environments without overwhelming them with technical complexity


3. **Governance at Scale**: Developing coordination mechanisms that maintain standards and user protections across distributed systems


4. **Regulatory Adaptation**: Implementing existing privacy laws in federated contexts while potentially developing new regulatory approaches


The panelists emphasised that solving these challenges requires sustained collaboration between platforms, standards bodies, regulators, and civil society organisations. Success depends not just on technical innovation but on creating governance structures and user experiences that serve diverse communities while maintaining the openness and user agency that make federation attractive.


As Mallory Knodel noted with humour, the discussion completed her “bingo card” of IGF topics, reflecting how federated social media intersects with many core internet governance challenges that the community continues to address.


Session transcript

Mallory Knodel: Hi, everyone. Welcome to the session. I want to first start with a short bit of housekeeping, which is that if you’ve joined us in person, we’re so grateful, and we also have loads of seats up here at the roundtable that we would invite you to occupy if you’d like to. It means you get a microphone that’s always on, so you could ask questions directly. Otherwise, we’ll take questions towards the end, and there are mics also out in the audience, so it’s your choice. For those of you who’ve joined online, we’ll be monitoring the questions that you drop in there as well. So welcome. We’re talking today about the Fediverse, also known as the open social web. By design, the openness means that there’s an interoperability element that’s very critical, and we are curious in this panel about a lot of things, but particularly about the ways in which privacy can be preserved in these interoperable environments. So you’re all internet experts. You understand interoperability, and you’re here because you’re excited about the web, and we are too. So this is the panel up before you. We don’t have any remote panelists. I’m going to allow the panelists to really present this idea to you in stages. So if you’re not sure what the Fediverse is, if you’re not sure about what privacy and interoperability have to do with each other, I’m confident that the questions that we’ve posed to one another are going to help build that narrative for you, and of course come with your questions. As well, I’m going to ask the panelists, please, the first time they speak, to just give a brief intro into where they work and their interest in the topic. That way, I don’t have to read bios aloud to you all. But I should start with myself, actually, right? So they’re all switched on all the time. You have to wear a headset, though. OK, OK, that would help. So yeah, I’m Mallory Knodel.
I am the executive director, one of the founders of the Social Web Foundation, which sees itself really as a steward of a multipolar Fediverse. And so that does have elements of protocol. There’s the ActivityPub protocol. There are other open protocols that are hopefully, towards the future, interoperable with each other. It also has this very strong element of community building: that we all can build and construct the communities we want online, move them around, subscribe, follow one another without impediments to that. It’s a really great vision. And I think one of the crucial pieces for me, as someone who’s been in this space with you all for many years, is this element of human rights, and of asking: we’re building a Fediverse, or we’re building an open internet, but for what? And so human rights and that sort of thing brings me to this work. So that’s me. I wanted to pose the initial question to the panel, where each person can introduce themselves. Just what do you think we need to know as a level set for this issue of privacy and interoperability? It can be a short intervention, but just to try to introduce the audience and the participants to what the edges of this issue are, why we’re here today, and what we care about. So keep the very first question to maybe two minutes; the rest of our questions will be more around four or five minutes. So Delara, can I start with you, please?


Delara Derakhshani: Yes, absolutely. Thank you all so much for being here and for having me. So my name is Delara Derakhshani. I serve as Director of Policy and Partnerships at the Data Transfer Initiative. Our entire mission is to empower users with regard to data portability. I think it’s helpful to give just a little bit of context about what we do, and you’ll see how it’s connected to our work in the Fediverse. When we’re talking about data portability in this context, it essentially means that at the user’s request, companies should perform bulk transfers of the user’s personal data from one platform or service to another. Our work at DTI is centered on translating legal principles into real-world practice. On the product side, we are building open source tools with private sector companies. And on the policy side, where I sit, we work closely with regulators and policymakers to serve both as a resource and to help implement portability solutions that are both meaningful and workable in practice. Ultimately, our goal is an ecosystem that promotes user agency, competition and innovation. And I am heartened to see so many folks at the table today, because a lot of these issues are absolutely relevant to the Fediverse, and I’m looking forward to diving in deeper.


Mallory Knodel: Great, awesome. Let’s go to you next, please.


Melinda Claybaugh: Hi, thanks for having me, and it’s a pleasure to be here. I’m Melinda Claybaugh. I’m a Director of AI and Privacy Policy at Meta. Our interest in the Fediverse and interoperability has really crystallized around a product we launched a couple of years ago called Threads. Most people are familiar with Facebook, Instagram, WhatsApp, all of those apps. Threads is our newest app and it is…


Ian Brown: This is a topic I’ve been working on since 2008 with my co-author, who just happens to be sitting over there, Professor Chris Marsden. We wrote a book in 2013 called Regulating Code, where we talked about interoperability as a solution to a number of difficult policy trade-offs and issues. And I’m really delighted to see it so high on the radar of many regulators. I spent a lot of the last five years talking to EU policymakers, because they have actually put it into a law called the Digital Markets Act, which passed a couple of years ago. And I’m going to show you later a couple of tables. I’m not going to talk through them, I’m not going to do death by PowerPoint, don’t worry, but just point you to a lot more information if you want detailed technical background. I’m a technologist; I’ve written a lot about it, and you can see it on my website, Ian Brown tech. I should disclose that earlier this year Meta commissioned from me an independent assessment, and it was independent, of certain aspects of how the Digital Markets Act interoperability obligation is being applied to iOS and to iPadOS, and the privacy and security implications of that. I’m actually a security and privacy person by background, although I’ve done a lot in digital competition law reform the last five years. And I should also disclose that I’m about to, fingers crossed, start consulting for the UK Competition and Markets Authority on precisely this topic.


Mallory Knodel: Yeah, so you can see the spread. We’ve got an awesome panel. I can tell also from the audience we’ve got some really important stakeholders in this room that know a lot about this topic, so I’m counting on you to come in later on with your questions. And there are mics there, and a whole bunch of mics up here; if you want to sit and take one, you are welcome. But let me get on with it. So in my view, this is an obvious question, right, and we see it a lot. There’s an intuition that users have: the same kinds of users that are excited about using open social protocols for, you know, consuming content, articles, sharing pictures and so on are the same users that are going to care a lot about their privacy. And so this comes into play when, you know, we have more of a permissionless ecosystem where different entities are now not just maybe in business relationships, which is something I think we’re attuned to in the old social media; in the new social media, it’s more of, you know, instances allow-listing, block-listing, that sort of thing. I think there’s an intuition that some users have that they might take those actions at the instance level, maybe because of privacy concerns or maybe because they don’t want to be in relationship with or associate with other corners of this very open, interoperable ecosystem. So that’s the sort of general reason why I think we’ve gathered today, and hopefully we’ve started out strong by explaining why each of us are here and why we care about that. So first to Delara, and Ian, I want you to respond also to this question, but first: how can interoperable systems respect user privacy and give users control over their data?


Delara Derakhshani: So I’ve thought about this question a lot, and one of the things I want to start with is what I think success would look like in an ideal world, and I think at its core it starts with user agency. Users should know what’s being shared, when, with whom, and why. And interoperability shouldn’t necessarily mean default exposure. You know, there are technical design issues: real-time post visibility across services, activity portability, and, you know, social graph portability as well. I mean, I think there’s an education component that could be furthered to the benefit of all in the ecosystem. But maybe most important is actually a cultural question: a commitment to shared governance and iterative collaboration. Working in the open is a start, but true privacy-respecting interoperability demands an ongoing habit of listening, engaging with standards bodies with respect, and staying in a dialogue that reaches out to everyone. And these all strike me as sort of some of the ideals that we should have. I will also note that there are some challenges that come with interoperability. At its core, it’s all about freedom of movement, enabling users to take their posts, profiles, and relationships across platforms. It challenges lock-in, a goal the Fediverse aspires to but some may argue has not fully delivered on yet. And of course, interoperability is not without its challenges. For example, people may want to move from one server to another and bring all of their content and relationships with them. That may not always be possible. I can speak later to some of the solutions that we’re looking at at DTI. And something that you’ve touched on earlier, I think it’s relevant to human rights.
You know, there’s another challenge, and that’s that decentralized systems like the Fediverse can empower users to find and build communities. But inevitably, that introduces friction. Different services have different norms, policies, controls. It can lead to inconsistent experiences. But beyond that, migration is not just about exporting and importing data. It’s about joining new communities. We’ve heard repeatedly in my experience and our work at DTI that for many users, particularly marginalized groups, the safety of the destination matters just as much as the ability to bring the content. So trust-building tools must address not just privacy, but also moderation, consent, and survivability. And in recognition of time, I’ll stop there. I’m happy to expand further on any of those points down the line.


Mallory Knodel: Yeah, especially if there are questions from the audience about that. Ian, I want you to respond, but because you’re a technologist, I’m going to ask if you can also slightly explain, maybe for folks who don’t know, that, like, on ActivityPub, for example, if you delete a post, that doesn’t necessarily mean it’s deleted everywhere. And you could maybe try to hook in some of the actual ways in which these things work and the limits there for privacy and so on. But whatever you were going to say, do that too.


Ian Brown: That’s good. That’s a challenging question, but I like challenging questions. So, let me say, could our technical friends at the back put my first image up on screen, the XKCD cartoon. I’ll just leave you to look at it at your convenience. So, interoperability, first of all, to people who aren’t deep in the weeds of this, sounds a very abstract, weird, geeky, techy notion, and it’s not an end in itself. It’s a means to greater competition, greater user choice, diversity of services, the ability of people to talk across communities. I love that idea. It helps people join new communities, try out new communities. You can see this. I wonder if we could have an element of audience interactivity, as we are always told to do in universities to make sure all the students are awake. Put your hands up if you’re on BlueSky. Yeah, a lot of people. I thought in this IGF community that would be the case. [Audience: “Via a bridge.”] Well, I was going to come to that. That’s good. That’s also good. Who’s also, or differently, on Mastodon or another part of the Fediverse? Still quite a few people, which, again, I imagine, I’m not going to go and ask people individually, means more of the technical community are on Mastodon for various reasons, more of the policy community on BlueSky. As Mallory said, there are services now called bridges, which let people, and I think the opt-in approach is absolutely right, people who want to can choose to say: OK, from my Mastodon account, I want people on BlueSky to be able to follow me on BlueSky and to like my posts and to repost them and to reply to me, so I’ll see them on Mastodon, and vice versa.
Mallory got straight to one of the important technical questions, which I think this XKCD cartoon, which someone in a tech company just sent me yesterday evening, knowing I was talking on this panel today, captures: the details are very important for human rights, for protecting privacy, for enabling freedom of expression and opinion, for making sure people don’t get harassed by being exposed to communities they might not want to be. So to give you another example, there are bridges between X, X Twitter, and some of these other services, and I quite understand why many people on BlueSky and Mastodon would not want people on X to be able to interact with them, because there’s a lot of harassment that goes on on that service, sadly, and I deleted my own account a month or two ago. The very specific question Mallory asked: OK, you might be on one service, so on Mastodon for example, I know because I do it myself, you can set your posts to auto-delete after X weeks or months, and I do, after two months, because I want it to be a conversational medium, not a medium of record. I don’t write social media posts with the care that I write academic articles or submissions to parliamentary inquiries and so on. I want to talk to people, I want to share ideas, I want to be able to make mistakes. That’s a key thing that privacy enables. You can try new things out. It’s actually one of the really critical elements of children growing up, those of you who are parents or have nephews, nieces or are familiar with educational psychology, that people can make mistakes. That’s how we learn. You don’t want people to think: if I make a mistake, it’s going to be imprinted forevermore on my permanent record, and employers, universities, governments, when I enter the United States for example, might be checking my social media posts, as is the current US government policy.
So that very specific question Mallory asked, I think, is a really good one, because I think it gets to the crux of some of these issues. Let’s use me as the example: I delete my Mastodon posts after two months. I use a bridge, it’s called Bridgyfed, I love the name, which lets people on BlueSky read my Mastodon posts. I have a BlueSky account, but BlueSky currently does not have an option to auto-delete all the posts that I type myself into BlueSky, nor does Bridgyfed currently pick up my preference from Mastodon to auto-delete everything after two months and then apply that on the other side of the bridge. It could; from a technical perspective it’s relatively straightforward. Also from a legal perspective, I should mention, in Europe at least, and in many other countries, so up to 130 other countries now that have implemented GDPR-like privacy laws, the GDPR, I think it’s Article 22 but I’d have to check from memory, says that if an organisation publishes your information in some way, if it shares it, like a bridge, like a social media service, and then you withdraw your consent for the use of that data, the company has to stop using it, but also the company has to make best efforts to tell other organisations it’s shared your personal data with that they should stop using it as well. And I think that’s a good legal way of dealing with this issue, Mallory.


Mallory Knodel: Good, well, you set us up really nicely. Before I move on to the GDPR question, I wanted to just say I think our intuition around how social media works is clearly evolving, and I think that we rely a lot, in the privacy realm and the security realm for that matter, on how users think about their own data and their own experience when there’s an absence of regulation. I think this is really, really critical, and it’s why we have to have this conversation now, because the new social media is introducing so many different dimensions, so many ways to interact. So let’s talk about regulation, because even with the existing regulation we have, even in a sort of old social media regime, there are still these questions. So Melinda, can you tell us about the challenges you face regarding compliance with national and regional privacy regulations like GDPR, and how, in a decentralized environment, those challenges can be either exacerbated or alleviated? Essentially, how should Fediverse platforms and instances be better aligned with privacy laws in general?


Melinda Claybaugh: Yeah, it’s a really great question, and that’s a perfect tee-up. These fundamental concepts that are enshrined in GDPR, and now in many, many other laws around the world, are your data subject rights: your right to access information that you’ve shared, your right to correct, to transfer, and to delete. These have been enshrined for a long time now, and people have come to expect them, rely on them, and operate accordingly. That works really well for things like posting something on Instagram and then wanting to delete it: it’s deleted. It works less well in this Fediverse concept. If you post something on Threads and then delete it, we will send that request along the chain, to Mastodon or wherever it may have also flowed, but we don’t have a mechanism for ensuring that the post is deleted down the chain. And this is where I think it’s so important that we take a fresh look at the existing data protection regimes. That’s not to say those rights aren’t important; they’re important and they should stay. But the implementation is where we need to come together, also to Delara’s earlier point, around some norms and expectations within industry about how this should work. Equally, I think there’s a real challenge as this new social media proliferates. Those in the tech community are already well-versed in these communities, but your average user, who maybe is starting on Threads or BlueSky or wherever people went after Twitter, may not understand this network. They may not understand what it really means to post something on Threads and then have it go to other services.
And so that’s where I think the user education piece, and really understanding user expectations, is going to be critical. At Threads, when you’re first onboarded, we explain with pictures what the Fediverse is, what it means for what you’re posting, and what happens when you delete. We have to really meet people where they are and understand who’s a power user, who’s a tech user, and who’s just your average user who is poking around and trying out new things online, which is great. So I think the collision of the new social media and the legal regime is one that needs to be worked out in the regulatory space, in the industry space among companies, and then at the user experience level as well.


Mallory Knodel: Absolutely, and thank you. Hopefully there are questions from the audience about that. One thing I wanted to point out about how the ecosystem is evolving now, although it could always change because it’s interoperable and open, is that there are kind of two ways to get on the Fediverse. Maybe you join a new platform for the first time because you want to try it out, like Pixelfed seems cool, so you’re new there and you get onboarded there. Or you’re on a platform you’ve been on for a while, and that platform decides to start adopting open protocols. So the onboarding process and the movement process look different, and that’s kind of the beauty: it’s very diverse in how people arrive in this space, and we’re still trying to develop and communicate with end users about how that works. I’m going to move over to another regulation that you’ve already mentioned, Ian. This one’s for you, and then, Melinda, I’d love you to respond as well. What are the opportunities to influence the Digital Markets Act? I mean, it’s out, it’s baked, but we want to see if there’s a possibility to define or enforce interoperability requirements for the existing social media platforms. As far as I know, that’s not part of how the DMA is structured now, but there could be some opportunities, and it sounds like you’ve been working on that. So tell us.


Ian Brown: I’ve been working endlessly on that for the last five years; I must have bored all my friends to tears by now on the subject. So the DMA has a messaging interoperability requirement, which currently applies to WhatsApp and Facebook Messenger, so to Meta. And again, to emphasise, we didn’t rehearse this, we did not line up questions, we’d never met before, and my assessment ended at the end of February. I’m not always positive about Meta, but what Meta has done on WhatsApp privacy and interoperability is really interesting and great: a lot of user research and fine-tuning, in the same way Melinda has talked about the Threads experience. It would be great if Meta shared that as much as possible with other industry players and open source projects, so they can all take advantage of that research. The DMA only applies to the very, very largest companies, which it calls gatekeepers; there are only seven of them, and you can probably guess which they are, I can read the list out later if you care. Meta is one, Apple is another, Microsoft is a third, and ByteDance, so TikTok, is covered. Civil society got this close to saying social media from the very largest companies must also be interoperable in the way that Threads is. Actually, I am using Threads currently, but mainly I follow people on Threads from Mastodon, not from Threads. I follow Yann LeCun, for example, Meta’s chief AI scientist, because he says very interesting, quite independent-minded stuff. I’m sure Zuckerberg sometimes rolls his eyes at things LeCun says, and that’s a good sign of independence, I would say. So what can we do? Did civil society miss its chance?
Because it persuaded the European Parliament, but it couldn’t persuade the member states, and therefore the final legislation includes interoperability for messaging but not for social networking services. But here is the cliffhanger. Every three years the European Commission has to review the operation of the legislation, which is very standard in European law, and they have to consider very specifically whether Article 7, the messaging interoperability obligation for gatekeepers, should be extended to gatekeeper social media services. And we can say pretty much which services those would be, because you can look at who the designated gatekeepers are; I already said Meta is one. Social media services are called online social networking services in the DMA, and you know which of them have 45 million users in the EU and 10,000 business users, the two criteria, so you could figure it out for yourself if you wanted to. If the Commission recommended this and the European Parliament chose to act on the recommendation, the DMA could be extended, so interoperability would become a legal obligation for these very largest social media companies in the way it already is for messaging. And for those of you who are not European, don’t worry, I can feel you rolling your eyes: these Europeans love passing their baroque regulatory regimes, but what relevance does it have to us? Actually, like the GDPR, many other countries around the world are already putting these kinds of principles into their own national competition laws and related laws. Another trend we see is that traditional competition law is very rapidly moving into this space, and I’ll just say one sentence about that, if you want to look it up.
There was a very recent European Court of Justice decision called Android Auto, which basically says dominant firms must enable interoperability with their services; it’s an abuse of dominance if they don’t. And I think that could have enormous effects around the world.


Mallory Knodel: I mean, you bring up this idea of jurisdiction, right? Europe has done GDPR, has the DMA, the DSA, etc. But I can’t really picture how this would work if it only applied to Europeans. How could a company be interoperable in only one jurisdiction? Technically speaking, it would seem to me that if you’re interoperable in one place, you’re sort of interoperable for everyone. But anyway, those are just my small-minded thoughts on that. Melinda, I’m sure you have a response; tell us your reaction.


Melinda Claybaugh: I mean, I don’t have a specific answer to the jurisdiction question, but I think it does lead to a larger question: what is interoperability? What are we talking about here? It’s one thing to talk about text-based posts, about services being built on a certain protocol, where it’s easier to build that in from scratch; something like Threads being able to federate your posts makes sense. Messaging is analogous to email, and we all understand there are benefits to being able to send messages across services. But it gets very complicated when you start to get into encryption and encrypted messaging, where you run into the privacy and security issues. Moving to social media interoperability writ large, of everything that happens in social media, is a whole other can of worms. So beyond the technical challenges, and I’m not a competition lawyer, so with that caveat: what are we trying to accomplish here, how would this actually operate, and is this what people want? But we can leave that for another day in European discussions.


Mallory Knodel: Yeah, I do think that’s relevant: what do people want? One answer to interoperability on the open web has always been RSS; that’s a pretty basic way to get content out there so it can be consumed in any way. But what ActivityPub does, and what other protocols are aiming to do, is let you push content out there and consume content, but also interact with it: comment on it, share it, re-blog it. So there are a lot of different interactions, with more all the time. The wonderful thing about interoperable, permissionless ecosystems is that they create new ways of interacting with content that didn’t previously exist. We’re still stuck a little bit in an old school social media mode, where you can click the heart, click the little swirly arrow thing, and click the back arrow to make a comment; those are really basic ways, and there are probably more. So Delara, I want to round out the panel by asking you how governance and responsibilities can be distributed so that we can steward all of this together. We’ve talked about a very diverse and very complicated ecosystem where you can do all manner of things; how do we manage it together as it hopefully continues to proliferate, both across existing platforms, and if you run a social media platform and you’re curious about interoperable protocols, we want to talk to you, and among brand new folks who come into this space? So tell us your thoughts on that.
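The contrast Mallory draws between RSS-style syndication and ActivityPub-style interaction can be made concrete. This sketch uses the real ActivityStreams activity types (Like, Announce, Create), but the actor and post URIs and the helper functions are invented for illustration:

```python
# Sketch: the three "old school" interactions listed above, expressed as
# ActivityStreams activities. RSS can only syndicate the post itself;
# ActivityPub also federates the interactions around it.
POST = "https://example.social/posts/42"
ACTOR = "https://example.social/users/bob"

def like(actor, post):
    """The heart button."""
    return {"type": "Like", "actor": actor, "object": post}

def announce(actor, post):
    """The 'swirly arrow' boost / re-blog."""
    return {"type": "Announce", "actor": actor, "object": post}

def reply(actor, post, text):
    """The comment: a new Note pointing back at the original post."""
    return {
        "type": "Create",
        "actor": actor,
        "object": {"type": "Note", "inReplyTo": post, "content": text},
    }

for act in (like(ACTOR, POST), announce(ACTOR, POST), reply(ACTOR, POST, "Nice!")):
    print(act["type"])
```

New activity types can be added by extension, which is why the interaction vocabulary can keep growing in a permissionless way.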


Delara Derakhshani: Absolutely. Earlier I set out a number of challenges and then didn’t actually explain any of the solutions we’re working on at DTI to address them, so I’ll do that now. The first thing I’ll say is that it’s pretty clear to all of us that these issues are bigger than any single company or entity. Federated ecosystems are by design distributed, but that distribution inherently comes with fragmentation; we’ve talked about this already. What DTI is doing is focusing on creating a shared governance infrastructure, not to centralize control, but to provide coordination mechanisms that align responsibilities across diverse players. We’ve done this in two ways. First, we’ve developed a trust model, informed by real world use cases and the input of a great many multistakeholder actors, on how to establish trust during data transfers. This issue goes beyond just the Fediverse; we see the parallels in the broader online environment. Someone referenced the Digital Markets Act, which mandates user-initiated data transfers for those seven designated gatekeepers we can all rattle off the top of our heads, but it’s silent about trust-building mechanisms and leaves that up to each gatekeeper. So, to reduce duplication and streamline onboarding across platforms, we’ve launched a trust registry, now in pilot, that allows developers to complete a verification process that doesn’t have to be duplicated and leads to harmonization and efficiency. There’s also growing recognition that trust infrastructure must scale down, not just up: we have to recognize that smaller players don’t always have the same resources.
And then the only other thing I’d like to point out is that I think it would be a good thing to improve portability. One way we’re doing that is through an initiative we’re calling Lola, which will help users migrate safely. We’re creating shared guardrails, and our work is about turning interoperability into a system that users can trust, because otherwise they won’t use it, and where community control and privacy travel together. And I think that’s a good place to stop: at the beginning of a conversation.


Mallory Knodel: Exactly, I like the idea that community control and privacy travel together. You’ll hear, and like many of you I already have so far, a lot of discussion about content moderation, social media issues, and so on. And I can’t help but think that right now, as human rights activists have done for many years, we have to advocate and lobby just a few companies, right? If you get a few of them on board with whatever the current issue is, you can solve the problem. In the world that we’re talking about and envisioning and hoping comes to pass, you will have thousands. So how can you possibly imagine governing or dealing with that complex ecosystem? I think it’s exactly what you’re saying, Delara: we can now create efficiencies and cross-platform institutions that have the trust you need and that can hook in and interoperate across the whole ecosystem. So it isn’t about one platform making a decision; it’s about how we have all made the decision and how it can easily replicate across.


Melinda Claybaugh: Exactly.


Ian Brown: Just for 30 seconds, could our technical support put my second image up on screen? I’m not going to talk through it; I just want to point it out if you’re interested. Let me also flick to the third. All I want to do is make you aware these exist in the independent report that Meta commissioned, which is on my website, ianbrown.tech, if you want to look at the detail. With interoperability, as XKCD and many other people have said, you need to get the details right, number one. Number two, it’s great that private firms like Meta and associations like DTI are already doing all this research to figure out, as Mallory said, what we can learn together in a shared space, and I’m very much looking forward to all your contributions for the rest of this workshop. I promise you, because I’ve done a lot of work for data protection authorities in Europe and for the European Commission as well: regulators don’t want to regulate. They don’t have the resources to regulate at this level of complexity and get it right very often. That’s what regulation is for, to give incentives to private firms to do a good job themselves, and as Chris puts it, co-regulation is great: the private firms and civil society do the detailed work, and the regulators are only there as backstops to make sure that the public interest as well as the private interest is genuinely represented. These two slides are too complicated; you don’t have to read them now. If you’re interested, download the report and look at the end. I’m just going to flick backwards and forwards, because they took me so long to do; I want you to enjoy the complexity of these two slides, because this is the level of complexity you have to go into when regulators do have to step in, and regulators don’t want to do this very often.


Mallory Knodel: It’s very instructive, actually; it’s impossible to read, which is maybe what you’re illustrating. Right, so as moderator I’m going to pat myself on the back: we have 51 whole minutes for the Q&A. We are not cutting this short, because I really, really do want to hear from the audience. I want your questions; we would love to respond. You can make your way to the mics if you’re sitting out there, and I’ll try to keep a queue. If you’re up here already, your mic is actually on, maybe I should have told you that, and you can just raise your hand and lean in to get yourself on the list. Yeah, please. If you could introduce yourself, that would be helpful.


Audience: Edmund Chung from .Asia. Very interesting panel; thank you for bringing up the topic. I have two or three questions, one on the social-political side and one on the technical side. First of all, I found the fireside chat with the celebrity Joseph very interesting, because he emphasized that your digital you belongs to you, and this is exactly what we want to talk about. So my first question, probably especially for Ian: besides privacy as one aspect, what about the ability to choose, a choice of curation of information, and how that relates to our democratic institutions and processes? Because that is a big topic in my mind. Does your research help with that? The second question relates to the deletion part, which is really interesting. Even email doesn’t resolve that perfectly yet, but something like time-based encryption might be useful. The issue here seems to be the forum or standards body that the different players and providers are willing to abide by. ActivityPub is, I think, with W3C. Is that going forward? Is W3C the right place? Is the IETF? Does it need a forum to allow interoperability to really happen?


Mallory Knodel: I mean, maybe I can answer in reverse.


Ian Brown: I was going to say you’re more of an expert on question 1.


Mallory Knodel: Yeah, I think it’s a good question. I think we’re still continuing to standardize extensions, which is exactly what you need to build on as this grows. So that still happens, and you’re right to point it out as an important piece. As far as how you ensure different platforms are implementing it correctly and doing things like delete on request, that’s something standards bodies have to do, and I think it’s important to address.


Ian Brown: And on that issue, I would say the IETF, the Internet Engineering Task Force, which I’m much more familiar with than W3C, certainly organizes interoperability testing, I forget what they call them, bake-offs maybe, making sure that software following IETF standards actually behaves the way the standards say, including for things like deletion. But technical testing only gets you so far. Imagine a platform that took the right approach to sharing its content with other platforms, but then one of those platforms went rogue and started allowing people on it to harass users. Well, Meta can block the harassment crossing over onto Threads, but it can’t enforce against a bad faith actor; technically, the data has still crossed, it’s on the other side. What could happen then, particularly if this was happening on a large scale, is that Meta and all the individuals affected could take legal action in the 160 countries, including the EU member states, that have GDPR-like privacy laws, to say: hey, bad faith actor, you’re clearly ignoring my clearly stated limits on the processing of my personal data. It’s often going to be sensitive personal data in GDPR terms, about your political opinions, your health, a range of other very sensitive topics. And then, if your data protection regulators are doing their job, or your courts are doing their job, they can legally go after the bad faith actors.


Delara Derakhshani: I’d love to quickly respond to something as well, if that’s okay. “Your digital you belongs to you”: I think that’s an incredibly powerful statement, especially as our lives are increasingly online and the personalization of, for example, AI systems increasingly drives them. This is the very heart of why portability matters and why you should have control of your data. And with regard to your deletion point, for whatever it’s worth, those conversations are constantly happening, both at the state level and increasingly at the federal level, and I’m looking forward to seeing where they go as well.


Audience: Just quickly, you need to attribute that to Joseph Gordon-Levitt, not me.


Delara Derakhshani: I’m so upset that I missed him. I didn’t know he was here.


Ian Brown: He was tremendous.


Mallory Knodel: Yeah, very focused on personal data. It’s really right up your street, Delara. But it’s recorded, no doubt; you can catch it. We have a question here. Anyone else at the table? Okay, we’ll come to you next. Go ahead.


Audience: Dominique Zelmercier, W3C. Just to quickly react on the interop question: the same way the IETF runs interop tests, W3C also has a very thorough interoperability testing process as part of our recommendation process, which could indeed include testing whether implementations implement deletion; whether the service actually does it is a separate question. Now a specific question to the panel. My experience from previous privacy interoperability work at W3C is that it’s really, really hard to define privacy to a level of granularity that matches the complexity of how people want their digital selves to be presented. What we’ve seen overall is that what gets through the standardization process is very simple signals, typically do or do not. In a world of social media, that feels way too coarse to actually express something users care about. So I don’t know how you see disentangling this problem of managing the complexity of the way people understand their social self against the need for interoperability in exchanging social media content, in particular in the context of portability or other types of interoperability.
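The "very simple signals, do or do not" problem can be illustrated with a small sketch. The field name below (`bridgingConsent`) is hypothetical, not from any W3C spec; the point is only that a boolean survives standardization while the richer preference behind it does not:

```python
# Sketch: a coarse standardized consent signal vs. what the user actually
# means. "bridgingConsent" is an invented field name for illustration.

def may_bridge(profile):
    """The kind of signal that survives standardization: opted in or not.
    Absence of the signal is treated as 'no'."""
    return bool(profile.get("bridgingConsent", False))

# What a user might actually mean is far more granular, and there is no
# standard slot to carry these nuances across services:
richer_preference = {
    "bridgingConsent": True,
    "caveats": ["only public posts", "auto-delete after 60 days", "no quote-posts"],
}

print(may_bridge(richer_preference))  # the bridge only sees "yes"
print(may_bridge({}))                 # no signal means no
```

The panel's answer, shared governance and common norms, is essentially about agreeing on what those richer fields should be so they mean the same thing everywhere.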


Mallory Knodel: Yeah, good question. Would you like to respond to it? Okay, after. Yeah, okay. Anyone else on the panel want to respond?


Ian Brown: I’m happy to, yeah.


Mallory Knodel: Go ahead.


Ian Brown: That’s a really great and important point. I think one response is to emphasize what Mallory and our other two speakers already said about the importance of shared governance and building these norms together collectively. I think the biggest challenge is that we’re seeing big platforms building ten different versions. With the best will in the world, and with the very large resources big platforms have, if they can work together, that’s going to be much better for their users than if they each define these things slightly differently, because that will confuse users. It’s going to lead to things happening that users don’t want, because they might have set a privacy control based on their understanding of how it works on one service, and it behaves differently on another.


Mallory Knodel: I keep thinking of this instance layer. In the old social media, you had platforms and users. Now we have users, instances, platforms, and multiple different iterations thereof. So the instance level can also be a place where default settings are set and actions are taken. My instance could choose to defederate from your instance if you’re not acting in good faith, if you’re not respecting what my users want, or simply if my users ask me to. There’s a good report that Samantha Lai and Yoel Roth at the Carnegie Endowment for International Peace put out about defederation and the politics of that, what it really looks like. I think that’s a big challenge.


Melinda Claybaugh: I think there’s a lot of work to do to figure out which experiences we can customize, which we think people will want to be as similar as possible across the federated universe, while at the same time preserving the uniqueness of each of the communities. You made a point about algorithms earlier: maybe you post something on Threads and it goes elsewhere and gets surfaced differently, not like what we were talking about earlier. In a way, that’s the beauty of it: different communities can set different norms and take different actions. But for real harms, I think most of us would agree those should be handled consistently across services. So it’s really this fine balance of what are the core things we want to make sure are protected and actioned across the Fediverse, and what can we leave to be variable and unique.


Mallory Knodel: Yeah, that’s great. Question down here. Go ahead and introduce yourself and please ask.


Audience: Hi, my name is Caspian. What are your visions for a privacy-preserving, interoperable internet in 10 years, and what steps should we take to ensure that it exists? Maybe the ideal in 10 years.


Mallory Knodel: I love the idea. We can think backwards to what 10 years ago was like, very proto versions of what we have now, and ask what’s next in 10 years. I think it’s a good question; thank you for asking it. Anybody have a response?


Ian Brown: I do, if the other two don’t want to go first. Okay. Walking in, I was reminded by the organizers’ wonderful display at the entrance of how Norway was a real pioneer in bringing the internet to Europe. I know that in particular because I was looking for a photo of my PhD supervisor, who was part of that process. He was in London, and he worked with the Norwegians in those photos to bring the internet over from the United States. I think there were only four sites in the original ARPANET, and it was very soon after that that Norway and my supervisor, Peter Kirstein, brought it over to Europe. That’s going back to the 70s, 50 years ago. I’m not even going to try to remember 10 years ago, because it all seems a blur, but thinking 10 years forward is a great horizon. I would say I’m very optimistic; things are happening really quickly, way before 10 years out. What’s happening with several of the projects we’ve already talked about today, and with Free Our Feeds, and I see Robin at the front there, is another great example of stuff that’s bringing this into reality much faster than I expected, as are new software tools. I’d recommend one just because I use it personally, and I have no financial interest, to be clear: Openvibe, a social media app that lets you look at your Mastodon, Bluesky, Threads and Nostr accounts all in one timeline. You can customize the timeline, and you can adopt different recommendation algorithms, as Bluesky also lets you do. So that point Melinda raised is a really important one: we don’t want to standardize everything. That’s often a criticism of interoperability, that you’re standardizing and therefore homogenizing everything, and absolutely we want to avoid that. We want to leave space for different services to compete and develop and do a better job for their users.
And that includes the type of communities they focus on, but we want to give users choice. A lot of people on Mastodon, for example, are very, very privacy-protective for very good reasons, and nothing requires them to open up beyond their own instance unless they choose to. And as Mallory said, defederation is the nuclear option if that’s not working properly. Things are moving so fast here, and I’ve done these kinds of futures exercises before for the European Commission, that I’m not sure what this will look like in 10 years. As a super nerd, I would say I hope social media looks like the internet at the IP layer: that configurable and flexible, providing a layer in the middle. And then all of the stuff we’ve talked about, the famous hourglass model going up to the extra two layers at the top, which the IETF t-shirt famously labels financial and political, all of that being coordinated by some of these organizations and other people in the room, including legal interoperability; my colleague Luca Belli from Fundação Getulio Vargas is in the front and has done a lot of work on that. You need all this stuff working together. I didn’t intend this to be a plea for the IGF to continue, but its mandate potentially may not be renewed, and I think this is an example of how the IGF can add a lot of value to these debates.
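The "one timeline over many networks" experience Ian describes with Openvibe can be sketched as a simple merge of per-service feeds with the ranking left pluggable. The `Post` shape and the sample data are invented for illustration, not Openvibe's actual design:

```python
# Sketch: merging per-network feeds (each already newest-first) into a
# single newest-first timeline, the basic move behind multi-network clients.
from dataclasses import dataclass
from heapq import merge

@dataclass
class Post:
    network: str   # "mastodon", "bluesky", "threads", "nostr", ...
    ts: int        # unix timestamp
    text: str

def merged_timeline(*feeds):
    """Merge feeds that are each sorted newest-first into one newest-first
    timeline. heapq.merge needs ascending keys, so we negate the timestamp."""
    return list(merge(*feeds, key=lambda p: -p.ts))

mastodon = [Post("mastodon", 300, "hello fedi"), Post("mastodon", 100, "old post")]
bluesky = [Post("bluesky", 200, "hello atmosphere")]

timeline = merged_timeline(mastodon, bluesky)
for post in timeline:
    print(post.network, post.ts)
```

A client could swap the chronological key for any user-chosen ranking function, which is the "adopt different recommendation algorithms" point: the merge layer stays the same while the ordering is configurable.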


Mallory Knodel: Well, you helped me complete my bingo card. No, that’s great. And Chris, you also had a question I want to get in. We still have 10 minutes left, and I see a couple more questions, so we should be able to take those as well. Go ahead.


Chris Riley: That’s the danger of putting the prof in front of a microphone he didn’t know was already on. So thank you for that, and fantastic panel; thank you for organizing it as well. I think that when you try to predict forward 10 years, you also have to look back. Ian and I worked on the Towards a Future Internet project 16 or so years ago, and I remember teaching about interoperability to Melinda’s colleague, Marcus Reinisch, back at Warwick in the last century, the last millennium. One of the really important lessons we learned back then, and to mention somebody who probably hasn’t been mentioned for a few IGFs, comes from Lawrence Lessig, who talked about machine-readable code, lawyer-readable code, and human-readable code. I had a great conversation last night; one of the reasons the IGF should continue is that you can have great conversations with government officials, engineers, lawyers, and policymakers about these things. One of the fascinating ones was with engineers who had started studying law and said: wow, the legal elements of this are really quite straightforward compared with what we do; the DMA is very straightforward compared to what we do on standards committees; we don’t find this difficult at all. But the difficulty, of course, is that users simply go with defaults. So one thing I would hope would happen in 10 years’ time, going to Ian’s straw poll of the audience, is that people would find some form of Fediverse they are comfortable using, where the interoperability is invisible to the user. In Regulating Code, Ian and I cited the wonderful Paul Ohm article, The Myth of the Superuser, on the danger of giving people not too many choices, but too many defaults which they find difficult to actually turn into something privacy-preserving.
So I guess my question is for everybody. The last year has been a great experiment in people quiet-quitting Mastodon: having noisily quit X, they've quietly quit Mastodon and moved to Bluesky and a couple of other things (and thank you, Ian, for turning me on to Openvibe and teaching me how I could quiet-quit in a much more usable way). How do people see the user out there who doesn't have machine-readable or lawyer-readable abilities to understand these things? How do we make it easier for them to avoid the same problems which occurred? We're in the middle of a natural experiment, right, where X has become something almost everybody wants to avoid, except seemingly government official pronouncements, which is a very weird thing. So: making it more usable for the user, so that in 10 years' time users aren't stuck in this position of having to do more research than they ever want to do. And if I could just mention one thing: I'm now based in Melbourne, Australia, and the Australian Competition and Consumer Commission has just issued its final digital platforms report, its 10th over six years, so that has some ideas in it as well, I hope.


Mallory Knodel: Yeah, good question. For me, I would answer both these questions with: I don't want to have to sign up for another service ever again, I just want to be able to follow all the new people that that service has brought to the internet. But in the interest of time, may I suggest that we take, I think, only three of these questions, because we have five minutes, if they're quick, and then we'll see if Delara, Melinda and Ian have closing remarks, if that works.


Ian Brown: Can you do four, because there are only four people at the moment.


Mallory Knodel: Okay, let’s do, if you can all promise to make it really succinct, like we think we can manage four questions, three responses in five minutes, that math totally works out.


Mallory Knodel: Yeah, go ahead.


Audience: So hello, I'm Winston Xu, and I'm from One Power Foundation in Hong Kong, and I want to ask a question: many users of the internet are children. So is there anything that youth can do for the interoperability and fairness of the internet?


Mallory Knodel: Awesome question. Thank you, Winston. Go ahead.


Audience: I wanted to ask a bit of an open-ended question about the cultures of each platform, and how taking things into a different platform may change the context and remove maybe some of the culture from the first platform. I'm thinking of things like link sharing and screenshot sharing, and how that can very easily spiral out of control. So, how can that be done ethically in an interoperable system?


Mallory Knodel: That’s great. I like that one, too. Thank you for bringing it up. And back over to this side.


Audience: Hello, my name is Gabriel. So, the Fediverse being decentralized, you have to deal with lots of different instances, probably hundreds. How do you ensure trust between them? I read a while ago that it was found that, for example, direct messages on Mastodon could be read if someone was misusing the protocol. So how do you ensure that other instances you interface with can be trusted?


Mallory Knodel: Thank you for bringing up direct messages. That's a good one, especially for a privacy panel. Awesome. And then our last question, please.


Audience: Good afternoon. Thank you. Perhaps as much a reflection as a question: what takes the foot off the toes of the private sector that, hopefully, now that we talk about interoperability, can be shared? Resilience is about sharing learnings of what went wrong or what went right. What will you share with the public sector? For instance, in public services: enhancing services for portable health, or for the municipalities, for the social welfare of children or families. What can be shared, both the good and the bad? I'm not focusing only on standards, because otherwise we are waiting on standards only.


Mallory Knodel: Thank you. Thanks so much. Let's go in this order. Ian, if you want to start, just respond to any of that that you can, and then we'll work our way down.


Ian Brown: Thank you. Incredibly briefly, and let's keep the conversation going online afterwards. Let me pick out two: the great one from my right, your left, and the last question. Mentioning uncontrolled sharing of screenshots is a great way to think about how ActivityPub and the AT Protocol, Bluesky's protocol, and other protocols like them can help do controlled sharing. So: opt-in as the starting point; protocol features that make sure users' wishes, as far as good-faith actors interpret them technically, are followed; and legal backstops for the bad-faith actors that choose not to. Uncontrolled sharing via screenshots is never going to go away, because you can't stop people screenshotting. That was the dream of the digital rights management people in the 90s and early 2000s, and from a technical perspective, I had a start-up doing it. I promise you, it doesn't work. That's a bad idea. Mallory's way is the way. Mallory and friends is the way. And on the very last question, a very simple point, which actually picks up what Chris said as well: one thing governments can do is, for goodness' sake, share all your information on multiple platforms, not only on X. Don't have politicians rail on the one hand about X and then force all your citizens to join X if they want to get information from the government.


Mallory Knodel: Hey, maybe governments should have instances where they’re the source of truth that’s verified.


Melinda Claybaugh: I think what we see is that we're at the very beginning of this journey, so I would just encourage you: Meta is learning as we go. Just last week we announced a feed in Threads that pulls in posts from wherever else you're connected to. That was in response to users who wanted it. So keep pushing for what you want in this Fediverse, and that will help drive the conversation.


Delara Derakhshani: Yeah, and I'll finish by saying that a lot of DTI's work is focused on this, and I would really encourage you all to reach out to continue these conversations. The issue of trust during transfers: if a user wants to move their account or content from one service to another, how do we know that the destination service will respect expectations, or that the transfer was actually authorized? How do we confirm identity and integrity, and make sure that the message was not lost or maliciously changed? Crucially, these are things that we're working on, and I really hope that you all reach out for further conversations.


Mallory Knodel: Thanks so much for coming. Thank you for your questions. Thank you to the panelists who spent their time thinking about these issues in depth at the Social Web Foundation and with all of you, right? We are actually really trying to have these conversations together. Like Melinda said, this is only the beginning, so hopefully we can stay engaged. Hopefully we can share learnings and make this space more robust and better. Great.


D

Delara Derakhshani

Speech speed

153 words per minute

Speech length

1314 words

Speech time

513 seconds

User agency requires knowing what’s being shared, when, with whom, and why – interoperability shouldn’t mean default exposure

Explanation

Derakhshani argues that successful interoperable systems must prioritize user agency by ensuring users have full knowledge and control over their data sharing. She emphasizes that interoperability should not automatically expose user data without explicit consent and understanding.


Evidence

She mentions technical design issues like real-time post visibility across services, activity portability, and social graph portability as examples of areas requiring user awareness and control.


Major discussion point

Privacy and User Control in Interoperable Systems


Topics

Human rights | Legal and regulatory


Agreed with

– Melinda Claybaugh
– Ian Brown

Agreed on

User agency and control over data sharing


Privacy-respecting interoperability demands ongoing collaboration with standards bodies and shared governance

Explanation

Derakhshani contends that achieving privacy-respecting interoperability requires continuous engagement with standards organizations and a commitment to collaborative governance structures. She views this as a cultural and procedural necessity rather than just a technical challenge.


Evidence

She references working in the open, engaging with standard bodies, and maintaining secure dialogue that reaches public IoT implementations as examples of necessary collaborative practices.


Major discussion point

Privacy and User Control in Interoperable Systems


Topics

Infrastructure | Legal and regulatory


Agreed with

– Melinda Claybaugh
– Chris Riley

Agreed on

Need for user education and clear communication


Disagreed with

– Melinda Claybaugh

Disagreed on

Implementation approach for privacy rights in federated systems


Federated ecosystems require coordination mechanisms that align responsibilities across diverse players without centralizing control

Explanation

Derakhshani argues that distributed systems need governance infrastructure that provides coordination while maintaining decentralization. She emphasizes that the goal is not to centralize control but to create mechanisms for alignment across different actors in the ecosystem.


Evidence

She cites DTI’s development of a trust model informed by real-world use cases and input from multistakeholder actors, and mentions the Digital Markets Act’s silence on trust-building mechanisms as an example of the need for such coordination.


Major discussion point

Governance and Trust Infrastructure


Topics

Legal and regulatory | Infrastructure


Agreed with

– Melinda Claybaugh
– Ian Brown

Agreed on

Technical challenges require collaborative solutions


Trust registries and verification processes can reduce duplication and streamline onboarding across platforms

Explanation

Derakhshani proposes that shared trust infrastructure can create efficiencies by allowing developers to complete verification processes once rather than repeatedly for each platform. She argues this approach leads to harmonization and efficiency while scaling to accommodate smaller platforms with limited resources.


Evidence

She mentions DTI’s launch of a trust registry pilot and their initiative called Lola designed to help users migrate safely as concrete examples of trust infrastructure implementation.
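The efficiency argument behind a shared registry is "verify once, trust everywhere": a developer completes verification a single time, and every participating platform consults the same record instead of repeating its own onboarding. This is a minimal illustrative sketch of that idea only; `TrustRegistry` and the developer handle are hypothetical names, not DTI's actual registry design.

```python
# Hypothetical sketch of a shared trust registry: one verification,
# consulted by many platforms. Not DTI's actual implementation.

class TrustRegistry:
    def __init__(self) -> None:
        self._verified: set[str] = set()

    def verify(self, developer: str) -> None:
        """Record a one-time verification for a developer."""
        self._verified.add(developer)

    def is_trusted(self, developer: str) -> bool:
        """Any participating platform can reuse this check,
        instead of running its own onboarding from scratch."""
        return developer in self._verified

registry = TrustRegistry()
registry.verify("app.example")

# Two different platforms consult the same registry entry.
print(registry.is_trusted("app.example"))      # True
print(registry.is_trusted("unknown.example"))  # False
```

The point of the shared store is exactly the duplication Derakhshani describes: without it, each platform would maintain its own `_verified` set and the developer would repeat the process per platform.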


Major discussion point

Governance and Trust Infrastructure


Topics

Infrastructure | Legal and regulatory


Migration between platforms involves not just data transfer but joining new communities with different safety considerations

Explanation

Derakhshani argues that user migration in federated systems is more complex than simple data portability, as it involves users entering new community contexts with different norms and safety standards. She emphasizes that for marginalized groups especially, the safety of the destination platform is as important as the ability to transfer content.


Evidence

She references repeated feedback from DTI’s work that marginalized groups particularly value destination safety, and mentions that trust-building tools must address not just privacy but also moderation, consent, and survivability.


Major discussion point

User Experience and Education


Topics

Human rights | Sociocultural


M

Melinda Claybaugh

Speech speed

179 words per minute

Speech length

1052 words

Speech time

350 seconds

Users should have rights to access, correct, transfer, and delete their data, but implementation in decentralized systems is challenging

Explanation

Claybaugh acknowledges that fundamental data subject rights enshrined in GDPR and similar laws are important and expected by users, but argues that implementing these rights in federated systems presents unique challenges. She points out that while deletion works well within single platforms, ensuring deletion across federated networks is technically difficult.


Evidence

She provides the example that when a user deletes a post on Threads, Meta sends the deletion request along the chain to other platforms like Mastodon, but cannot guarantee the post is actually deleted downstream.
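The propagation gap in that example can be pictured in a few lines: the origin server constructs a Delete activity (ActivityPub replaces the deleted object with a Tombstone) and sends it to each remote inbox, but it can only issue the request; it cannot verify that the downstream copy is gone. The function names and URLs below are illustrative, not Threads' or Mastodon's actual code.

```python
# Illustrative sketch of deletion fan-out in an ActivityPub-style
# federation. build_delete/fan_out and the URLs are hypothetical.

def build_delete(actor: str, post_id: str) -> dict:
    """Construct a Delete activity; the object becomes a Tombstone."""
    return {
        "@context": "https://www.w3.org/ns/activitystreams",
        "type": "Delete",
        "actor": actor,
        "object": {"type": "Tombstone", "id": post_id},
    }

def fan_out(activity: dict, follower_inboxes: list[str]) -> dict:
    """'Deliver' the activity to each remote inbox. The origin server
    can send the request, but cannot guarantee the remote server
    actually removes its copy."""
    return {inbox: activity for inbox in follower_inboxes}

delete = build_delete("https://threads.example/users/alice",
                      "https://threads.example/posts/123")
sent = fan_out(delete, ["https://mastodon.example/inbox",
                        "https://other.example/inbox"])
print(len(sent))  # 2: inboxes the request reached, not copies deleted
```

The asymmetry in the last comment is the whole compliance problem Claybaugh raises: "sent to N inboxes" is verifiable, "deleted on N servers" is not.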


Major discussion point

Regulatory Framework and Compliance


Topics

Human rights | Legal and regulatory


Agreed with

– Delara Derakhshani
– Ian Brown

Agreed on

User agency and control over data sharing


Disagreed with

– Ian Brown

Disagreed on

Scope and complexity of interoperability requirements


Users need clear understanding of what it means to post content that may be shared across multiple services

Explanation

Claybaugh argues that user education is critical as federated social media proliferates, particularly for average users who may not understand the technical implications of posting content that can flow across multiple platforms. She emphasizes the need to meet users where they are and tailor explanations to different user types.


Evidence

She describes Threads’ onboarding process that uses pictures to explain what the Fediverse is and what happens when users delete content, distinguishing between power users, tech users, and average users.


Major discussion point

User Experience and Education


Topics

Human rights | Sociocultural


Agreed with

– Delara Derakhshani
– Chris Riley

Agreed on

Need for user education and clear communication


Different user types (power users vs. average users) require different levels of explanation and control options

Explanation

Claybaugh contends that federated platforms must recognize and accommodate different user sophistication levels, from technical power users to casual users just exploring new online spaces. She argues that user experience design must account for these varying needs and expectations.


Evidence

She references the need to distinguish between power users, tech users, and average users who may be ‘poking around and trying out new things online’ when designing user education and interface elements.


Major discussion point

User Experience and Education


Topics

Sociocultural | Human rights


Existing data protection regimes need fresh implementation approaches for decentralized systems

Explanation

Claybaugh argues that while fundamental privacy rights should remain intact, the implementation of these rights needs to be reconsidered for federated environments. She calls for collaboration between industry, regulators, and users to develop new norms and expectations for how privacy rights work in decentralized contexts.


Evidence

She points to the challenge of ensuring deletion requests propagate through federated networks and the need for industry norms around implementation as examples of where fresh approaches are needed.


Major discussion point

Regulatory Framework and Compliance


Topics

Legal and regulatory | Human rights


Agreed with

– Delara Derakhshani
– Ian Brown

Agreed on

Technical challenges require collaborative solutions


Disagreed with

– Delara Derakhshani

Disagreed on

Implementation approach for privacy rights in federated systems


I

Ian Brown

Speech speed

189 words per minute

Speech length

3618 words

Speech time

1147 seconds

Technical design must enable user mistakes and experimentation while protecting privacy, such as auto-deletion features

Explanation

Brown argues that privacy protection should enable users to try new things and make mistakes without permanent consequences. He emphasizes that auto-deletion features and similar privacy tools are essential for creating conversational spaces rather than permanent records, particularly important for learning and development.


Evidence

He provides his personal example of setting Mastodon posts to auto-delete after two months, explaining that he wants social media to be conversational rather than a ‘medium of record’ and references concerns about permanent records affecting employment, education, and government interactions.


Major discussion point

Privacy and User Control in Interoperable Systems


Topics

Human rights | Sociocultural


Disagreed with

– Audience

Disagreed on

Balance between user control and system complexity


Bridges between platforms require opt-in approaches to respect user consent and prevent unwanted exposure

Explanation

Brown advocates for opt-in mechanisms when connecting different federated platforms, arguing that users should explicitly choose to allow cross-platform interaction rather than having it enabled by default. He emphasizes this is particularly important to prevent harassment and unwanted exposure to hostile communities.


Evidence

He describes Bridgy Fed as an example of proper opt-in bridge implementation and explains why many users on Bluesky and Mastodon would not want people on X to interact with them, due to harassment concerns.
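The opt-in rule described here reduces to a single policy check performed before a bridge mirrors anything: no explicit opt-in signal, no cross-protocol exposure. The sketch below is illustrative only; `should_bridge` and the handles are hypothetical, not the bridge's real API.

```python
# Hedged sketch of an opt-in bridging policy: an account is mirrored
# across protocols only after it has explicitly opted in. Illustrative
# names; not Bridgy Fed's actual implementation.

def should_bridge(account: str, opted_in: set[str]) -> bool:
    """Mirror the account only if it has explicitly opted in;
    the safe default is no cross-platform exposure."""
    return account in opted_in

opted_in = {"@alice@mastodon.example"}

print(should_bridge("@alice@mastodon.example", opted_in))  # True
print(should_bridge("@bob@mastodon.example", opted_in))    # False
```

The design choice is the direction of the default: opt-out bridging would expose every account unless it acted, which is exactly the "default exposure" the panel argues against.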


Major discussion point

Technical Challenges of Federated Systems


Topics

Human rights | Infrastructure


Agreed with

– Delara Derakhshani
– Melinda Claybaugh

Agreed on

User agency and control over data sharing


Different platforms have varying capabilities for features like auto-deletion, creating inconsistent user experiences

Explanation

Brown identifies a technical challenge where user preferences set on one platform may not be recognized or implemented by other platforms in a federated network. He argues this creates inconsistencies that could undermine user privacy expectations.


Evidence

He provides the specific example that while he sets his Mastodon posts to auto-delete after two months, Bluesky doesn't currently have auto-deletion features, and Bridgy Fed doesn't propagate his deletion preferences across platforms.
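A retention sweep like the two-month auto-deletion Brown describes could be approximated as a periodic job that selects posts older than the window and issues deletes for them. This is a hedged sketch under that assumption; `posts_to_delete` and the 60-day default are hypothetical, and, as the argument notes, any deletes it issues still may not reach bridged copies on other platforms.

```python
# Illustrative retention sweep: find posts older than ~two months.
# Hypothetical helper, not Mastodon's actual auto-delete feature.
from datetime import datetime, timedelta, timezone

def posts_to_delete(posts, max_age_days=60, now=None):
    """Return IDs of posts older than the retention window.
    A real job would then issue a delete per ID; propagation to
    other platforms is a separate, unguaranteed step."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=max_age_days)
    return [p["id"] for p in posts if p["created_at"] < cutoff]

now = datetime(2025, 6, 1, tzinfo=timezone.utc)
posts = [
    {"id": "old", "created_at": now - timedelta(days=90)},
    {"id": "new", "created_at": now - timedelta(days=5)},
]
print(posts_to_delete(posts, now=now))  # ['old']
```

The inconsistency Brown flags sits outside this function: the sweep runs on one platform, while mirrored copies elsewhere keep their own retention rules.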


Major discussion point

Technical Challenges of Federated Systems


Topics

Infrastructure | Human rights


Agreed with

– Delara Derakhshani
– Melinda Claybaugh

Agreed on

Technical challenges require collaborative solutions


The Digital Markets Act includes messaging interoperability requirements for gatekeepers, with potential extension to social media services under review

Explanation

Brown explains that the DMA currently mandates interoperability for messaging services from the largest tech companies but not for social media, though this could change. He notes that civil society nearly succeeded in including social media interoperability requirements and that the law requires review every three years.


Evidence

He mentions that the DMA applies to seven gatekeeper companies and specifically references Article 7’s messaging interoperability obligations, noting that the European Commission must review extending these to social media services next year.


Major discussion point

Regulatory Framework and Compliance


Topics

Legal and regulatory | Economic


Disagreed with

– Melinda Claybaugh

Disagreed on

Scope and complexity of interoperability requirements


GDPR and similar privacy laws provide legal backstops against bad faith actors who ignore user data preferences

Explanation

Brown argues that existing privacy regulations in 160 countries provide legal mechanisms to address situations where federated platforms or bad actors ignore user privacy preferences. He contends that legal action can be taken when technical solutions fail to protect user rights.


Evidence

He references GDPR Article 22 requirements for organizations to make best efforts to inform other organizations when users withdraw consent, and mentions that legal action can be taken in courts and with data protection regulators against bad faith actors.


Major discussion point

Regulatory Framework and Compliance


Topics

Legal and regulatory | Human rights


Traditional competition law is evolving to require dominant firms to enable interoperability

Explanation

Brown argues that competition law is rapidly moving toward requiring interoperability from dominant companies, beyond specific regulatory frameworks like the DMA. He suggests this trend will have global implications as courts interpret dominance as requiring interoperability enablement.


Evidence

He cites the European Court of Justice decision in Android Auto, which he says ‘basically says dominant firms must enable interoperability for their services full stop’ or face abuse of dominance charges.


Major discussion point

Regulatory Framework and Compliance


Topics

Legal and regulatory | Economic


The vision includes users never having to sign up for new services while still being able to follow people across platforms

Explanation

Brown envisions a future where interoperability eliminates the need for users to create new accounts on different platforms while still enabling them to connect with people and content across the federated ecosystem. He sees this as the ultimate goal of interoperability efforts.


Major discussion point

Future Vision and Implementation


Topics

Infrastructure | Sociocultural


Social media should become as configurable and flexible as the internet at the IP layer

Explanation

Brown advocates for social media systems to achieve the same level of configurability and flexibility as the foundational internet protocols. He envisions social media as providing a middle layer that enables diverse applications and services while maintaining interoperability.


Evidence

He references the ‘famous hourglass model’ and mentions the IETF t-shirt that says ‘financial and political’ to illustrate how different layers of the internet stack serve different functions.


Major discussion point

Future Vision and Implementation


Topics

Infrastructure | Economic


Governments should share information across multiple platforms rather than forcing citizens to use single platforms

Explanation

Brown argues that governments should not require citizens to join specific social media platforms to access government information. He advocates for multi-platform government communication strategies that respect citizen choice and platform diversity.


Evidence

He criticizes the practice of politicians who ‘rail on the one hand about X and then force all your citizens to join X if they want to get information from the government.’


Major discussion point

Future Vision and Implementation


Topics

Legal and regulatory | Human rights


The focus should be on building tools that allow controlled sharing while preventing uncontrolled screenshot-based sharing

Explanation

Brown argues that technical protocols should focus on enabling controlled sharing mechanisms rather than trying to prevent uncontrolled sharing like screenshots, which he considers technically impossible. He advocates for protocol features that ensure good faith actors follow user preferences and legal mechanisms for bad faith actors.


Evidence

He mentions having a startup that attempted to prevent screenshotting and states ‘I promise you, it doesn’t work’ while advocating for opt-in protocol features and legal backstops instead.


Major discussion point

Future Vision and Implementation


Topics

Infrastructure | Human rights


M

Mallory Knodel

Speech speed

190 words per minute

Speech length

2960 words

Speech time

934 seconds

When users delete posts on one platform, deletion requests may not propagate effectively across all federated instances

Explanation

Knodel highlights a fundamental technical challenge in federated systems where user actions like deletion on one platform may not be properly executed across all connected platforms. This creates potential privacy and user control issues in decentralized environments.


Evidence

She specifically mentions ActivityPub as an example where deleting a post doesn’t necessarily mean it’s deleted everywhere in the federation.


Major discussion point

Technical Challenges of Federated Systems


Topics

Infrastructure | Human rights


Instance-level governance provides a middle layer between platforms and users for setting defaults and managing relationships

Explanation

Knodel argues that the instance layer in federated systems creates new governance opportunities that didn’t exist in traditional social media. She sees instances as entities that can set default privacy settings, make federation decisions, and act on behalf of their users’ interests.


Evidence

She describes how instances can choose to defederate from other instances based on user preferences or bad-faith behavior, referencing a report by Samantha Lai and Yoel Roth at the Carnegie Endowment about the politics of defederation.
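Mechanically, instance-level defederation amounts to a domain blocklist consulted before any inbound activity is accepted: if the sending actor's domain is blocked, the activity never reaches local users. The sketch below is illustrative of that idea only, not Mastodon's actual implementation.

```python
# Illustrative defederation check: reject inbound activities from
# blocked domains before delivery. Hypothetical names and domains.
from urllib.parse import urlparse

BLOCKED_DOMAINS = {"spam.example", "badfaith.example"}

def accept_activity(actor_uri: str, blocked=BLOCKED_DOMAINS) -> bool:
    """An instance admin's blocklist acts on behalf of all local
    users: one decision severs federation with the whole domain."""
    domain = urlparse(actor_uri).hostname
    return domain not in blocked

print(accept_activity("https://friendly.example/users/carol"))  # True
print(accept_activity("https://spam.example/users/mallet"))     # False
```

This is what makes the instance a "middle layer" of governance: the filter runs at the server, so individual users inherit its defaults without configuring anything themselves.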


Major discussion point

Governance and Trust Infrastructure


Topics

Legal and regulatory | Sociocultural


Defederation serves as a mechanism for instances to protect their users from bad actors

Explanation

Knodel presents defederation as a protective mechanism that allows instance administrators to disconnect from other instances that don’t respect user preferences or engage in bad faith behavior. She frames this as a form of distributed governance and user protection.


Evidence

She references research by Samantha Lai and Yoel Roth at the Carnegie Endowment for International Peace about defederation and its politics as supporting evidence for this governance mechanism.


Major discussion point

Governance and Trust Infrastructure


Topics

Legal and regulatory | Human rights


A

Audience

Speech speed

127 words per minute

Speech length

794 words

Speech time

373 seconds

Direct messages and encrypted content present particular security vulnerabilities in federated environments

Explanation

An audience member raised concerns about security vulnerabilities in federated systems, specifically noting that direct messages through platforms like Mastodon could potentially be read by those misusing the protocol. This highlights the challenge of maintaining privacy and security across distributed systems with varying levels of trustworthiness.


Evidence

The audience member mentioned reading about direct messages through Mastodon being readable when someone was misusing the protocol.


Major discussion point

Technical Challenges of Federated Systems


Topics

Cybersecurity | Human rights


The complexity of privacy controls often results in overly simple ‘do or do not’ options that don’t match user needs

Explanation

An audience member from W3C argued that standardization processes tend to produce overly simplified privacy controls that don’t capture the complexity of how people want to present their digital selves. They contend that binary privacy options are insufficient for the nuanced ways people understand their social identity online.


Evidence

The speaker referenced their experience from previous privacy interoperability work in W3C, noting that what gets through standardization processes are typically very simple signals that feel ‘way too coarse’ for social media contexts.


Major discussion point

User Experience and Education


Topics

Infrastructure | Human rights


Disagreed with

– Ian Brown

Disagreed on

Balance between user control and system complexity


W3C and IETF provide forums for developing and testing interoperability standards, including verification of proper implementation

Explanation

An audience member clarified that standards organizations like W3C have thorough interoperability testing processes as part of their recommendation development, similar to IETF’s approach. They emphasized that these organizations can test whether implementations properly handle features like deletion, though whether services actually implement these features is a separate question.


Evidence

The speaker mentioned W3C’s ‘very thorough interoperability testing process as part of our recommendation process’ and referenced IETF’s interoperability testing practices.


Major discussion point

Standards and Interoperability Testing


Topics

Infrastructure | Legal and regulatory


C

Chris Riley

Speech speed

194 words per minute

Speech length

545 words

Speech time

167 seconds

The goal should be making interoperability invisible to users while preserving choice and community uniqueness

Explanation

Riley argues that successful interoperability should be transparent to end users, allowing them to benefit from cross-platform connectivity without having to understand the technical complexity. He emphasizes that users typically accept defaults and shouldn’t be burdened with too many complex choices while still maintaining the diversity that makes different platforms valuable.


Evidence

He references Paul Ohm’s article ‘the myth of the superuser’ about the danger of giving people too many defaults, and mentions the natural experiment of users moving from X to Mastodon to BlueSky as evidence of user behavior patterns.


Major discussion point

Standards and Interoperability Testing


Topics

Sociocultural | Infrastructure


Agreed with

– Delara Derakhshani
– Melinda Claybaugh

Agreed on

Need for user education and clear communication


Agreements

Agreement points

User agency and control over data sharing

Speakers

– Delara Derakhshani
– Melinda Claybaugh
– Ian Brown

Arguments

User agency requires knowing what’s being shared, when, with whom, and why – interoperability shouldn’t mean default exposure


Users should have rights to access, correct, transfer, and delete their data, but implementation in decentralized systems is challenging


Bridges between platforms require opt-in approaches to respect user consent and prevent unwanted exposure


Summary

All speakers agree that users must have meaningful control over their data and how it’s shared across federated systems, with opt-in mechanisms being preferred over default exposure


Topics

Human rights | Infrastructure


Need for user education and clear communication

Speakers

– Delara Derakhshani
– Melinda Claybaugh
– Chris Riley

Arguments

Privacy-respecting interoperability demands ongoing collaboration with standards bodies and shared governance


Users need clear understanding of what it means to post content that may be shared across multiple services


The goal should be making interoperability invisible to users while preserving choice and community uniqueness


Summary

Speakers consensus that users need better education about federated systems while the complexity should be hidden through good design and clear communication


Topics

Sociocultural | Human rights


Technical challenges require collaborative solutions

Speakers

– Delara Derakhshani
– Melinda Claybaugh
– Ian Brown

Arguments

Federated ecosystems require coordination mechanisms that align responsibilities across diverse players without centralizing control


Existing data protection regimes need fresh implementation approaches for decentralized systems


Different platforms have varying capabilities for features like auto-deletion, creating inconsistent user experiences


Summary

All speakers acknowledge that technical challenges in federated systems require new collaborative approaches and cannot be solved by individual platforms alone


Topics

Infrastructure | Legal and regulatory


Similar viewpoints

Both speakers see the need for institutional mechanisms (trust registries and legal frameworks) to create accountability and efficiency in federated systems

Speakers

– Delara Derakhshani
– Ian Brown

Arguments

Trust registries and verification processes can reduce duplication and streamline onboarding across platforms


GDPR and similar privacy laws provide legal backstops against bad faith actors who ignore user data preferences


Topics

Legal and regulatory | Infrastructure


Both speakers emphasize the importance of designing systems that accommodate different user sophistication levels and allow for experimentation without permanent consequences

Speakers

– Melinda Claybaugh
– Ian Brown

Arguments

Different user types (power users vs. average users) require different levels of explanation and control options


Technical design must enable user mistakes and experimentation while protecting privacy, such as auto-deletion features


Topics

Human rights | Sociocultural


Both speakers recognize that federated systems involve complex community dynamics and governance structures beyond simple technical interoperability

Speakers

– Delara Derakhshani
– Mallory Knodel

Arguments

Migration between platforms involves not just data transfer but joining new communities with different safety considerations


Instance-level governance provides a middle layer between platforms and users for setting defaults and managing relationships


Topics

Sociocultural | Legal and regulatory


Unexpected consensus

Legal frameworks as enablers rather than barriers

Speakers

– Ian Brown
– Melinda Claybaugh
– Delara Derakhshani

Arguments

GDPR and similar privacy laws provide legal backstops against bad faith actors who ignore user data preferences


Existing data protection regimes need fresh implementation approaches for decentralized systems


Trust registries and verification processes can reduce duplication and streamline onboarding across platforms


Explanation

Despite representing different perspectives (academic, industry, and advocacy), all speakers view existing and emerging legal frameworks as supportive of interoperability goals rather than obstacles, which is unexpected given typical industry-regulation tensions


Topics

Legal and regulatory | Human rights


Complexity should be hidden from users

Speakers

– Chris Riley
– Melinda Claybaugh
– Ian Brown

Arguments

The goal should be making interoperability invisible to users while preserving choice and community uniqueness


Different user types (power users vs. average users) require different levels of explanation and control options


The vision includes users never having to sign up for new services while still being able to follow people across platforms


Explanation

There’s unexpected consensus that despite the technical complexity of federated systems, the goal should be to make interoperability transparent to users rather than educating them about technical details, which contrasts with typical tech community emphasis on user understanding


Topics

Sociocultural | Infrastructure


Overall assessment

Summary

Strong consensus exists around user agency, the need for collaborative governance, and making complex systems user-friendly. Speakers agree on fundamental principles while acknowledging implementation challenges.


Consensus level

High level of consensus on principles with constructive disagreement on implementation details. This suggests a mature field where stakeholders share common goals but are working through practical challenges, which bodes well for collaborative solutions in federated social media development.


Differences

Different viewpoints

Implementation approach for privacy rights in federated systems

Speakers

– Melinda Claybaugh
– Delara Derakhshani

Arguments

Existing data protection regimes need fresh implementation approaches for decentralized systems


Privacy-respecting interoperability demands ongoing collaboration with standards bodies and shared governance


Summary

Claybaugh focuses on adapting existing legal frameworks like GDPR for federated environments, while Derakhshani emphasizes building new collaborative governance structures and standards-based approaches from the ground up.


Topics

Legal and regulatory | Human rights


Scope and complexity of interoperability requirements

Speakers

– Ian Brown
– Melinda Claybaugh

Arguments

The Digital Markets Act includes messaging interoperability requirements for gatekeepers, with potential extension to social media services under review


Users should have rights to access, correct, transfer, and delete their data, but implementation in decentralized systems is challenging


Summary

Brown advocates for expanding regulatory interoperability requirements to social media platforms, while Claybaugh emphasizes the technical challenges and complexity of implementing such requirements, particularly questioning what interoperability should encompass beyond basic messaging.


Topics

Legal and regulatory | Infrastructure


Balance between user control and system complexity

Speakers

– Audience
– Ian Brown

Arguments

The complexity of privacy controls often results in overly simple ‘do or do not’ options that don’t match user needs


Technical design must enable user mistakes and experimentation while protecting privacy, such as auto-deletion features


Summary

The audience member argues that current privacy controls are too simplistic for complex social interactions, while Brown advocates for features like auto-deletion that prioritize user experimentation over granular control.


Topics

Human rights | Infrastructure


Unexpected differences

Role of government communication in federated systems

Speakers

– Ian Brown

Arguments

Governments should share information across multiple platforms rather than forcing citizens to use single platforms


Explanation

This was an unexpected policy recommendation that emerged during the discussion, with Brown advocating for multi-platform government communication strategies. No other speakers directly addressed this issue, making it a unique position that wasn’t debated but represents a significant policy implication.


Topics

Legal and regulatory | Human rights


Technical impossibility of preventing screenshot sharing

Speakers

– Ian Brown

Arguments

The focus should be on building tools that allow controlled sharing while preventing uncontrolled screenshot-based sharing


Explanation

Brown’s assertion that preventing screenshot sharing is technically impossible and that efforts should focus on controlled sharing mechanisms was unexpected and not challenged by other speakers, despite its significant implications for privacy protection strategies.


Topics

Infrastructure | Human rights


Overall assessment

Summary

The main areas of disagreement centered on implementation approaches for privacy rights in federated systems, the appropriate scope of regulatory interoperability requirements, and the balance between user control complexity and system usability.


Disagreement level

The level of disagreement was moderate and constructive, with speakers generally sharing similar goals but differing on methods and priorities. The disagreements reflect different professional perspectives (policy, technical, regulatory) rather than fundamental philosophical differences, suggesting that collaborative solutions are achievable through continued dialogue and experimentation.


Partial agreements

Similar viewpoints

Both speakers see the need for institutional mechanisms (trust registries and legal frameworks) to create accountability and efficiency in federated systems

Speakers

– Delara Derakhshani
– Ian Brown

Arguments

Trust registries and verification processes can reduce duplication and streamline onboarding across platforms


GDPR and similar privacy laws provide legal backstops against bad faith actors who ignore user data preferences


Topics

Legal and regulatory | Infrastructure


Both speakers emphasize the importance of designing systems that accommodate different user sophistication levels and allow for experimentation without permanent consequences

Speakers

– Melinda Claybaugh
– Ian Brown

Arguments

Different user types (power users vs. average users) require different levels of explanation and control options


Technical design must enable user mistakes and experimentation while protecting privacy, such as auto-deletion features


Topics

Human rights | Sociocultural


Both speakers recognize that federated systems involve complex community dynamics and governance structures beyond simple technical interoperability

Speakers

– Delara Derakhshani
– Mallory Knodel

Arguments

Migration between platforms involves not just data transfer but joining new communities with different safety considerations


Instance-level governance provides a middle layer between platforms and users for setting defaults and managing relationships


Topics

Sociocultural | Legal and regulatory


Takeaways

Key takeaways

Privacy-preserving interoperability requires user agency – users must know what’s being shared, when, with whom, and why, with interoperability not meaning default exposure


Technical challenges exist in federated systems, particularly around data deletion propagation across instances and maintaining consistent user experiences


Existing regulatory frameworks like GDPR provide legal backstops, but need fresh implementation approaches for decentralized systems


The Digital Markets Act’s messaging interoperability requirements may be extended to social media services in upcoming reviews


Shared governance infrastructure is needed to coordinate responsibilities across diverse players without centralizing control


User education is critical as different user types require different levels of explanation and control options


Standards bodies (W3C, IETF) provide essential forums for developing and testing interoperability standards


The future vision involves making interoperability invisible to users while preserving choice and community uniqueness


Resolutions and action items

Continue conversations through the Social Web Foundation and Data Transfer Initiative for ongoing collaboration


Develop trust registries and verification processes to reduce duplication across platforms


Create shared guardrails through initiatives like DTI’s ‘Lola’ project to help users migrate safely


Governments should share information across multiple platforms rather than forcing citizens to use single platforms


Keep pushing for desired features in the Fediverse to drive platform development


Reach out to panelists and organizations for further conversations on trust during transfers and implementation


Unresolved issues

How to effectively propagate deletion requests across all federated instances


Managing the complexity of privacy controls that currently result in overly simple options


Ensuring trust between hundreds of different instances in decentralized systems


Balancing standardization with preserving unique community cultures across platforms


Addressing security vulnerabilities in direct messages and encrypted content in federated environments


Determining what level of interoperability users actually want versus what technologists envision


How to handle cultural context changes when content moves between platforms with different norms


Suggested compromises

Use opt-in approaches for bridges between platforms to respect user consent while enabling interoperability


Implement instance-level governance as a middle layer between platforms and users for managing relationships and defaults


Develop co-regulation approaches where private firms and civil society do detailed work with regulators as backstops


Create shared technical standards while allowing platforms to compete on community focus and user experience


Balance core privacy protections that should be universal across the Fediverse with features that can remain unique to individual communities


Use defederation as a nuclear option mechanism for instances to protect users while maintaining overall system openness


Thought provoking comments

Users should know what’s being shared, when, with whom. And why, and interoperability shouldn’t necessarily mean default exposure… migration is not just about exporting and importing data. It’s about joining new communities… for many users, particularly marginalized groups, the safety of the destination matters just as much as the ability to bring the content.

Speaker

Delara Derakhshani


Reason

This comment reframes interoperability from a purely technical challenge to a human-centered one, highlighting that data portability is fundamentally about community safety and user agency rather than just technical capability. It introduces the crucial insight that marginalized users face unique risks in federated systems.


Impact

This shifted the discussion from technical implementation details to user experience and safety considerations. It established the framework for later discussions about trust, governance, and the need for shared standards that prioritize user protection over technical convenience.


If say Mallory has… I delete my Mastodon posts after two months. I use a bridge… BlueSky currently does not have an option to auto delete all my posts… nor does Bridgyfed currently pick up my preference from Mastodon to auto-delete everything after two months and then apply that on the other side of the bridge.

Speaker

Ian Brown


Reason

This concrete example brilliantly illustrates the complexity of privacy preservation across federated systems. It demonstrates how user privacy preferences don’t automatically translate across platforms, revealing a fundamental gap between user expectations and technical reality.


Impact

This technical example grounded the abstract discussion in reality and led to deeper exploration of GDPR compliance challenges. It prompted Melinda’s response about the collision between new social media and existing legal frameworks, fundamentally shifting the conversation toward regulatory compliance.


I think there’s a real challenge as this new social media proliferates… they may not understand this network. They may not understand really what it means to be posting something on threads and then have it go to other services… we have to really meet people where they are and understand kind of who’s a power user, who’s a tech user, and then who’s just your average user.

Speaker

Melinda Claybaugh


Reason

This comment identifies a critical user experience challenge that could determine the success or failure of federated systems. It acknowledges that most users don’t have the technical literacy to understand the implications of interoperability, which has profound privacy and safety implications.


Impact

This observation redirected the conversation from technical solutions to user education and interface design. It influenced later discussions about defaults, user agency, and the need for systems that work invisibly for non-technical users while preserving privacy.


My experience from previous privacy interoperability work in W3C is that it’s really, really hard to define privacy to a level of granularity that matches the complexity of how people want their digital selves to be presented… what gets through the standardization process is very simple signals like do or do not typically. In a world of social media, that feels way too coarse.

Speaker

Dominique (audience member)


Reason

This comment exposes a fundamental tension between the complexity of human social behavior and the binary nature of technical standards. It challenges the panel to consider whether current approaches to privacy in federated systems are adequate for real human needs.


Impact

This question forced the panel to confront the limitations of technical solutions and led to discussions about shared governance, the role of instances as intermediaries, and the need for more sophisticated approaches to privacy that go beyond simple on/off switches.


The danger of giving people not too many choices, but too many defaults which they find difficult to actually make into a privacy-preserving element… how do we make it easier for them to avoid the same problems which occurred… where X has become something that almost everybody wants to avoid.

Speaker

Chris Riley


Reason

This comment introduces the paradox of choice in privacy settings and connects it to the real-world exodus from Twitter/X. It challenges the assumption that more user control automatically leads to better privacy outcomes, referencing the ‘myth of the superuser.’


Impact

This observation tied together multiple threads of the discussion – user education, defaults, and the practical challenges of federated systems. It prompted reflection on how to make interoperability ‘invisible’ to users while maintaining privacy, influencing the final discussions about usability and adoption.


Overall assessment

These key comments fundamentally shaped the discussion by moving it beyond technical implementation details to address the human and social dimensions of federated systems. Delara’s focus on marginalized users and community safety established a human rights framework that influenced the entire conversation. Ian’s concrete example of auto-deletion across platforms grounded abstract privacy concerns in technical reality, while Melinda’s emphasis on user education highlighted the gap between technical capability and user understanding. The audience questions, particularly about the granularity of privacy controls, challenged the panel to confront the limitations of current approaches. Together, these comments transformed what could have been a purely technical discussion into a nuanced exploration of how to build federated systems that serve real human needs while preserving privacy and safety. The discussion evolved from ‘how do we build interoperable systems?’ to ‘how do we build interoperable systems that people can actually use safely and effectively?’


Follow-up questions

How can we better educate users about what interoperability means and what happens to their data when they post across federated services?

Speaker

Melinda Claybaugh


Explanation

This is crucial because average users may not understand the network effects of posting on federated platforms and what happens when they delete content across multiple services


How can we develop technical solutions for ensuring deletion requests are honored across the entire federated network?

Speaker

Ian Brown and Mallory Knodel


Explanation

This addresses a fundamental privacy challenge where deleting a post on one platform doesn’t guarantee deletion across all federated instances that received the content
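For readers unfamiliar with how deletion federates, the following is an illustrative sketch, not from the session and simplified from the W3C ActivityPub specification: a server announces a deletion as a `Delete` activity wrapping a `Tombstone` object and delivers it, best-effort, to every inbox that received the original post. The function and inbox names here are hypothetical; real implementations vary, and whether a receiving instance actually purges its copy is up to that instance, which is exactly the gap the question identifies.

```python
def build_delete_activity(actor: str, post_id: str) -> dict:
    """Construct an ActivityPub Delete activity for a previously shared post.

    Per the ActivityPub spec, the deleted object is represented as a
    Tombstone that keeps the original id resolvable, so federated copies
    can be matched and removed by compliant servers.
    """
    return {
        "@context": "https://www.w3.org/ns/activitystreams",
        "type": "Delete",
        "actor": actor,
        "object": {"type": "Tombstone", "id": post_id},
        # Addressing mirrors the original post; delivery is best-effort.
        "to": ["https://www.w3.org/ns/activitystreams#Public"],
    }


def fan_out(activity: dict, follower_inboxes: list[str]) -> list[tuple[str, dict]]:
    """Build a best-effort delivery queue: one (inbox, activity) pair per
    known inbox. Instances that are offline, defederated, or simply
    non-compliant may never process the request -- there is no global
    guarantee of deletion, only a request to delete."""
    return [(inbox, activity) for inbox in follower_inboxes]
```

The sketch makes the panel's point concrete: the protocol can only *ask* every known recipient to delete; honoring the request depends on each instance's software and good faith, with laws like the GDPR as the legal backstop.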


What governance mechanisms can ensure trust and coordination across thousands of diverse federated instances?

Speaker

Delara Derakhshani


Explanation

As the ecosystem scales from a few large platforms to potentially thousands of instances, new coordination mechanisms are needed to maintain trust and shared standards


How can we balance standardization for interoperability with preserving the unique characteristics of different communities and platforms?

Speaker

Melinda Claybaugh and Dominique Zelmercier


Explanation

There’s tension between creating common standards that work across platforms and allowing communities to maintain their distinct cultures and features


How can privacy controls be made granular enough to match the complexity of how people want to present their digital selves?

Speaker

Dominique Zelmercier


Explanation

Current standardization processes tend to produce simple ‘do or do not’ signals, which may be too coarse for complex social media privacy preferences


What can be learned from Meta’s user research on WhatsApp interoperability that could benefit the broader ecosystem?

Speaker

Ian Brown


Explanation

Meta has done significant user research on privacy and interoperability for WhatsApp that could inform best practices across the industry


How can we make federated social media as invisible and user-friendly as email interoperability?

Speaker

Chris Riley


Explanation

The goal is to make interoperability seamless for users who don’t want to become technical experts to use these systems effectively


How can we address the security vulnerabilities in federated systems, such as the ability to read direct messages across instances?

Speaker

Gabriel (audience member)


Explanation

Trust and security between instances is crucial, especially given reports of protocol misuse allowing unauthorized access to private communications


How can context and cultural norms be preserved when content moves between platforms with different communities?

Speaker

Audience member


Explanation

Content sharing across platforms can remove important cultural context and lead to misunderstandings or harassment


What role can young people play in shaping interoperability and internet fairness?

Speaker

Winston Xu


Explanation

Understanding how to engage youth in these technical and policy discussions is important for the future of these systems


What lessons from private sector interoperability work can be applied to public sector services like healthcare and social welfare?

Speaker

Audience member


Explanation

There may be valuable learnings that can improve public services through better data portability and interoperability


How will the European Commission’s review of the Digital Markets Act in the next year affect social media interoperability requirements?

Speaker

Ian Brown


Explanation

The DMA review could extend interoperability requirements from messaging to social media platforms, with global implications


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

WS #479 Gender Mainstreaming in Digital Connectivity Strategies

WS #479 Gender Mainstreaming in Digital Connectivity Strategies

Session at a glance

Summary

This roundtable discussion at the Internet Governance Forum 2025 focused on gender mainstreaming in digital connectivity strategies, addressing the significant barriers that prevent women from accessing digital opportunities. Moderated by Rispa Arose from Tenda Community Network in Kenya, the session brought together experts from civil society, regulatory bodies, and community organizations to examine why 2.6 billion people remain offline, with women disproportionately affected by the digital divide.


Mathangi Mohan presented research from the Association for Progressive Communications on integrating gender into community-centered connectivity models, emphasizing that community networks are not automatically inclusive despite being locally driven. The research revealed that women remain underrepresented in community network governance and proposed policy recommendations including gender impact assessments during licensing processes and quotas for women in network leadership roles. Lilian Chamorro from Latin America highlighted specific challenges women face in building connectivity infrastructure, including care work responsibilities, low self-confidence with technology, and limited access to technological devices.


From a regulatory perspective, Dr. Emma Otieno from Kenya and Ivy Tuffuor Hoetu from Ghana emphasized the economic consequences of excluding women from digital strategies. Waqas Hassan shared Pakistan’s success story, where a dedicated national strategy reduced the gender gap from 38% to 25% through whole-of-government and whole-of-society approaches. The discussion revealed that connectivity strategies often focus primarily on infrastructure expansion while assuming equal access, without addressing inherent barriers like affordability, skills gaps, and societal norms.


Key recommendations emerged around the need for intentional measurement and tracking of gender inclusion outcomes, cross-sectoral collaboration beyond ICT ministries, and recognition of digital caretaking as legitimate labor. The panelists stressed that bridging the digital divide requires moving beyond gender-neutral policies to actively address the specific needs and barriers faced by women and marginalized groups in accessing and benefiting from digital connectivity.


Key points

## Major Discussion Points:


– **Gender gaps in digital connectivity strategies**: Despite global frameworks like SDG 5 and the Global Digital Compact advocating for digital inclusion, gender is explicitly referenced in only half of national ICT policies, leaving 2.6 billion people offline with women facing disproportionate barriers to accessing digital resources.


– **Community networks as solutions with inherent challenges**: While community-centered connectivity models offer locally-driven alternatives to traditional telecom infrastructure, they are not automatically inclusive – women remain underrepresented in governance due to care work responsibilities, low technical confidence, limited access to devices, and restricted participation in decision-making.


– **Policy implementation gaps versus policy existence**: Many countries have gender-inclusive digital policies on paper but lack intentional implementation, meaningful measurement, and disaggregated data collection to track outcomes and impact, resulting in continued exclusion despite regulatory frameworks.


– **Economic consequences of digital gender exclusion**: The exclusion of women from the digital economy has caused approximately $1 trillion in GDP loss over the last decade in developing countries, with potential for another $500 billion loss over the next five years if current trends continue.


– **Cross-sectoral collaboration as essential solution**: Effective gender mainstreaming requires moving beyond ICT ministry silos to involve education, finance, health, and gender ministries in a “whole of government” approach, supported by multi-stakeholder partnerships including civil society, private sector, and community representatives.


## Overall Purpose:


The discussion aimed to explore how to effectively mainstream gender considerations into digital connectivity strategies, moving from global policy commitments to practical local implementation. The session sought to identify barriers preventing women’s participation in digital infrastructure development and propose actionable solutions for creating more inclusive connectivity models, particularly through community networks.


## Overall Tone:


The discussion maintained a professional, collaborative, and solution-oriented tone throughout. It began with a serious acknowledgment of the scale of digital exclusion challenges, evolved into detailed problem analysis with speakers sharing candid experiences about implementation gaps, and concluded on an optimistic note with concrete recommendations and successful examples like Pakistan’s gender strategy. The tone remained constructive and forward-looking, emphasizing shared responsibility and the urgency of action while celebrating incremental progress and best practices.


Speakers

**Speakers from the provided list:**


– **Rispa Arose** – Works for Tenda Community Network based in Nairobi, Kenya; Session moderator focusing on grassroots-driven development by expanding access to connectivity and digital opportunities


– **Mathangi Mohan** – Works at the intersection of gender, technology and digital governance; leads programs on data currency and artificial intelligence; works with the Self-Employed Women's Association (SEWA) in India leading a decentralized livelihood and digital upskilling program for women in the informal economy


– **Lillian Chamorro** – From Colnodo; supports community-centered connectivity in the Latin America region with focus on gender inclusion in grassroots community-centered connectivity initiatives


– **Dr. Emma Otieno** – Digital Inclusivity Champion; serves as Kenya country coordinator for REFEN (Réseau Internationale des Femmes Expertises du Numérique); previously served as Deputy Director for Universal Service Fund and Manager for Strategy Development and Fund Mobilization


– **Waqas Hassan** – Representative from the Global Digital Inclusion Partnership (GDIP); spearheads GDIP's policy and advocacy engagement in Asia; has a background in inclusive policy development and gender-responsive strategies


– **Ivy Tuffuor Hoetu** – Senior Manager, Regulatory Administration with the National Communications Authority in Ghana; ICT Development and Digital Inclusion Advocate; has experience in digital connectivity, internet development and ICT training


– **Josephine Miliza** – Works for the Association for Progressive Communications as global policy coordinator; provides civil society perspective on integrating and mainstreaming gender in connectivity strategies


– **Kwaku Wanchi** – From Ghana IGF


**Additional speakers:**


– **Dr. Rhonda Zalesny-Green** – Co-founder and director at Panoply Digital; co-authored report on gender integration in policy and regulation for community-centered connectivity models (mentioned but did not speak directly)


Full session report

# Gender Mainstreaming in Digital Connectivity Strategies: A Comprehensive Roundtable Analysis


## Executive Summary


This roundtable discussion at the Internet Governance Forum 2025 examined gender mainstreaming in digital connectivity strategies, addressing why 2.6 billion people remain offline with women disproportionately affected by the digital divide. Moderated by Rispa Arose from Tenda Community Network in Kenya, the session brought together experts from civil society organisations, regulatory bodies, and community networks across Africa, Asia, and Latin America.


The discussion revealed that whilst global frameworks advocate for digital inclusion, gender is explicitly referenced in only half of national ICT policies. More critically, even where policies exist, they often lack intentional implementation and meaningful measurement mechanisms. Speakers highlighted that gender exclusion from the digital economy has caused approximately $1 trillion in GDP loss over the last decade in developing countries, with potential for another $500 billion loss over the next five years if current trends continue.


## Key Themes and Findings


### The Infrastructure Fallacy in Digital Inclusion


A central theme emerged around the misconception that infrastructure development automatically translates to inclusive access. Waqas Hassan from the Global Digital Inclusion Partnership explained: “The connectivity strategies are still primarily developed from an infrastructure mindset… that mindset is based on the assumption that if you provide access and expansion, coverage expansion, that is somehow synonymous with inclusion… infrastructure development does not mean equitable access.”


Mathangi Mohan from the Association for Progressive Communication reinforced this point: “If we don’t address or even consider the gendered digital divide, then you’re not really closing the gap, but more like building the infrastructure on top of the inequality and marginalisation that is already existing.”


### Community Networks: Opportunities and Barriers


The discussion examined community-centred connectivity models as potential solutions to traditional telecom limitations. However, speakers emphasized that community networks are not automatically inclusive despite being locally driven. Mathangi Mohan presented research findings showing that women remain underrepresented in community network governance structures due to multiple intersecting barriers.


Lillian Chamorro from Colnodo in Latin America identified specific barriers women face, including care work responsibilities, low self-confidence with technology, and limited access to technological devices. She noted that “women prefer to stay away from technical roles” due to these confidence issues and structural limitations.


### Regulatory Perspectives and Implementation Gaps


Dr. Emma Otieno from Kenya provided a candid assessment of policy failures: “We have not been intentional in terms of putting in these clauses in the policy regulation strategies, we are putting them in as aspects of satisfying the social or compliance aspects, but not meaningfully and intentionally putting them… What has lacked completely is measurement, the intentional and meaningful tracking.”


Ivy Tuffuor Hoetu from Ghana emphasized the need to move “from equality to equitable distribution,” using the metaphor: “You cannot give us the same level of a tree to climb or something. One of them will need a ladder to be able to climb the tree.”


Both regulatory experts identified the lack of gender-disaggregated data as a fundamental barrier to creating targeted policies and noted limited coordination across ministries responsible for gender, education, and health.


### Success Stories


Waqas Hassan shared Pakistan’s achievement in reducing the gender gap from 38% to 25% through a dedicated national gender strategy. This success was attributed to inclusive policymaking processes involving multiple stakeholders and adopting both whole-of-government and whole-of-society approaches, with formal institutional structures including high-level steering committees and official working groups.


## Areas of Consensus


### Intentional Design and Implementation


All speakers agreed that gender mainstreaming requires intentional design from inception rather than treating gender as an afterthought. This consensus emerged across different sectors and regions.


### Cross-Sectoral Collaboration


There was unanimous agreement on the need for whole-of-government approaches extending beyond ICT ministries, requiring coordination with education, finance, health, and gender ministries, supported by multi-stakeholder partnerships.


### Local Context and Cultural Sensitivity


Speakers agreed on the importance of respecting local contexts, languages, and cultural values whilst creating supportive spaces for women to engage with technology, including translating digital skills training into local languages.


### Economic Imperative


Speakers emphasized that gender digital inclusion represents both a social justice issue and a critical economic imperative with measurable financial consequences.


## Concrete Recommendations


### Regulatory and Policy Reforms


– Implement gender impact assessments during licensing processes for community networks


– Create financing funds with special conditions for women’s participation in technology access and training


– Accommodate informal women’s groups and non-registered collectives in licensing frameworks


– Establish meaningful measurement, tracking, and monitoring systems for gender inclusion policies


### Community-Level Interventions


– Create women’s circles for dialogue and expression whilst respecting community values


– Translate digital skills training into local languages using retired language teachers


– Support collaboration between community networks and community libraries or information centres


### Funding and Sustainability


– Create infrastructure funds with contributions from all sectors benefiting from ICTs, not just telecom operators


– Develop financing mechanisms that support women-led community networks at scale


## Outstanding Challenges


The discussion identified several unresolved issues requiring further attention:


– How to effectively redistribute care work responsibilities that prevent women’s participation


– Developing sustainable funding models for women-led connectivity initiatives


– Balancing respect for traditional community values with promoting women’s technical participation


– Creating standardized approaches for collecting gender-disaggregated data across different contexts


## Conclusion


The roundtable demonstrated both the complexity of gender mainstreaming in digital connectivity and the potential for coordinated action. The high level of consensus among speakers from different sectors and regions suggests significant potential for collaborative policy development and implementation.


The session established that bridging the digital divide requires moving beyond gender-neutral policies to actively address specific barriers faced by women and marginalized groups. This represents a fundamental shift from treating digital inclusion as a technical challenge to recognizing it as a complex social and economic issue requiring equity-focused approaches.


The path forward requires sustained commitment to intentional implementation, meaningful measurement, and collaborative action across sectors and stakeholders. As demonstrated by Pakistan’s success, the tools and knowledge exist to address these challenges effectively; what remains is the political will and institutional commitment to transform policy commitments into practical results for women and marginalized communities worldwide.


## Session transcript

Rispa Arose: Hello everyone, and a warm welcome to this roundtable discussion. To our online audience, good morning, good evening, good afternoon, wherever you’re joining us from today. It has been an enriching and insightful week, participating at the Internet Governance Forum 2025 here in Norway. My name is Rispa Arose, I work for Tenda Community Network, based in Nairobi, Kenya, where we work towards fostering grassroots-driven development by expanding access to connectivity and digital opportunities. We are honored for this opportunity and platform to bring our local voices to this global stage. Today I have the privilege of moderating our discussion on a topic that really lies at the heart of digital inclusion: gender mainstreaming in digital connectivity strategies. Many of you, if not all of you, are aware that we have 2.6 billion people that are still offline. That’s a huge number and that’s a big challenge. As we know, digital connectivity is no longer a luxury; it’s really a critical enabler of socio-economic participation. Despite global efforts to close the digital gender gap, women continue to face significant barriers in accessing financial, educational, social and health resources in this digital age. And while some governments have introduced policies to address this divide, gender mainstreaming in ICT policy and regulation remains limited or sometimes even absent. According to the ITU, gender is explicitly referenced in only half of national ICT policies or master plans, and frameworks such as WSIS+20, the Global Digital Compact and Sustainable Development Goal 5 collectively advocate for inclusive infrastructure and digital rights. However, without intentional gender mainstreaming in digital strategies, millions of women and girls will remain excluded from the benefits of the digital economy.
So this session will explore how global commitments can translate into local actions by implementing gender-responsive policies and regulations in the different frameworks, as well as look into how to mainstream gender in digital connectivity strategies. I’m thrilled to have a distinguished panel of experts and practitioners with us today, both here in person as well as online. Three are joining online, and I’ll begin with the ones we have here. So I’ll start with Josephine Miliza, who is here and works for the Association for Progressive Communication as their global policy coordinator, and she will be giving us insights from a civil society perspective on what it takes to integrate and mainstream gender in connectivity strategies. We also have Lillian Chamorro from Colnodo, who is also part of supporting community-centered connectivity in the Latin America region and will be sharing from her experience working with community-centered connectivity what it means to support women and ensure that there is gender inclusion in grassroots community-centered connectivity initiatives. Online, I’m joined by Dr. Emma Otieno, who is a Digital Inclusivity Champion and serves with REFEN, the Réseau Internationale des Femmes Expertises du Numérique, as its Kenya country coordinator, championing digital gender inclusivity and promoting women’s leadership in the digital technology space globally. Previously, she has served in other capacities as Deputy Director for the Universal Service Fund and also as Strategy Development and Fund Mobilization Manager. She will be sharing a lot from a regulator perspective around this topic. Then we have Waqas Hassan, who is a representative from the Global Digital Inclusion Partnership, the GDIP. He spearheads the policy and advocacy engagement for the Global Digital Inclusion Partnership in Asia.
He has a strong background in inclusive policy development, especially gender-responsive strategies and programs. He has also implemented connectivity and community empowerment projects, with particular focus on underserved areas. Then we have Ivy Tuffuor Hoetu, who is a Senior Manager, Regulatory Administration, working with the National Communication Authority in Ghana. At this specific meeting, she will be… presenting in her own capacity as an ICT Development and Digital Inclusion Advocate. She has considerable experience in digital connectivity, internet development and ICT training, and has also been responsible for monitoring radio spectrum bands, maintaining the frequency database and enforcing authorization conditions. Then last but not least, we have Mathangi Mohan, who works at the intersection of gender, technology and digital governance. She currently leads programs on data governance and artificial intelligence, where she designs global capacity building initiatives on responsible artificial intelligence and data governance across 80 countries. She is now working with the Self-Employed Women’s Association in India, where she leads decentralized livelihood and digital upskilling programs for women in the informal economy. So that’s our lineup of speakers, and I’d like to get right into the session and now welcome Ms. Mathangi Mohan to give us a keynote presentation on recent research conducted by the Association for Progressive Communication on integrating gender in policy and regulation for community-centered connectivity models. Ms. Mathangi Mohan, please, you can have the floor.


Mathangi Mohan: Thank you, thank you, Rispa, and thank you to APC and also the Internet Governance Forum for the opportunity to be a part of this discussion. My name is Mathangi, as Rispa just introduced me, and I’m speaking on behalf of my colleague Dr. Ronda Zelezny-Green, co-founder and director at Panoply Digital, who co-authored this report with Shivam and our colleagues, our team, on gender integration in policy and regulation for community-centered connectivity models. I work alongside Ronda at Panoply on gender, technology and connectivity models and governance, at both a grassroots level and also in the policy spaces. So today I’ll be sharing insights from the report while also drawing connections to practice and lived experience from my own work as well. Let me start with the basics. What are community networks, or CNs? They are not new, but they’ve been gaining global attention for a good reason, because these are internet networks that are built and managed by local communities themselves. They usually operate in places where large telecom companies don’t see a lot of commercial value, like remote areas, informal settlements or underserved regions. But what makes them so powerful is not just that they fill coverage gaps, but that they are locally driven, and because of their cooperative spirit, and because they are often non-profit, they allow communities to control and own their digital infrastructure, deciding how it is built, how it is maintained and who it benefits. In many ways these CNs represent a form of digital sovereignty as well, because they can support local businesses, they can strengthen education and healthcare systems and even provide emergency communications when most national systems fail. And because they are rooted in community needs, they can adapt very quickly and responsively in ways that traditional internet service providers and telecoms often cannot.
And that is what our report was trying to lay bare: these networks are not automatically inclusive just because they operate in this way, and just because the network is community run, it does not mean that everyone in the community is equally represented. We have taken a gender angle. Women, especially in rural and low-income contexts, remain underrepresented in CN governance and are most often underpaid. The report offers a suite of policy recommendations, and I would like to just highlight a few examples, because we have tried to tailor the recommendations for each ministry involved, because we wanted to adopt a whole-of-government approach as well. So, one would be that the ministries of ICT in these countries and beyond should require a gender impact assessment during the licensing processes themselves for the CNs, or some kind of incentive for women-led CNs, and they can design simpler license procedures to reduce the barriers for women to access leadership roles in these cooperatives or in these networks. When it comes to the gender ministries, they can introduce quotas for women in CN governance, fund women-only training spaces when it comes to community networks or access cooperatives, and also campaign against online gender-based violence, because that is also a major deterrent which is keeping women offline. And to the finance ministries, they can of course offer subsidies for smartphones or for connectivity for low-income women, or consider providing tax breaks to CNs with a strong gender inclusion focus or to women-led CNs. And finally, for education ministries to mandate that public ICT… and all the other researchers. And I just thought it’s really important to highlight what Rispa earlier said, that we often say that we want to bridge the digital divide.
But if we don’t address or even consider the gendered digital divide, then you’re not really closing the gap, but more like building the infrastructure on top of the inequality and marginalization that is already existing. And community networks are, they offer, they give us an opportunity to do things differently, to make things right. And we definitely need to take the opportunity up and design for it and regulate in it and invest more in these networks. And yeah, with that, I want to say thank you, and I look forward to the discussion as well.


Rispa Arose: Thank you. Thank you so much, Ms. Mathangi, for really giving us a grounding on what community networks are and what they bring towards bridging the digital divide, as well as speaking to some of the challenges that exist when gender is not considered in connectivity, treating gender not as an afterthought but thinking through it even at the inception stage, at the design stage. And this, as you’ve correctly said, is the opportunity that community networks offer. Now, this gives us a good segue to the next speaker, who has supported the community network ecosystem and will share her experience on some of the specific challenges that women face when trying to build and sustain this community-centered connectivity infrastructure. Lillian, over to you.


Lillian Chamorro: Thank you so much for having me. Okay, from the experience of accompanying this type of initiative, we can mention some of the challenges for women. One of them is the care work that women must assume, which makes it difficult for them to participate in the different activities involved in the implementation, but also in the maintenance of the network. This is a challenge for women especially in rural areas, where a lot of this work falls exclusively on women. Another challenge is that some women, especially in rural areas, have low self-confidence, as they do not feel capable of appropriating technologies and technological knowledge, so they prefer to stay away from these roles. There are also challenges around low participation in decision-making and leadership. Positive actions are required, but also appropriate methodologies that give women confidence in approaching and appropriating these issues. All of this results in more difficulty being part of workshops or participating in training processes, and on many occasions there are difficulties in leaving the home to participate in these kinds of spaces. Finally, I can mention lower access to technological devices, since other expenses are prioritized and there are not appropriate technological tools to assume certain tasks. Thank you.


Rispa Arose: Thank you so much, Lillian, for sharing some of the challenges that women face when trying to build and sustain connectivity infrastructure. Working for Tenda Community Network myself, I totally relate to some of these challenges, starting with the low confidence levels. It really takes a lot, especially because community-centered connectivity is very technical. There has to be some encouragement, some technical training, some sort of mentorship to even get women coming in to support the network infrastructure, be it on the connectivity side or the community side. This really helps to frame the challenge of this discussion today. What I would like to hear from the next speaker, Waqas, who’s joining us online, is: what do you see as the core disconnect? Why is gender still largely missing from connectivity strategies? Why do we still have a huge gender gap within the different connectivity strategies? Over to you, Waqas.


Waqas Hassan: Thank you. Thank you, Rispa. I hope I’m audible. So I think much of what you just asked was covered by Mathangi as well. But if you see it from a holistic level, there are, of course, several factors which are dependent on geographical profile and the level of ICT development and things like that, and lack of data, awareness and capacity of the government itself to understand and then act on the problem. But from my own experience, from my previous life with the government, there are a few things that I have observed, and two of them stand out for me, which I would like to share with all of you. The first one is that connectivity strategies are still primarily developed from an infrastructure mindset. And that mindset is based on the assumption that if you provide access and coverage expansion, that is somehow synonymous with inclusion. The thinking is that if a particular community is provided with mobile connectivity or broadband or whatever the medium, there is now an equal opportunity for men and women to use the digital services. But what they need to realize is that irrespective of how much technological innovation we bring about, infrastructure development does not mean equitable access. There are inherent barriers of affordability, skills, societal norms, and online safety and security. These are the barriers that remain. And there is undeniable evidence around that. Our own research, GDIP’s research, proves that the exclusion of women from the digital economy in developing countries has caused about $1 trillion in lost GDP over the last decade. And if it continues, another $500 billion could be lost over the next five years.
So I think this realization, and this advocacy angle that we also pursue as GDIP when we talk to the governments, is that it is not just a women’s empowerment issue or a social issue. It is actually an economic crisis that we all need to work on together. I’ll just end with saying that this is the realization that needs to be propagated through our interactions with the governments: that they have to integrate gender in the connectivity policies and then take concrete actions on it, not just mention it like we see in many of the policies, like the IT policies or the broadband policies, where it is mentioned in some way in those kinds of instruments but is never really at the essence of those policies. So I’ll just stop there. And yeah, looking forward to the discussion. Thank you.


Rispa Arose: Thank you so much. Thank you so much, Waqas, and thank you for bringing in such weight on the consequences, the grave losses that we face when we are not inclusive in our strategies. When gender is left behind in, for example, connectivity strategies, it means not only social loss but even economic loss from a global perspective. With that in mind, I want to move to the next speakers, who are the regulators, Dr. Emma from Kenya and Ms. Ivy from Ghana. From a regulatory perspective, why is it important to integrate gender in digital connectivity strategies? And building on what Waqas has just mentioned on some of the losses we are incurring because of the gender gap, what do you think is at stake if we don’t mainstream gender in these digital connectivity strategies? I’ll start with Dr. Emma, and then we’ll move to Ms. Ivy.


Dr. Emma Otieno: Okay, thank you very much, Rispa, and I really want to thank the speakers who have gone before me. Can you hear me? Yes. Okay, so I just want to start by looking at the question you’ve asked: why? Why should we really worry, or why should we really prioritize this subject of gender inclusivity? And my simple answer would be that we have committed, both at the global level and at national levels, and agreed generally that we will not leave anyone behind. And we said we will not leave anyone behind because we want the world connected to the last person who’s under a cave, or if there’s another one under the sea, they must all be connected. So, for me, when I sit at the national level, that is the mantra of digital transformation: leaving no one behind. When we go international, be it at the UN level, be it at the International Telecommunication Union level, be it at any other forum or any other place where the subject of digital transformation is being pursued, the agenda has been leaving no one behind. So, when we come with the eye of the regulator, why regulation exists is to create enabling environments so that the government can achieve the broader socio-economic development goals. So, for us to achieve that leaving no one behind, which is very critical for the government’s socio-economic development, we must pursue goals, we must pursue regulations that


Rispa Arose: Thank you so much, Dr. Emma. Over to you, Ivy.


Ivy Tuffuor Hoetu: Yes, thank you. Am I audible? Yes, we can hear you. Okay, good morning from my location. So one of the blind spots I would touch on is that most of the time we perceive technology as gender neutral, and policies are made assuming that technology will benefit everyone equally. But we have diverse needs. So one thing we are lacking, one of our blind spots, is equitable access and distribution of technology, as against gender neutrality of technology. It is assumed that technology or digital policies are inherently gender neutral and will benefit everyone equally, which is not the case. Most of the time, the focus has been on overall connectivity metrics without disaggregating data or considering how societal gender inequalities translate into digital disparities. And this is largely due to deficiencies in gender-disaggregated data on access, usage patterns, and online experiences. Governments have been promoting the gathering of ICT indicators and all that, but in-country, at the national level, how often do we disaggregate this data? Even when we analyze it, do we analyze it based on gender-specific needs? And here, by gender, I’m talking about going even beyond women. The gender definition cuts across children, the physically challenged, the elderly, and women, and most of these people are mostly underrepresented. And policies or regulations usually do not address their specific needs. And it’s all because the data we collect or gather, we collect holistically, without disaggregating it. So policies are made generally instead of on an equitable basis, and it makes it difficult to identify these specific needs. So this is one of our blind spots, and I believe if it’s addressed holistically, it can address and target the needs of all these specific groups. Most of the time, yes, it’s women, but there are also the elderly. We are talking about inclusivity, connecting everyone everywhere, the ISOC goal.
But how can we reach out to these people if we don’t understand their specific needs? If we make regulations or laws assuming that whatever we have in place will benefit everyone equally, it cannot. Take my height: I am short. Someone is shorter, someone is taller. You cannot give us the same level of a tree to climb. One of us will need a ladder to be able to climb the tree; the other will not need a ladder. So we need to disaggregate the data we collect and make specific indicators based on specific gender needs, and I believe it will address these blind spots. Thank you.


Rispa Arose: Thank you so much, Ms. Ivy. Thank you so much, Dr. Emma, for bringing in the regulatory perspective to this conversation. We’ve talked about the challenges, the problems, the gaps that exist, and what is at stake. Before we move on to some of the solutions, I’d like to see if there’s any question or comment, online or in person. If you have a question, you can just come up to the roundtable and we can take it. Yes. We can have that question and then move to the next segment.


Kwaku Wanchi: Thank you for your presentation. My name is Kwaku Wanchi from the Ghana IGF. There’s something so organic about what you’re talking about in terms of community networks, which speaks to collaboration and being able to provide solutions to the problem. One of the things that I always see as being helpful in some of this is what I would call our community businesses. And again, these are gender-based or women-led, especially in Africa; I don’t know about Lillian’s experience in Latin America and the rest. I think there was a report about a year or two ago from the MasterCard Foundation which found that most businesses in Africa, meaning small and medium enterprises, are run by women. And I think one of the challenges and opportunities in thinking about how we go forward, and I don’t know how the next conversation is going to be, is, one, what I would like to call synthesizing, or having organic technologies or solutions in terms of connectivity. Most of our local businesses or small and medium enterprises are small, but how do we get them digital? An aspect of the work we do in community networks is about imparting digital skills. And I always make this key point: coming from Africa, we are multilingual, and most of the things that we do need to be owned by people who can speak their own language or interpret it. So my point is this: the skills impartation and things that we do, we need to be able to translate into our local languages. And I’m giving this for free; I’ve been doing this for five years. Let’s try and translate things into our local languages. And the best people to be able to do that are language teachers or retired language teachers. That’s what I would say. That’s inclusivity in gender. The second part is including our medium-sized businesses, to get them digital, to teach them the skills.
And also build on their business experience and create that business environment which gives opportunities and value to our community. And you’d be surprised: you might even get financing from some of these local businesses. So that’s what I’d like to contribute. Thank you.


Rispa Arose: Thank you so much for that contribution. I believe it was just to add to the conversation: the speakers agree that diversity should go beyond just gender and also look at languages and at age, at how different age groups can contribute to digital connectivity for different communities. This now takes us to the next question. I don’t know if there’s any question online.


Josephine Miliza: There’s no question, just a comment from Peter Balaba. He says, I do support the establishment of community networks approach in Kenya. However, collaboration with community libraries and information centers can help bridge the gaps. In Keseke Telecenter and Library, we are now focusing on empowering the youth and women in digital skills and we train them in the local language to enable them to understand the basics. So it’s just a comment and not a question. Thank you, Peter.


Rispa Arose: Thank you. Thank you, Josephine. On to the next segment, in which we will have another round of interventions from our speakers, looking at what it will take to realize gender-inclusive connectivity. I’d like to go back to our first speaker, who gave us the keynote presentation, Ms. Mathangi, and to Lillian: can you talk about what actions are necessary to better support the emergence and sustainability of women-led connectivity models? You’ve already mentioned some of the challenges and barriers for women to participate in community-centered connectivity models. Now, what are some of the actions that can be taken? Mathangi, you can talk about some of the recommendations that came out of the research that you’ve just presented. So I’ll start with Ms. Mathangi and then we can go to Lillian.


Mathangi as Rispur: All right, sure, thanks, Rizpal. And while also talking about the policy lens and recommendation from the report, I would also like to share something that I’ve seen in practice. Because while working with some of the self-help groups and particularly the Self-Employed Women’s Association here in India, I’ve seen where we supported some of these artisan-led cooperatives access digital markets, and that too, during the first and the second waves of the pandemic. And then what happened was something very interesting was that to address the connectivity gaps in one cluster, in several clusters of villages, actually, the women managed to have shared hotspot system, not like the community network, but also not very unlike the CNs that we spoke about earlier. And they used to train each other, watch tutorials, and figure out how do they ensure it gets to the online inventory and all of those, even troubleshooting the basic tech issues. They supported each other, but the catch was it was sustainable only because the initiative worked because it was under an umbrella like SEWA where they could step in to subsidize the hardware. They could, like one of our audience members pointed out, they coordinated all the training in their local language, Gujarati, and they anchored the work in existing self-help groups. So then what finding came out was very similar to what we also studied in the report that if you remove that scaffolding, that model would not have survived or it would not have been sustainable. So combining these two insights, just quickly summarizing three recommendations would be, first, the licensing frameworks for these CNs need to very explicitly accommodate non-registered collectives, informal women’s group, like some of them rightly pointed out that the MSMEs and SMEs, these are majorly owned by women. 
Supporting more informal women's groups is equally important, because these are mostly cooperatives rooted in trust and solidarity, and simply assuming they should be registered entities with technical documentation is going to exclude a lot of groups. Secondly, there are already a lot of funding mechanisms, but one key insight is that the funds should also allow for experimentation, and thereby allow failure and more iteration to figure out what works and what does not, because these community networks are not going to scale if you judge them against a startup timeline. And third, echoing Waqas's thought: this is not only social inclusion or women's empowerment work, it is socio-economic, infrastructure work as well, and it's time that we recognize that. This could mean paying stipends to the local women who manage the Wi-Fi, do the maintenance, and handle all the ground-level operations, recognizing the digital caretaking this entails as a form of labor, and also not leaving out individual champions when we look at the community networks, recognizing who is anchoring these and pulling them together. So it's not just about what they are doing well, or whether access is there; we should go beyond that and see it as infrastructure work. These are the three recommendations that I would like to add.


Rispa Arose: Thank you so much. Lillian, anything to add?


Lillian Chamorro: Well, thank you, Rispa. I have a few actions that we think could improve women's participation. One of them is establishing financing funds, both for technology and for training, with special conditions for women's participation, for example in closing the gaps in the basic use of technological devices and technological literacy, and facilitating access to devices. Also, adapting methodologies that transcend the purely technical approach to community networks and also consider other women's interests and needs related to care and the support of the community. Also, diversification of roles across technical, administrative, and communicative areas: the communities should promote the participation of women in technical areas such as installation and support, and men should also be involved in the administrative and financial aspects of sustainability. Also, promote the participation of women in leadership roles in the communities without neglecting the recognition of the importance of care work. I think it's important to recognize this because sometimes, for example in our case, when we were working on the establishment of community networks, of antennas, of that kind of thing, many women were taking care of us, working on the food, caring for the people. This work is not recognized, and we need to recognize it. Promote the participation of women in discussion spaces. Make women in the field of technology visible so that they can serve as references for other adult and young women in traditionally masculinized scenarios. Reflect on care and how to approach it, recognizing its value and establishing mechanisms for its redistribution. People who require care should be assumed as a collective responsibility of the communities, and if necessary, resources should be allocated within the spending structures for the care of these people. Create women's circles.
We think women's circles are a very good methodology, where women can express their concerns, opportunities, and difficulties, not only about technology, but also about their participation in the life of the community. In any case, the actions should be respectful of the environment and the values of the community, since we often find ourselves in communities with very different visions of gender roles, and we are not looking to break with local customs, but rather to seek spaces for dialogue and reflection on the opportunities and inequalities faced by women and other populations. Finally, conversations on gender must be strengthened in the entities and organizations that accompany these processes, among both men and women. If these reflections have not been integrated into our own work, it will be difficult to integrate them into the processes of accompanying the communities and in the spaces that are generated from there.


Rispa Arose: Thank you so much, Lillian. What I like about the recommendations you have brought is that they come from working at the community level: you know what the challenges are and what best practices can be adopted by community-centered connectivity strategies at the grassroots. That fits in very well with what Mathangi has just presented from a research report she conducted across countries in Africa, Latin America, and Asia, and together these are really strong actions that can be taken to support the emergence and sustainability of women-led connectivity models. Now I want to go back to you, Waqas.


Waqas Hassan: This was a public perception survey; there was an IVR survey of around 100,000 respondents conducted with the help of the mobile operators. When all of this was put into context and the strategy was being developed, it took shape as a strategy where, at the top level, there is a high-level steering committee, which consists of the institutional heads and is led by the IT minister. Under that steering committee are six working groups, covering areas like access, research and data, affordability, safety and security, digital literacy, and inclusion. Each of the groups has its own chair, by the way. In between is the implementation body, which is PTA, and now GDIP has joined PTA as joint coordinator for the strategy implementation. So with this whole framework, you can see that a whole-of-government approach was adopted, but for the implementation itself it is now essentially a whole-of-society approach, because all of the working groups have representation not only from the public sector but also from the private sector and civil society, from telecom operators, from national statistical organizations; you name it and you'll probably find a body contributing to one of the working groups. One unique feature of this strategy is the support it extends to community networks. It is mentioned in the strategy, as one of the terms of reference of the access working group, that it will adopt policies around community networks, and it also mandates the establishment of at least 15 community networks across Pakistan. So this is one of the unique features of the policy that I really like, and very relevant to the discussion we are having today. Now, in the latest report that has just been released by GSMA on the mobile gender gap, Pakistan's gender gap has actually been reduced from 38% down to 25%.
And this is the largest reduction in any of the surveyed countries, which are essentially the developing countries that GSMA has surveyed. In terms of lessons for any policymakers out there, we would recommend, first of all, following an inclusive policymaking process: make sure that all of the stakeholders are on board if you want to develop this kind of strategy, and also make the best use of existing best-practice examples. A lot of organizations are working in this space, including ourselves, APC, GSMA, and others. So I think this is an example from a developing country in the South Asian region, a region with the widest gender gap in the world along with sub-Saharan Africa, of trying to make a difference in the state of affairs and actually achieving it. Thank you.


Rispa Arose: Thank you so much, Waqas, for those great points. It's really good to know that this can be done, and it has been done, in Pakistan. And thank you for sharing some of the things we can take into consideration when thinking about a dedicated gender strategy for digital inclusion. Now I want to go back to Dr. Emma and Ms. Ivy, who are online as well. If you could propose one policy or practice shift to improve gender integration in digital connectivity, what would that be, and why?


Dr. Emma Otieno: Okay, thank you very much, Rispa. I really thank Waqas, Mathangi, and Lillian for the insights they've provided, which link very well to what I'll give as an answer. I like the Pakistan example and the way they have actually gone ahead. So, to make it short: what is the one thing we can actually do, be it at policy level or as an initiative, to make this work better? I will go back to the statement Mathangi made during the first session of the presentation, which cited Kenya as even having a national gender policy, which is true. We have the policy, and it has been there for quite a number of years, and it also speaks about issues of digital inclusivity and gender. But why, looking at the report that Waqas has talked about, are we still at about a 39% gender inclusivity gap, where Pakistan was years ago? It is because of something, I think, that we are not doing. And that is my answer to your question, Rispa: we have not been intentional in putting these clauses into policies, regulations, and strategies. We are putting them in to satisfy social or compliance requirements, not meaningfully and intentionally. And after we have inserted them and ticked the box of having a policy, a framework, or a guideline, what has been completely lacking is measurement: the intentional and meaningful tracking and measurement that lets us see the outputs of these policies, or of a clause we have put in to promote digital inclusivity. What is the outcome, what is the impact, how can we sustain or improve, and which areas are the pain points?
So when we ask anybody, even in the area of policy or regulation or in any other space, we don't have that information. Ivy said it very well: we lack even disaggregated data, to the extent that we cannot pinpoint the pain point and understand how to move from here to the next step. In terms of just having frameworks, most African countries are doing well, because we are really complying with SDG 5 on equality by having those policies, but meaningfully measuring them and looking at them as contributors to socio-economic development is what has been lacking. So, to summarize the answer: we must actually start to implement things like gender-intentional digital infrastructure designs, intentionally measuring, tracking, and monitoring impact and feeding it back so that we sustainably improve, going back to closing all the gaps and, of course, leaving no one behind. There's also the aspect of how this can be supported. Somebody mentioned it, I think the gentleman from the audience: PPPs bring together partnerships and collaborations, because we cannot achieve this as government alone, or as regulators alone, or as the private sector alone. We must bring on board the financiers. We must bring on board the people who are volunteering to translate content into their local languages. We must bring along the people who can make devices cheap and connectivity affordable, addressing access issues. Those partnerships become the base of ensuring that we actually make progress in this regard. So, two things: measurement, and collaboration and partnership, so that we achieve these aspects. Thank you very much.


Rispa Arose: Thank you so much, Dr. Emma. Very, very important to consider, and well said on the two critical aspects that we should be cognizant of while thinking about gender mainstreaming. I'd like to now hand it over to Ms. Ivy.


Ivy Tuffuor Hoetu: Yes, thank you. Picking up from where Dr. Emma landed: it's true, we need the metrics and we need the data. This point also builds on the blind spots I mentioned earlier. The first thing starts from here, from the IGF. We need the communique from here to carry cross-sectoral collaboration. For the past five years, multi-stakeholder collaboration has been pushed. We are doing well, and I believe we can do better to address these gender-specific needs. Different sector regulators often operate in silos, with limited coordination with ministries responsible for gender, education, health, social development, et cetera. This fragmentation prevents a holistic approach to digital inclusion. Our focus should now shift away from having one specific ministry handle digital connectivity. Governments now know that ICT has become an enabler of all sectors, so we cannot sit as an ICT ministry or as policymakers and make ICT decisions without these sectors. We need all of them. What the IGF has done well in the recent past is to involve members of parliament, who are key legislators and lawmakers. As they were involved, I believe they have come to understand the specific issues that the IGF and other leaders address, and gradually it is making an impact, especially in my country, Ghana. In the next decade, we need to create and expand this platform, extending invitations to education ministries, finance ministries, and tax experts to also come to the IGF, come to the table. We need to involve them, closely involve them. Once they sit at the table with us and understand the issues, then when they are formulating their sector-specific policies, they may have budgets or targets for contributing to ICT. What we usually have is an infrastructure fund, typically financed through levies or contributions from the mobile network operators, the ICT sector.
But ICT goes beyond just the communications ministry or the regulators; it affects every sector. So if a fund is created, the contribution shouldn't come only from the limited field of the operators; it should come from each of the sectors that benefit from ICTs. And this can help to promote the establishment of community networks in underserved communities. Lastly, one other thing we can do in formulating ICT policies is stakeholder engagement. Dr. Emma mentioned that the ICT frameworks are there; they've been developed for years. It's the lack of implementation. Yes, they may have been prepared or drafted without extensive consultation, but it's not too late. As we leave from here, we should involve all these stakeholders, especially women-focused activists, gender activists, and consumer advocates, when we are developing policies or planning to implement them, because they represent these specific focus groups and they understand the issues. When we involve them and understand their basic needs, that is when we can formulate policies that target and address their needs specifically, not generally. When we do that, we get to know the issues that concern them, and that will enable policy formulators to draft and develop the right policies to address their specific gender needs. For instance, if it's children, ensuring their protection online. We cannot make one general, broad ICT policy that is going to address everything equally. It cannot. We are moving from equality to equitable distribution. We need to understand the specific needs and then target them specifically.
So with women online, we have those who are educated and those who are not, those who work in various sectors, and we should prepare these regulations or policies to target and address these specific needs. When we do that, it will encourage them to use the technologies, because now we've gone beyond access. Availability is there; it is access that we are struggling with. Someone mentioned meaningful connection and access, which is key. To be able to address this and make it meaningful, we need to understand the issues. So cross-sectoral collaboration in preparing ICT policy and regulation is key to addressing gender mainstreaming issues in digital connectivity. Thank you.


Rispa Arose: Very, very well said, Ivy. Thank you so much for those interventions. I wouldn't want to dilute it further. We'll just take this time to see if there's any question on the floor or online. We have less than five minutes for any questions or comments that you'd want to contribute to this conversation.


Josephine Miliza: There’s no question online. Also, just a comment that women are the backbone of every society. There are fewer women taking part in the digital connectivity strategy. All they need is motivation, guidance, and support to get a lot of them in the space to make better policy. And this is from the IGF Ghana Hub. Thank you.


Rispa Arose: Thank you, thank you so much. I would really like to extend gratitude to our panelists and speakers for your honesty, your clarity, and your valuable contributions to this discussion. I would say that we cannot tire of talking about inclusivity, because it is towards the goal of connecting the unconnected as well as bridging the digital gap.



Mathangi as Rispur

Speech speed

145 words per minute

Speech length

1271 words

Speech time

522 seconds

Community networks are locally driven but not automatically inclusive, requiring intentional gender integration from design stage

Explanation

While community networks are built and managed by local communities and represent digital sovereignty, they are not automatically inclusive just because they operate in this way. Women, especially in rural and low-income contexts, remain underrepresented in community network governance and are often underpaid or excluded from leadership roles.


Evidence

Report findings showing women’s underrepresentation in CN governance despite networks being community-run


Major discussion point

Gender Mainstreaming in Digital Connectivity Strategies


Topics

Development | Gender rights online | Infrastructure


Agreed with

– Lillian Chamorro

Agreed on

Community networks are not automatically inclusive and require intentional gender integration


Licensing frameworks should accommodate informal women’s groups and non-registered collectives

Explanation

Current licensing frameworks exclude many women-led initiatives by requiring formal registration and technical documentation. Many women’s cooperatives are rooted in trust and solidarity and operate informally, so licensing should accommodate non-registered collectives and informal women’s groups.


Evidence

Experience with Self-Employed Women’s Association in India where women managed shared hotspot systems and trained each other, but sustainability required organizational scaffolding


Major discussion point

Community Networks and Women’s Participation


Topics

Legal and regulatory | Gender rights online | Development


Disagreed with

– Waqas Hassan

Disagreed on

Approach to accommodating informal vs formal structures in community networks


Implement gender impact assessments during licensing processes and provide incentives for women-led networks

Explanation

The report recommends that ministries of ICT should require gender impact assessments during licensing processes for community networks and design simpler license procedures to reduce barriers for women accessing leadership roles. This includes providing incentives for women-led community networks.


Evidence

Policy recommendations from research conducted across Africa, Latin America, and Asia on gender integration in community-centered connectivity models


Major discussion point

Solutions and Recommendations


Topics

Legal and regulatory | Gender rights online | Development



Lillian Chamorro

Speech speed

132 words per minute

Speech length

711 words

Speech time

322 seconds

Women face barriers including care work responsibilities, low self-confidence with technology, and limited access to devices

Explanation

Women’s participation in community network implementation and maintenance is hindered by care work responsibilities that are exclusively assigned to women, especially in rural areas. Additionally, many women have low self-confidence and don’t feel capable of appropriating technological knowledge, preferring to stay away from internet and technology in general.


Evidence

Experience from accompanying community-centered connectivity initiatives in Latin America region


Major discussion point

Gender Mainstreaming in Digital Connectivity Strategies


Topics

Gender rights online | Development | Sociocultural


Agreed with

– Mathangi as Rispur

Agreed on

Community networks are not automatically inclusive and require intentional gender integration


Establish financing funds with special conditions for women’s participation and technology access

Explanation

To improve women’s participation, there should be financing funds for both technology and training with special conditions for women. This includes closing gaps in basic use of technological devices and technological literacy, and facilitating access to devices for women.


Major discussion point

Solutions and Recommendations


Topics

Development | Gender rights online | Economic


Create women’s circles for expression and dialogue while respecting community values and gender roles

Explanation

Women’s circles are recommended as a methodology where women can express concerns, opportunities, and difficulties about technology and community participation. Actions should be respectful of community environment and values, seeking dialogue and reflection on opportunities and inequalities rather than breaking with local customs.


Evidence

Experience working with community-centered connectivity initiatives showing this as an effective methodology


Major discussion point

Solutions and Recommendations


Topics

Gender rights online | Sociocultural | Development


Agreed with

– Kwaku Wanchi
– Josephine Miliza

Agreed on

Importance of local languages and community-centered approaches



Waqas Hassan

Speech speed

139 words per minute

Speech length

965 words

Speech time

415 seconds

Connectivity strategies are developed from infrastructure mindset assuming coverage equals inclusion, ignoring inherent barriers

Explanation

Connectivity strategies are primarily developed from an infrastructure mindset based on the assumption that providing access and coverage expansion is synonymous with inclusion. This assumes that providing mobile connectivity or broadband creates equal opportunity for men and women to use digital services, but ignores inherent barriers of affordability, skills, societal norms, and online safety.


Major discussion point

Gender Mainstreaming in Digital Connectivity Strategies


Topics

Infrastructure | Development | Gender rights online


Gender exclusion from digital economy has caused $1 trillion GDP loss over last decade in developing countries

Explanation

Research shows that the exclusion of women from the digital economy in developing countries has caused about $1 trillion in loss of GDP over the last decade. If this continues, another $500 billion could be lost over the next five years, making this not just a women empowerment or social issue, but an economic crisis.


Evidence

GDIP’s research proving the economic impact of women’s exclusion from digital economy


Major discussion point

Gender Mainstreaming in Digital Connectivity Strategies


Topics

Economic | Development | Gender rights online


Agreed with

– Dr. Emma Otieno

Agreed on

Economic imperative of gender inclusion in digital strategies


Pakistan’s dedicated gender strategy reduced gender gap from 38% to 25% through inclusive policymaking

Explanation

Pakistan developed a dedicated gender strategy for digital inclusion using a whole-of-government approach with high-level steering committee and six working groups. The strategy included support for community networks and mandated establishment of at least 15 community networks across Pakistan, resulting in the largest reduction in gender gap among surveyed developing countries.


Evidence

GSMA mobile gender gap report showing Pakistan’s gender gap reduction from 38% to 25%, the largest reduction among surveyed developing countries


Major discussion point

Solutions and Recommendations


Topics

Legal and regulatory | Development | Gender rights online


Agreed with

– Ivy Tuffuor Hoetu
– Dr. Emma Otieno

Agreed on

Need for whole-of-government and cross-sectoral collaboration


Disagreed with

– Mathangi as Rispur

Disagreed on

Approach to accommodating informal vs formal structures in community networks



Ivy Tuffuor Hoetu

Speech speed

121 words per minute

Speech length

1156 words

Speech time

571 seconds

Technology is perceived as gender neutral but policies fail to address diverse needs and equitable access

Explanation

There’s a blind spot in assuming technology is gender neutral and will benefit everyone equally, when in reality there are diverse needs requiring equitable access rather than equal treatment. The focus has been on overall connectivity metrics without considering how societal gender inequalities translate into digital disparities.


Evidence

Analogy of different heights requiring different tools – some people need ladders to climb trees while others don’t, illustrating need for differentiated approaches


Major discussion point

Gender Mainstreaming in Digital Connectivity Strategies


Topics

Gender rights online | Development | Human rights principles


Lack of gender-disaggregated data makes it difficult to identify specific needs and create targeted policies

Explanation

There are data deficiencies in gender-disaggregated data on access, usage patterns, and online experiences. Countries collect ICT indicators holistically without disaggregating them, making policies generally instead of on equitable basis and making it difficult to identify specific needs of different gender groups including women, children, physically challenged, and elderly.


Major discussion point

Gender Mainstreaming in Digital Connectivity Strategies


Topics

Legal and regulatory | Development | Gender rights online


Regulators operate in silos with limited coordination across ministries responsible for gender, education, and health

Explanation

Different sector regulators often operate in silos with limited coordination with ministries responsible for gender, education, health, and social development. This fragmentation prevents a holistic approach to digital inclusion and limits effective policy implementation.


Major discussion point

Regulatory and Policy Framework Challenges


Topics

Legal and regulatory | Development | Gender rights online


Agreed with

– Waqas Hassan
– Dr. Emma Otieno

Agreed on

Need for whole-of-government and cross-sectoral collaboration


Need for whole-of-government approach with cross-sectoral collaboration beyond just ICT ministries

Explanation

ICT has become an enabler to all sectors, so ICT ministries and policymakers cannot make decisions without involving other sectors. There’s a need to expand platforms like IGF to include educational ministries, finance ministries, and other sector experts, and create infrastructure funds with contributions from all sectors benefiting from ICTs, not just telecom operators.


Evidence

Ghana’s experience with involving members of parliament in IGF leading to gradual impact and better understanding of issues


Major discussion point

Regulatory and Policy Framework Challenges


Topics

Legal and regulatory | Development | Infrastructure


Involve gender activists and consumer advocates in policy development to understand specific needs

Explanation

Stakeholder engagement should extensively involve gender activists, consumer advocates, and representatives of specific focus groups when developing or implementing ICT policies. This enables policy formulators to understand specific issues and develop targeted policies that address gender-specific needs rather than making broad, general policies.


Major discussion point

Solutions and Recommendations


Topics

Legal and regulatory | Gender rights online | Development



Dr. Emma Otieno

Speech speed

158 words per minute

Speech length

892 words

Speech time

338 seconds

Policies exist but lack intentional implementation and meaningful measurement of gender inclusion outcomes

Explanation

Many countries have national gender policies that speak about digital inclusivity, but they insert gender clauses to satisfy social or compliance aspects rather than meaningfully and intentionally implementing them. What’s completely lacking is measurement, intentional tracking of outputs, outcomes, and impacts of these policies to understand pain points and improve sustainably.


Evidence

Kenya has had a national gender policy for years that addresses digital inclusivity, but still has about a 39% gender inclusivity gap


Major discussion point

Regulatory and Policy Framework Challenges


Topics

Legal and regulatory | Gender rights online | Development


Need meaningful measurement, tracking, and monitoring of gender inclusion policies with public-private partnerships

Explanation

Countries must start implementing gender intentional digital infrastructure designs with intentional measuring, tracking, monitoring of impact and feedback loops for sustainable improvement. This requires partnerships and collaborations bringing together government, private sector, financials, volunteers, and others to achieve progress, as no single entity can accomplish this alone.


Major discussion point

Solutions and Recommendations


Topics

Legal and regulatory | Development | Gender rights online


Agreed with

– Waqas Hassan

Agreed on

Economic imperative of gender inclusion in digital strategies



Rispa Arose

Speech speed

107 words per minute

Speech length

2025 words

Speech time

1131 seconds

Gender is explicitly referenced in only half of national ICT policies despite global frameworks advocating inclusion

Explanation

According to the ITU, gender is explicitly referenced in only half of national ICT policies or master plans. While frameworks such as WSIS+20, the Global Digital Compact, and SDG 5 collectively advocate for inclusive infrastructure and digital rights, gender mainstreaming in ICT policy and regulation remains limited or sometimes absent.


Evidence

ITU data showing gender referenced in only half of national ICT policies; reference to the WSIS+20, Global Digital Compact, and SDG 5 frameworks


Major discussion point

Regulatory and Policy Framework Challenges


Topics

Legal and regulatory | Gender rights online | Development



Kwaku Wanchi

Speech speed

156 words per minute

Speech length

395 words

Speech time

151 seconds

Community libraries and information centers can help bridge gaps through local language training

Explanation

Collaboration with community libraries and information centers can help bridge digital gaps by focusing on empowering youth and women in digital skills training conducted in local languages. This approach recognizes that most communities are multilingual and that skills impartation needs to be translated into local languages, with retired language teachers being ideal for this work.


Evidence

Keseke Telecenter and Library example of training youth and women in digital skills using local language; MasterCard Foundation report showing most businesses in Africa are run by women and small-medium enterprises


Major discussion point

Community Networks and Women’s Participation


Topics

Development | Multilingualism | Online education


Support collaboration with community businesses and translate digital skills training into local languages

Explanation

There should be a focus on supporting women-led small and medium enterprises by getting them online and teaching them skills in their local languages. This includes integrating appropriate technologies with local businesses, imparting digital skills in local languages, and potentially securing financing from these local businesses to create value for communities.


Evidence

MasterCard Foundation report showing most businesses in Africa are run by women and small-medium enterprises; five years of experience translating content into local languages


Major discussion point

Solutions and Recommendations


Topics

Development | Economic | Multilingualism


Agreed with

– Lillian Chamorro
– Josephine Miliza

Agreed on

Importance of local languages and community-centered approaches


J

Josephine Miliza

Speech speed

136 words per minute

Speech length

138 words

Speech time

60 seconds

Community libraries and information centers can bridge digital gaps through collaboration with community networks

Explanation

Collaboration with community libraries and information centers can help bridge gaps in digital connectivity by providing platforms for empowering youth and women in digital skills. These centers can serve as local hubs for training and support, particularly when training is conducted in local languages to ensure better understanding and accessibility.


Evidence

Peter Balaba’s comment about Keseke Telecenter and Library focusing on empowering youth and women in digital skills and training them in local language to enable understanding of basics


Major discussion point

Community Networks and Women’s Participation


Topics

Development | Online education | Multilingualism


Agreed with

– Kwaku Wanchi
– Lillian Chamorro

Agreed on

Importance of local languages and community-centered approaches


Women are the backbone of society but need motivation, guidance, and support to participate in digital connectivity strategies

Explanation

Despite women being fundamental to every society, fewer women take part in developing and implementing digital connectivity strategies. The key to increasing women’s participation lies in providing them with proper motivation, guidance, and support systems so they can enter and contribute meaningfully to this space, leading to better policy development.


Evidence

Comment from IGF Ghana Hub emphasizing women’s role as backbone of society and their potential for better policy making


Major discussion point

Gender Mainstreaming in Digital Connectivity Strategies


Topics

Gender rights online | Development | Legal and regulatory


Agreements

Agreement points

Community networks are not automatically inclusive and require intentional gender integration

Speakers

– Mathangi Mohan
– Lillian Chamorro

Arguments

Community networks are locally driven but not automatically inclusive, requiring intentional gender integration from design stage


Women face barriers including care work responsibilities, low self-confidence with technology, and limited access to devices


Summary

Both speakers agree that while community networks represent a promising approach to connectivity, they face significant challenges in achieving gender inclusion. Women encounter multiple barriers including care work responsibilities, confidence issues with technology, and structural exclusion from leadership roles.


Topics

Development | Gender rights online | Infrastructure


Need for whole-of-government and cross-sectoral collaboration

Speakers

– Waqas Hassan
– Ivy Tuffuor Hoetu
– Dr. Emma Otieno

Arguments

Pakistan’s dedicated gender strategy reduced gender gap from 38% to 25% through inclusive policymaking


Regulators operate in silos with limited coordination across ministries responsible for gender, education, and health


Need meaningful measurement, tracking, and monitoring of gender inclusion policies with public-private partnerships


Summary

All three speakers emphasize the critical importance of breaking down institutional silos and adopting comprehensive, multi-stakeholder approaches to digital gender inclusion, with Pakistan’s success story serving as a concrete example.


Topics

Legal and regulatory | Development | Gender rights online


Importance of local languages and community-centered approaches

Speakers

– Kwaku Wanchi
– Lillian Chamorro
– Josephine Miliza

Arguments

Support collaboration with community businesses and translate digital skills training into local languages


Create women’s circles for expression and dialogue while respecting community values and gender roles


Community libraries and information centers can bridge digital gaps through collaboration with community networks


Summary

Speakers agree that effective digital inclusion requires respecting local contexts, languages, and cultural values while creating supportive spaces for women to engage with technology in culturally appropriate ways.


Topics

Development | Multilingualism | Sociocultural


Economic imperative of gender inclusion in digital strategies

Speakers

– Waqas Hassan
– Dr. Emma Otieno

Arguments

Gender exclusion from digital economy has caused $1 trillion GDP loss over last decade in developing countries


Need meaningful measurement, tracking, and monitoring of gender inclusion policies with public-private partnerships


Summary

Both speakers emphasize that gender digital inclusion is not just a social justice issue but a critical economic imperative, with measurable financial consequences for countries that fail to address it.


Topics

Economic | Development | Gender rights online


Similar viewpoints

Both regulatory experts identify the gap between policy existence and effective implementation, emphasizing the critical need for better data collection, measurement, and monitoring systems to make gender inclusion policies meaningful and effective.

Speakers

– Dr. Emma Otieno
– Ivy Tuffuor Hoetu

Arguments

Policies exist but lack intentional implementation and meaningful measurement of gender inclusion outcomes


Lack of gender-disaggregated data makes it difficult to identify specific needs and create targeted policies


Topics

Legal and regulatory | Gender rights online | Development


Both speakers advocate for more flexible and supportive institutional frameworks that recognize the informal nature of many women-led initiatives and provide targeted financial and regulatory support to enable their participation in digital connectivity.

Speakers

– Mathangi Mohan
– Lillian Chamorro

Arguments

Licensing frameworks should accommodate informal women’s groups and non-registered collectives


Establish financing funds with special conditions for women’s participation and technology access


Topics

Legal and regulatory | Gender rights online | Development


Both speakers critique the prevailing assumption that technological solutions are inherently neutral and equally beneficial, arguing instead for recognition of structural barriers and the need for differentiated, equity-focused approaches.

Speakers

– Waqas Hassan
– Ivy Tuffuor Hoetu

Arguments

Connectivity strategies are developed from infrastructure mindset assuming coverage equals inclusion, ignoring inherent barriers


Technology is perceived as gender neutral but policies fail to address diverse needs and equitable access


Topics

Infrastructure | Development | Gender rights online


Unexpected consensus

Recognition of care work as legitimate labor in digital infrastructure

Speakers

– Mathangi Mohan
– Lillian Chamorro

Arguments

Implement gender impact assessments during licensing processes and provide incentives for women-led networks


Create women’s circles for expression and dialogue while respecting community values and gender roles


Explanation

There was unexpected consensus on recognizing and compensating care work and digital maintenance as legitimate forms of labor, moving beyond traditional technical roles to acknowledge the full spectrum of work required for sustainable community networks.


Topics

Development | Gender rights online | Economic


Community networks as infrastructure requiring formal policy support

Speakers

– Waqas Hassan
– Mathangi Mohan

Arguments

Pakistan’s dedicated gender strategy reduced gender gap from 38% to 25% through inclusive policymaking


Community networks are locally driven but not automatically inclusive, requiring intentional gender integration from design stage


Explanation

Unexpected consensus emerged on treating community networks not just as grassroots initiatives but as legitimate infrastructure requiring formal government policy support and integration into national digital strategies.


Topics

Infrastructure | Legal and regulatory | Development


Overall assessment

Summary

Strong consensus exists around the need for intentional, measured, and collaborative approaches to gender mainstreaming in digital connectivity, with agreement on moving beyond infrastructure-focused solutions to address structural barriers and recognize diverse forms of participation.


Consensus level

High level of consensus with complementary perspectives rather than conflicting viewpoints. The speakers represent different sectors (civil society, government, academia, community organizations) but share fundamental agreement on core challenges and solution approaches. This strong consensus suggests significant potential for coordinated action and policy development, with the main challenge being implementation rather than conceptual disagreement.


Differences

Different viewpoints

Approach to accommodating informal vs formal structures in community networks

Speakers

– Mathangi Mohan
– Waqas Hassan

Arguments

Licensing frameworks should accommodate informal women’s groups and non-registered collectives


Pakistan’s dedicated gender strategy reduced gender gap from 38% to 25% through inclusive policymaking


Summary

Mathangi advocates for accommodating informal, non-registered women’s collectives in licensing frameworks, emphasizing trust-based cooperatives. Waqas presents Pakistan’s success through formal institutional structures with high-level steering committees and official working groups, suggesting more structured approaches work better.


Topics

Legal and regulatory | Gender rights online | Development


Unexpected differences

Role of traditional community structures in gender inclusion

Speakers

– Lillian Chamorro
– Ivy Tuffuor Hoetu

Arguments

Create women’s circles for expression and dialogue while respecting community values and gender roles


Technology is perceived as gender neutral but policies fail to address diverse needs and equitable access


Explanation

Unexpectedly, there’s a subtle disagreement on how to handle traditional gender roles. Lillian advocates for working within existing community values and not ‘breaking with local customs,’ while Ivy pushes for moving from equality to equity, which may require challenging traditional assumptions about gender neutrality.


Topics

Gender rights online | Sociocultural | Human rights principles


Overall assessment

Summary

The discussion shows remarkable consensus on the problem (gender gaps in digital connectivity) and general solutions (better policies, data, and inclusion), with disagreements mainly on implementation approaches – formal vs informal structures, working within vs challenging traditional frameworks, and policy reform vs grassroots organizing.


Disagreement level

Low to moderate disagreement level. The speakers largely agree on goals and broad strategies, with differences primarily in methodology and emphasis. This suggests a mature field where practitioners have identified common challenges but are still developing best practices for implementation. The disagreements are constructive and complementary rather than conflicting, indicating potential for integrated approaches that combine different methodologies.



Takeaways

Key takeaways

Gender mainstreaming in digital connectivity strategies requires intentional design from inception rather than treating gender as an afterthought


Community networks, while locally driven, are not automatically inclusive and need deliberate gender integration policies


The exclusion of women from the digital economy has caused $1 trillion in GDP loss over the last decade in developing countries, making this an economic crisis rather than just a social issue


Technology is often perceived as gender neutral, but policies fail to address diverse needs and equitable access requirements


Successful gender inclusion requires a whole-of-government approach with cross-sectoral collaboration beyond just ICT ministries


Pakistan’s dedicated gender strategy successfully reduced the gender gap from 38% to 25% through inclusive policymaking processes


Women face multiple barriers including care work responsibilities, low technological self-confidence, limited device access, and exclusion from decision-making roles


Lack of gender-disaggregated data makes it difficult to identify specific needs and create targeted policies


Many countries have gender policies but lack intentional implementation and meaningful measurement of outcomes


Resolutions and action items

Implement gender impact assessments during licensing processes for community networks


Create financing funds with special conditions for women’s participation in technology access and training


Establish women’s circles for dialogue and expression while respecting community values


Accommodate informal women’s groups and non-registered collectives in licensing frameworks


Involve gender activists and consumer advocates in policy development processes


Translate digital skills training into local languages using retired language teachers


Create infrastructure funds with contributions from all sectors benefiting from ICTs, not just telecom operators


Establish meaningful measurement, tracking, and monitoring systems for gender inclusion policies


Promote cross-sectoral collaboration by inviting educational, finance, and health ministries to IGF discussions


Support collaboration between community networks and community libraries/information centers


Unresolved issues

How to effectively redistribute care work responsibilities that prevent women’s participation in technical roles


Specific mechanisms for ensuring sustainability of women-led connectivity models without external scaffolding


Methods for balancing respect for traditional community gender roles while promoting women’s technical participation


Standardized approaches for collecting and analyzing gender-disaggregated data across different countries and contexts


Concrete funding mechanisms and amounts needed to support women-led community networks at scale


How to address the technical complexity barrier that discourages women from participating in community network infrastructure


Suggested compromises

Diversify roles in community networks with women in technical areas and men in administrative/financial aspects while recognizing care work value


Adapt methodologies that transcend purely technical approaches to include women’s interests and community care needs


Seek dialogue and reflection spaces on gender opportunities rather than breaking with local cultural values


Move from equality to equitable distribution by understanding and targeting specific gender needs rather than general broad policies


Recognize digital caretaking as a form of labor deserving of compensation while maintaining community cooperative spirit


Thought provoking comments

If we don’t address or even consider the gendered digital divide, then you’re not really closing the gap, but more like building the infrastructure on top of the inequality and marginalization that is already existing.

Speaker

Mathangi Mohan


Reason

This comment reframes the entire approach to digital infrastructure development by highlighting that gender-neutral solutions perpetuate existing inequalities rather than solving them. It challenges the common assumption that providing access equals inclusion.


Impact

This insight became a foundational theme that other speakers built upon throughout the discussion. It shifted the conversation from simply identifying problems to understanding the systemic nature of digital exclusion and influenced subsequent speakers to emphasize intentional, targeted approaches rather than broad connectivity strategies.


The connectivity strategies are still primarily developed from an infrastructure mindset… that mindset is based on the assumption that if you provide access and expansion, coverage expansion, that is somehow synonymous with inclusion… infrastructure development does not mean equitable access.

Speaker

Waqas Hassan


Reason

This comment identifies a fundamental flaw in policy thinking – the conflation of physical access with meaningful inclusion. It exposes the gap between technical solutions and social realities, backed by concrete economic data ($1 trillion GDP loss).


Impact

This observation prompted the regulatory speakers (Dr. Emma and Ivy) to acknowledge policy failures and led to deeper discussions about measurement, intentionality, and the need for whole-of-government approaches. It elevated the conversation from technical challenges to systemic policy critique.


We have not been intentional in terms of putting in these clauses in the policy regulation strategies, we are putting them in as aspects of satisfying the social or compliance aspects, but not meaningfully and intentionally putting them… What has lacked completely is measurement, the intentional and meaningful tracking.

Speaker

Dr. Emma Otieno


Reason

This is a remarkably honest admission from a regulatory perspective about the performative nature of gender inclusion in policies. It distinguishes between compliance-driven inclusion and genuine commitment, while identifying the critical gap in accountability mechanisms.


Impact

This candid assessment validated the concerns raised by civil society speakers and shifted the discussion toward concrete solutions. It led to more specific recommendations about cross-sectoral collaboration and stakeholder engagement, moving the conversation from problem identification to actionable solutions.


We are moving from equality to equitable distribution. We need to understand the specific needs and then target them specifically… You cannot give us the same level of a tree to climb or something. One of them will need a ladder to be able to climb the tree.

Speaker

Ivy Tuffuor Hoetu


Reason

This vivid metaphor crystallizes the difference between equal treatment and equitable outcomes, making complex policy concepts accessible. It challenges the one-size-fits-all approach to digital inclusion and emphasizes the need for differentiated solutions.


Impact

This analogy provided a clear framework that other speakers could reference and build upon. It helped consolidate the discussion around the need for targeted, differentiated approaches and influenced the final recommendations toward more nuanced, context-specific solutions.


The licensing frameworks for these CNs need to very explicitly accommodate non-registered collectives, informal women’s group… because these are mostly cooperatives rooted in trust and solidarity and just assuming that it should be a registered entity and a technical documentation… is also going to exclude a lot of groups.

Speaker

Mathangi Mohan


Reason

This comment reveals how formal regulatory requirements can inadvertently exclude the very communities they aim to serve. It highlights the tension between institutional frameworks and grassroots organizing, particularly for women-led initiatives.


Impact

This insight prompted deeper discussion about the need for flexible regulatory approaches and influenced recommendations about simplifying licensing procedures. It connected the technical aspects of community networks with the social realities of women’s organizing patterns.


Overall assessment

These key comments fundamentally transformed the discussion from a surface-level exploration of gender gaps to a sophisticated analysis of systemic barriers and solutions. The conversation evolved through three distinct phases: first, establishing that gender-neutral approaches perpetuate inequality; second, diagnosing the infrastructure-focused mindset and compliance-driven policies as root causes; and third, developing concrete recommendations for intentional, measured, and equitable approaches. The most impactful comments came from speakers who challenged fundamental assumptions about digital inclusion, provided honest assessments of current failures, and offered clear conceptual frameworks for moving forward. The discussion’s strength lay in how speakers built upon each other’s insights, creating a comprehensive analysis that connected grassroots experiences with policy frameworks and regulatory realities.


Follow-up questions

How can community networks better integrate with existing community libraries and information centers to bridge digital gaps?

Speaker

Peter Balaba (online comment)


Explanation

This suggests exploring partnerships between community networks and established community institutions like libraries and telecenters to leverage existing infrastructure and trust relationships for more effective digital inclusion.


How can local languages be better integrated into digital skills training and community network operations?

Speaker

Kwaku Wanchi


Explanation

This addresses the need for multilingual approaches to digital inclusion, particularly involving retired language teachers and ensuring technology training is accessible in local languages rather than only dominant languages.


How can small and medium enterprises (SMEs) led by women be better integrated into community network ecosystems?

Speaker

Kwaku Wanchi


Explanation

This explores the potential for leveraging existing women-led businesses as anchor points for community networks, potentially providing both funding and local expertise for sustainable connectivity solutions.


What specific methodologies work best for building women’s confidence in technical roles within community networks?

Speaker

Lillian Chamorro (implied)


Explanation

While challenges around low self-confidence were identified, specific evidence-based approaches for addressing this barrier need further investigation and documentation.


How can licensing frameworks be redesigned to accommodate informal women’s groups and non-registered collectives in community network governance?

Speaker

Mathangi Mohan


Explanation

Current regulatory frameworks may exclude informal women’s cooperatives that operate on trust and solidarity rather than formal registration, requiring policy research on alternative licensing approaches.


What are the most effective models for recognizing and compensating digital care work performed by women in community networks?

Speaker

Mathangi Mohan and Lillian Chamorro


Explanation

Both speakers highlighted the need to recognize women’s unpaid labor in maintaining community networks, but specific compensation models and their sustainability need further research.


How can gender-disaggregated data collection be systematically implemented across different countries’ ICT policies?

Speaker

Ivy Tuffuor Hoetu and Dr. Emma Otieno


Explanation

Both regulators emphasized the lack of gender-disaggregated data as a major barrier, but specific methodologies for consistent data collection and analysis across different regulatory contexts need development.


What are the most effective cross-sectoral collaboration models for integrating gender considerations across different government ministries?

Speaker

Ivy Tuffuor Hoetu


Explanation

The need for collaboration beyond ICT ministries was identified, but specific frameworks for coordinating gender mainstreaming across education, finance, health, and other sectors require further research.


How can the Pakistan model of dedicated gender digital inclusion strategy be adapted to other developing country contexts?

Speaker

Waqas Hassan (implied)


Explanation

While Pakistan’s success in reducing the gender gap was highlighted, research is needed on how this whole-of-government approach can be contextualized for different political, economic, and social environments.


What are the most effective funding mechanisms that allow for experimentation and iteration in women-led community networks?

Speaker

Mathangi Mohan


Explanation

Traditional funding models may not accommodate the experimental nature of community networks, requiring research into alternative funding approaches that support learning and adaptation.


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.